Dataset columns: id (int64), url (string), text (string), source (string), categories (string, 160 classes), token_count (int64)
1,887,433
https://en.wikipedia.org/wiki/Visual%20phototransduction
Visual phototransduction is the sensory transduction process of the visual system by which light is detected by photoreceptor cells (rods and cones) in the vertebrate retina. A photon is absorbed by a retinal chromophore (each bound to an opsin), which initiates a signal cascade through several intermediate cells, then through the retinal ganglion cells (RGCs) comprising the optic nerve. Overview Light enters the eye, passes through the optical media, then the inner neural layers of the retina before finally reaching the photoreceptor cells in the outer layer of the retina. The light may be absorbed by a chromophore bound to an opsin, which photoisomerizes the chromophore, initiating both the visual cycle, which "resets" the chromophore, and the phototransduction cascade, which transmits the visual signal to the brain. The cascade begins with graded polarisation (an analog signal) of the excited photoreceptor cell, as its membrane potential hyperpolarizes from its dark resting potential of about −40 mV in proportion to the light intensity. In darkness, the photoreceptor cells continually release glutamate at the synaptic terminal to maintain this depolarized potential. The transmitter release rate falls as the cell hyperpolarizes with increasing light intensity. Each synaptic terminal makes up to 500 contacts with horizontal cells and bipolar cells. These intermediate cells (along with amacrine cells) perform comparisons of photoreceptor signals within a receptive field, but their precise functionalities are not well understood. The signal remains as a graded polarization in all cells until it reaches the RGCs, where it is converted to an action potential and transmitted to the brain. Photoreceptors The photoreceptor cells involved in vertebrate vision are the rods, the cones, and the photosensitive ganglion cells (ipRGCs). These cells contain a chromophore (11-cis-retinal, the aldehyde of vitamin A1 and the light-absorbing portion) that is bound to a cell membrane protein, opsin. Rods are responsible for vision at low light intensities and for contrast detection. Because all rods share the same spectral response, no color information can be deduced from rods alone, which is why vision in low light is essentially colorless. Cones, on the other hand, are of different kinds with different spectral responses, such that color can be perceived through comparison of the outputs of the different kinds of cones. Each cone type responds best to certain wavelengths, or colors, of light because each type has a slightly different opsin. The three types of cones are L-cones, M-cones and S-cones, which respond optimally to long wavelengths (reddish color), medium wavelengths (greenish color), and short wavelengths (bluish color) respectively. Humans have trichromatic photopic vision consisting of three opponent process channels that enable color vision. Rod photoreceptors are the most common cell type in the retina and develop quite late. Most cells become postmitotic before birth, but differentiation occurs after birth. In the first week after birth, cells mature and the eye becomes fully functional at the time of opening. The visual pigment rhodopsin (rho) is the first known sign of differentiation in rods. Transduction process To understand the photoreceptor's response to light intensities, it is necessary to understand the roles of different currents. There is an ongoing outward potassium current through nongated K+-selective channels. This outward current tends to hyperpolarize the photoreceptor toward around −70 mV (the equilibrium potential for K+). 
There is also an inward sodium current carried by cGMP-gated sodium channels. This "dark current" depolarizes the cell to around −40 mV. This is significantly more depolarized than most other neurons. A high density of Na+-K+ pumps enables the photoreceptor to maintain a steady intracellular concentration of Na+ and K+. When light intensity increases, the potential of the membrane decreases (hyperpolarization). Because as the intensity increases, the release of the stimulating neurotransmitter glutamate of the photoreceptors is reduced. When light intensity decreases, that is, in the dark environment, glutamate release by photoreceptors increases. This increases the membrane potential and produces membrane depolarization. In the dark Photoreceptor cells are unusual cells in that they depolarize in response to absence of stimuli or scotopic conditions (darkness). In photopic conditions (light), photoreceptors hyperpolarize to a potential of −60 mV. In the dark, cGMP levels are high and keep cGMP-gated sodium channels open allowing a steady inward current, called the dark current. This dark current keeps the cell depolarized at about −40 mV, leading to glutamate release which inhibits excitation of neurons. The depolarization of the cell membrane in scotopic conditions opens voltage-gated calcium channels. An increased intracellular concentration of Ca2+ causes vesicles containing glutamate, a neurotransmitter, to merge with the cell membrane, therefore releasing glutamate into the synaptic cleft, an area between the end of one cell and the beginning of another neuron. Glutamate, though usually excitatory, functions here as an inhibitory neurotransmitter. In the cone pathway, glutamate: Hyperpolarizes on-center bipolar cells. Glutamate that is released from the photoreceptors in the dark binds to metabotropic glutamate receptors (mGluR6), which, through a G-protein coupling mechanism, causes non-specific cation channels in the cells to close, thus hyperpolarizing the bipolar cell. Depolarizes off-center bipolar cells. Binding of glutamate to ionotropic glutamate receptors results in an inward cation current that depolarizes the bipolar cell. In the light In summary: Light closes cGMP-gated sodium channels, reducing the influx of both Na+ and Ca2+ ions. Stopping the influx of Na+ ions effectively switches off the dark current. Reducing this dark current causes the photoreceptor to hyperpolarise, which reduces glutamate release which thus reduces the inhibition of retinal nerves, leading to excitation of these nerves. This reduced Ca2+ influx during phototransduction enables deactivation and recovery from phototransduction, as discussed below in . A photon interacts with a retinal molecule in an opsin complex in a photoreceptor cell. The retinal undergoes isomerisation, changing from the 11-cis-retinal to the all-trans-retinal configuration. Opsin therefore undergoes a conformational change to metarhodopsin II. Metarhodopsin II activates a G protein known as transducin. This causes transducin to dissociate from its bound GDP, and bind GTP; then the alpha subunit of transducin dissociates from the beta and gamma subunits, with the GTP still bound to the alpha subunit. The alpha subunit-GTP complex activates phosphodiesterase, also known as PDE6. It binds to one of two regulatory subunits of PDE (which itself is a tetramer) and stimulates its activity. PDE hydrolyzes cGMP, forming GMP. This lowers the intracellular concentration of cGMP and therefore the sodium channels close. 
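The balance of currents described above can be illustrated with a toy steady-state calculation. The sketch below is not a physiological model: it simply treats the membrane as two parallel conductances, the K+ leak from the text (equilibrium potential about −70 mV) and the cGMP-gated "dark" conductance, assumed here to behave as a nonselective cation channel with a reversal potential near 0 mV, and shows how closing the cGMP-gated channels in the light moves the potential from roughly −40 mV toward −70 mV. The conductance values are illustrative assumptions, not measured quantities.

# Toy two-conductance sketch of the photoreceptor membrane potential.
# Assumptions: E_K = -70 mV (from the text), E_dark = 0 mV (typical for a
# nonselective cation channel), and illustrative relative conductances.

def membrane_potential(g_k, g_dark, e_k=-70.0, e_dark=0.0):
    """Steady-state potential of two parallel conductances (conductance-weighted average)."""
    return (g_k * e_k + g_dark * e_dark) / (g_k + g_dark)

g_k = 1.0           # K+ leak conductance (arbitrary units)
g_dark_open = 0.75  # cGMP-gated conductance with channels open (darkness)
g_dark_shut = 0.0   # cGMP-gated conductance after light closes the channels

print(membrane_potential(g_k, g_dark_open))  # about -40 mV: depolarized "dark" state
print(membrane_potential(g_k, g_dark_shut))  # -70 mV: hyperpolarized response to light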
Closure of the sodium channels causes hyperpolarization of the cell due to the ongoing efflux of potassium ions. Hyperpolarization of the cell causes voltage-gated calcium channels to close. As the calcium level in the photoreceptor cell drops, the amount of the neurotransmitter glutamate that is released by the cell also drops. This is because calcium is required for the glutamate-containing vesicles to fuse with cell membrane and release their contents (see SNARE proteins). A decrease in the amount of glutamate released by the photoreceptors causes depolarization of on-center bipolar cells (rod and cone On bipolar cells) and hyperpolarization of cone off-center bipolar cells. Deactivation of the phototransduction cascade In light, low cGMP levels close Na+ and Ca2+ channels, reducing intracellular Na+ and Ca2+. During recovery (dark adaptation), the low Ca2+ levels induce recovery (termination of the phototransduction cascade), as follows: Low intracellular Ca2+ causes Ca2+ to dissociate from guanylate cyclase activating protein (GCAP). The liberated GCAP ultimately restores depleted cGMP levels, which re-opens the cGMP-gated cation channels (restoring dark current). Low intracellular Ca2+ causes Ca2+ to dissociate from GTPase-activating protein (GAP), also known as regulator of G protein signaling. The liberated GAP deactivates transducin, terminating the phototransduction cascade (restoring dark current). Low intracellular Ca2+ makes intracellular Ca-recoverin-RK dissociate into Ca2+ and recoverin and rhodopsin kinase (RK). The liberated RK then phosphorylates the Metarhodopsin II, reducing its binding affinity for transducin. Arrestin then completely deactivates the phosphorylated-metarhodopsin II, terminating the phototransduction cascade (restoring dark current). Low intracellular Ca2+ make the Ca2+/calmodulin complex within the cGMP-gated cation channels more sensitive to low cGMP levels (thereby, keeping the cGMP-gated cation channel open even at low cGMP levels, restoring dark current) In more detail: GTPase Accelerating Protein (GAP) of RGS (regulators of G protein signaling) interacts with the alpha subunit of transducin, and causes it to hydrolyse its bound GTP to GDP, and thus halts the action of phosphodiesterase, stopping the transformation of cGMP to GMP. This deactivation step of the phototransduction cascade (the deactivation of the G protein transducer) was found to be the rate limiting step in the deactivation of the phototransduction cascade. In other words: Guanylate Cyclase Activating Protein (GCAP) is a calcium binding protein, and as the calcium levels in the cell have decreased, GCAP dissociates from its bound calcium ions, and interacts with Guanylate Cyclase, activating it. Guanylate Cyclase then proceeds to transform GTP to cGMP, replenishing the cell's cGMP levels and thus reopening the sodium channels that were closed during phototransduction. Finally, Metarhodopsin II is deactivated. Recoverin, another calcium binding protein, is normally bound to Rhodopsin Kinase when calcium is present. When the calcium levels fall during phototransduction, the calcium dissociates from recoverin, and rhodopsin kinase is released and phosphorylates metarhodopsin II, which decreases its affinity for transducin. Finally, arrestin, another protein, binds the phosphorylated metarhodopsin II, completely deactivating it. Thus, finally, phototransduction is deactivated, and the dark current and glutamate release is restored. 
It is this pathway, where Metarhodopsin II is phosphorylated and bound to arrestin and thus deactivated, which is thought to be responsible for the S2 component of dark adaptation. The S2 component represents a linear section of the dark adaptation function present at the beginning of dark adaptation for all bleaching intensities. Visual cycle The visual cycle occurs via G-protein coupled receptors called retinylidene proteins, which consist of a visual opsin and the chromophore 11-cis-retinal. The 11-cis-retinal is covalently linked to the opsin receptor via a Schiff base. When it absorbs a photon, 11-cis-retinal undergoes photoisomerization to all-trans-retinal, which changes the conformation of the opsin GPCR, leading to signal transduction cascades that cause closure of the cyclic GMP-gated cation channels and hyperpolarization of the photoreceptor cell. Following photoisomerization, all-trans-retinal is released from the opsin protein and reduced to all-trans-retinol, which travels to the retinal pigment epithelium to be "recharged". It is first esterified by lecithin retinol acyltransferase (LRAT) and then converted to 11-cis-retinol by the isomerohydrolase RPE65. The isomerase activity of RPE65 has been shown; it is uncertain whether it also acts as the hydrolase. Finally, it is oxidized to 11-cis-retinal before traveling back to the photoreceptor cell outer segment, where it is again conjugated to an opsin to form new, functional visual pigment (retinylidene protein), namely photopsin or rhodopsin. In invertebrates Visual phototransduction in invertebrates such as the fruit fly differs from the vertebrate process described so far. The primary basis of invertebrate phototransduction is the PI(4,5)P2 cycle. Here, light induces a conformational change in rhodopsin, converting it into metarhodopsin. This promotes dissociation of the G-protein complex. The alpha subunit of this complex activates the enzyme phospholipase C (PLC-beta), which hydrolyzes PIP2 into DAG. This hydrolysis leads to opening of TRP channels and an influx of calcium. Invertebrate photoreceptor cells differ morphologically and physiologically from their vertebrate counterparts. Visual stimulation in vertebrates causes a hyperpolarization (weakening) of the photoreceptor membrane potential, whereas invertebrate photoreceptors depolarize with increasing light intensity. Single-photon events produced under identical conditions in invertebrates differ from those in vertebrates in time course and size. Likewise, multi-photon events are longer than single-photon responses in invertebrates, whereas in vertebrates the multi-photon response is similar to the single-photon response. Both groups show light adaptation, during which single-photon events become smaller and faster. Calcium plays an important role in this adaptation. Light adaptation in vertebrates is primarily attributable to calcium feedback, while in invertebrates cyclic AMP is another control on dark adaptation. References External links Visual pigments and visual transduction at med.utah.edu Transduction of Light Prezi A General Overview on Visual Perception at brynmawr.edu Visual system Nervous system Sensory receptors Metabolism
Visual phototransduction
Chemistry,Biology
3,259
4,981,862
https://en.wikipedia.org/wiki/Cytisine
Cytisine, also known as baptitoxine, cytisinicline, or sophorine, is an alkaloid that occurs naturally in several plant genera, such as Laburnum and Cytisus of the family Fabaceae. It has been used medically to help with smoking cessation. It has been found effective in several randomized clinical trials, including in the United States and New Zealand, and is being investigated in additional trials in the United States and a non-inferiority trial in Australia in which it is being compared head-to-head with the smoking cessation aid varenicline (sold in the United States as Chantix). It has also been used entheogenically via mescalbeans by some Native American groups, historically in the Rio Grande Valley predating even peyote. Sources Cytisine is extracted from the seeds of Cytisus laburnum L. (Golden Rain acacia), and is found in several genera of the subfamily Faboideae of the family Fabaceae, including Laburnum, Anagyris, Thermopsis, Cytisus, Genista, Retama and Sophora. Cytisine is also present in Gymnocladus of the subfamily Caesalpinioideae. Uses Smoking cessation Cytisine has been available in post-Soviet states for more than 40 years as an aid to smoking cessation under the brand name Tabex from the Bulgarian pharmaceutical company Sopharma AD. In 1961, Bulgarian pharmacist Strashimir Ingilizov synthesized Tabex using the alkaloid Cytisine which was derived from the seeds of the yellow acacia (Cytisus laburnum), a European decorative shrub prevalent in Bulgaria and commonly referred to as "golden rain". It was first marketed in Bulgaria in 1964 and then became widely available in the Soviet Union. In Poland, it is sold under the brand name Desmoxan, and it is also available in Canada under the brand name Cravv. Its molecular structure has some similarity to that of nicotine, and it has similar pharmacological effects. Like the smoking cessation aid varenicline, cytisine is a partial agonist of nicotinic acetylcholine receptors (nAChRs). Cytisine has a short half-life of 4.8 hours. As a result, the extract provides smokers with satisfaction similar to smoking a cigarette, alleviating the urge to smoke and reducing the severity of nicotine withdrawal symptoms, while also reducing the reward experience of any cigarettes smoked. In 2011, a randomized controlled trial with 740 patients found cytisine improved 12-month abstinence from nicotine from 2.4% with placebo to 8.4% with cytisine. A 2013 meta-analysis of eight studies demonstrated that cytisine has similar effectiveness to varenicline but with substantially lower side effects. A 2014 systematic review and economic evaluation concluded that cytisine was more likely to be cost-effective for smoking cessation than varenicline. Recreational Plants containing cytisine, including the scotch broom and mescalbean, have also been used recreationally. Positive effects are reported to include a nicotine-like intoxication. Reagent for organic chemistry (−)-Cytisine extracted from Laburnum anagyroides seeds was used as a starting material for the preparation of "(+)- sparteine surrogate", for the preparation of enantiomerically enriched lithium anions of opposite stereochemistry to those anions obtained from sparteine. Toxicity Cytisine has been found to interfere with breathing and cause death in test mice; i.v. in mice is about 2 mg/kg. Cytisine is also teratogenic. Māmane (Sophora chrysophylla) can contain amounts of cytisine that are lethal to most animals. 
The palila (Loxioides bailleui, a bird), Uresiphita polygonalis virescens and Cydia species (moths), and possibly sheep and goats are not affected by the toxin for various reasons, and consume māmane, or parts of it, as food. U. p. virescens caterpillars are possibly able to sequester the cytisine to give themselves protection from predation; they have aposematic coloration which would warn off potential predators. References External links Nicotinic agonists Alkaloids found in Fabaceae Lactams Bridged heterocyclic compounds Teratogens Plant toxins Nitrogen heterocycles Quinolizidine alkaloids Heterocyclic compounds with 3 rings
Cytisine
Chemistry
986
55,148,523
https://en.wikipedia.org/wiki/Environmental%20impact%20of%20silver%20nanoparticles
In 2015, 251 million tubes of toothpaste were sold in the United States. A single tube holds roughly 170 grams of toothpaste, so approximately 43 kilotonnes of toothpaste get washed into the water systems annually. Toothpaste contains silver nanoparticles, also known as nanosilver or AgNPs, among other compounds. Each tube of toothpaste contains approximately 91 mg of silver nanoparticles, with approximately 3.9 tonnes of silver nanoparticles entering the environment annually. Silver nanoparticles are not entirely cleared from the water during the wastewater treatment process, possibly leading to detrimental environmental effects. Silver nanoparticles in toothpaste Silver nanoparticles are used for catalyzing chemical reactions, Raman imaging, and antimicrobial sterilization. Along with its antimicrobial properties, its low mammalian cell toxicity makes these particles a common addition to consumer products. Washing textiles embedded with silver nanoparticles results in the oxidation and transformation of metallic Ag into AgCl. Silver nanoparticles have different physicochemical characteristics from the free silver ion, Ag+ and possess increased optical, electromagnetic, and catalytic properties. Particles with one dimension of 100 nm or less can generate reactive oxygen species. Smaller particles less than 10 nm may pass through cellular membranes and accumulate within the cell. Silver nanoparticles were also found to attach to cellular membranes, eventually dissipating the proton motive force, leading to cell death. Silver nanoparticles that are larger than the openings of membrane channel proteins can easily clog channels, leading to the disruption of membrane permeability and transport. However, the antimicrobial effectiveness of silver nanoparticles has been shown to decrease when dissolved in liquid media. The free silver ion are potentially toxic to bacteria and planktonic species in the water. The positively charged silver ion can also attach to the negatively charged cell walls of bacteria, leading to deactivation of cellular enzymes, disruption of membrane permeability, and eventually, cell lysis and death. However, its toxicity to microorganisms is not overtly observed since the free silver ion is found in low concentrations in wastewater treatment systems and the natural environment due to its complexation with ligands such as chloride, sulfide, and thiosulfate. Wastewater treatment A majority of silver nanoparticles in consumer products go down the drain and are eventually released into sewer systems and reach wastewater treatment plants. Primary screening and grit removal in wastewater treatment does not completely filter out silver nanoparticles, and coagulation treatment may lead to further condensation into wastewater sludge. The secondary wastewater treatment process involves suspended growth systems which allow bacteria to decompose organic matter within the water. Any silver nanoparticles still suspended in the water may collect on these microbes, potentially killing them due to their antimicrobial effects. After passing through both treatment processes, the silver nanoparticles are eventually deposited into the environment. A majority of the submerged portions of wastewater treatment plants are anoxic and rich in sulfur. During the wastewater treatment process, silver nanoparticles either remain the same, are converted into free silver ions, complex with ligands, or agglomerate. 
Silver nanoparticles can also attach to wastewater biosolids found in both the sludge and the effluent. Silver ions in wastewater are removed efficiently because of their strong complexation with chloride or sulfide. A majority of the silver found in wastewater treatment plant effluent is associated with reduced sulfur as organic thiol groups and inorganic sulfides. Silver nanoparticles also tend to accumulate in activated sludge, and the dominant form of the silver found in sewage sludge is Ag2S. Therefore, most of the silver found in wastewater treatment plants is in the form of silver nanoparticles or silver precipitates such as Ag2S and AgCl. The amount of silver precipitate formed depends on silver ion release, which increases with increasing dissolved oxygen concentration and decreasing pH. Silver ions account for approximately 1% of total silver after silver nanoparticles are suspended in aerated water. In anoxic wastewater treatment environments, silver ion release is therefore often negligible, and most of the silver nanoparticles in wastewater remain in the original silver nanoparticle form. The presence of natural organic matter can also decrease oxidative dissolution rates and therefore the release rate of free silver ions. The slow oxidation of silver nanoparticles may enable new pathways for their transfer into the environment. Transformation in the environment The silver nanoparticles that pass through wastewater treatment plants undergo transformations in the environment through changes in aggregation state, oxidation state, precipitation of secondary phases, or sorption of organic species. These transformations can result in the formation of colloidal solutions. Each of these new species potentially has toxic effects which have yet to be fully examined. Most silver nanoparticles in products have an organic shell structure around a core of Ag0. This shell is often created with carboxylic acid functional groups, usually using citrate, leading to stabilization through adsorption or covalent attachment of organic compounds. In seawater, glutathione reacts with citrate to form a thioester via esterification. Thioesters exhibit electrosteric repulsive forces due to amine functional groups and their size, which prevents aggregation. These electrostatic repulsive forces are weakened by counterions in solution, such as Ca2+ found in seawater. Ca2+ ions are naturally found in seawater due to the weathering of calcareous rocks, and allow for dissolution of the oxide-coated particle at low electrolyte concentrations. This leads to the aggregation of silver nanoparticles onto thioesters in seawater. When aggregation occurs, the silver nanoparticles lose microbial toxicity, but have greater exposure in the environment for larger organisms. These effects have not been completely identified, but may be hazardous to an organism's health via biological magnification. Chemical reactions in seawater Silver nanoparticles are thermodynamically unstable in oxic environments. In seawater, silver oxide is not thermodynamically favored when chloride and sulfur are present. On the surface, where O2 is present in much greater quantities than chloride or sulfur, silver reacts to form a silver oxide surface layer. This oxidation has been shown to occur in nanoparticles as well, despite their shell. Dissolution of Ag2O in water: Ag2O + H2O → 2Ag+ + 2OH− The nano-size of the particles aids in oxidation, since their high surface-area-to-volume ratio increases their reactivity. 
The silver oxide layer easily dissolves in water because of its low Ksp value of 4×10−11. Possible Oxidation Reactions of Silver:
Ag + O2 → Ag+ + O2−
4Ag + O2 → 4Ag+ + 2O2−
In aerobic, acidic seawater, oxidation of Ag can occur through the following reaction. Oxidation of Silver in Seawater:
2Ag(s) + ½O2(aq) + 2H+(aq) ⇌ 2Ag+(aq) + H2O(l)
The formation of these Ag+ ions is a concern for environmental health, as these ions freely interact with other organic compounds, such as humic acids, and disrupt the normal balance of an ecosystem. These Ag+ ions will also react with Cl− to form complexes such as AgCl2−, AgCl32−, and AgCl43−, which are bioavailable forms of silver that are potentially more toxic to bacteria and fish than silver nanoparticles. The etched structure of silver nanoparticles provides the chloride with the preferred atomic steps for nucleation to occur. Reaction of Silver with Chloride:
Ag+ + Cl− → AgCl
AgCl(s) + Cl−(aq) → AgCl2−(aq)
Ag has also been shown to readily react with sulfur in water. Free Ag+ ions will react with H2S in the water to form the precipitate Ag2S. Silver and Sulfur Reaction in Seawater:
2Ag(aq) + H2S(aq) → Ag2S(s) + H2(aq)
H2S is not the only source of sulfur that Ag will readily bind to. Organosulfur compounds, which are produced by aquatic organisms, form extremely stable sulfide complexes with silver. Silver outcompetes other metals for the available sulfide, leading to an overall decrease in bioavailable sulfur in the community. Thus, the formation of Ag2S limits the amount of bioavailable sulfur and contributes to a reduction in toxicity of silver nanoparticles to nitrifying bacteria. 
Of the potential species formed in seawater, such as Ag2S and Ag2CO3, AgCl is the most thermodynamically favored due to its stability, solubility, and the abundance of Cl− in seawater. Research has shown that partially oxidized nanoparticles may be more toxic than those that are freshly prepared. It was also found that Ag dissolutes more in solution when the pH is low and bleaching has occurred. This effect, coupled with ocean acidification and increasing coral reef bleaching events, leads to a compounding effect of Ag accumulation in the global marine ecosystem. These free formed Ag+ ions can accumulate and block the regulation of Na+ and Cl− ion exchange within the gills of fish, leading to blood acidosis which is fatal if left unchecked. Additionally, fish can accumulate Ag through their diet. Phytoplankton, which form the base level of aquatic food chains, can absorb and collect silver from their surroundings. As fish eat phytoplankton, the silver accumulates within their circulatory system, which has been shown to negatively impact embryonic fish, causing spinal cord deformities and cardiac arrhythmia. The other class of organisms heavily affected by silver nanoparticles is bivalves. Filter feeding bivalves accumulate nanoparticles to concentrations 10,000 times greater than was added to seawater, and Ag+ ions are proven to be extremely toxic to them. The base of complex food webs consists of microbes, and these organisms are most heavily impacted by nanoparticles. These effects cascade into the problems that have now reached an observable scale. As global temperatures rise and oceanic pH drops, some species, such as oysters, will be even more susceptible to the negative impacts of nanoparticles as they are stressed. See also Environmental impact of pharmaceuticals and personal care products Plastic resin pellet pollution References Environmental chemistry Silver Nanoparticles by composition Environmental impact of products Ocean pollution
Environmental impact of silver nanoparticles
Chemistry,Environmental_science
2,674
36,443,292
https://en.wikipedia.org/wiki/Clavulina%20griseohumicola
Clavulina griseohumicola is a species of fungus in the family Clavulinaceae. Described as new to science in 2005, it occurs in Guyana. References External links Fungi described in 2005 Fungi of Guyana griseohumicola Fungus species
Clavulina griseohumicola
Biology
54
11,684,056
https://en.wikipedia.org/wiki/Fusarium%20tricinctum
Fusarium tricinctum is a fungal plant pathogen responsible for various plant diseases worldwide, especially in temperate regions. It is found on many crops, including malt barley (Andersen et al., 1996) and cereals (Chelkowski et al., 1989; Bottalico and Perrone, 2002; Kosiak et al., 2003; Wiśniewska et al., 2014). It has also been isolated from animals such as rainbow trout (Marasas et al., 1967). In cereals, it is one of the most common causal species of Fusarium head blight (FHB) and also of root rot. References tricinctum Fungal plant pathogens and diseases Fungi described in 1886 Fungus species
Fusarium tricinctum
Biology
157
59,051,815
https://en.wikipedia.org/wiki/Ilha%20de%20Ferro
Ilha de Ferro (Iron Island) is a Brazilian drama streaming television series created by Max Mallmann and Adriana Lunardi for Globoplay. Directed by Afonso Poyart, and produced by TV Globo's production division Estúdios Globo, it premiered on the streaming service on November 14, 2018. The series follows the story of a team of offshore oil workers divided between the dilemmas of their personal lives on land and their work on the high seas. It is the last credited work of Mallmann, who died some months prior to the series' premiere. Premise Dante (Cauã Reymond) is the production coordinator of PLT-137, an oil rig known for its history of accidents. He dreams of becoming manager of the oil rig, but is angry when he realizes he needs to compete with the newcomer Julia (Maria Casadevall) for the job. However, it is in the midst of this dispute that a passion arises between the two, one capable of changing the course of their lives. Cast Main Cauã Reymond as Dante Maria Casadevall as Júlia Herbert Richers Jr. as Horácio Bravo Taumaturgo Ferreira as Buda Klebber Toledo as Bruno (main, season 1; guest, season 2) Sophie Charlotte as Leona (season 1) Mariana Ximenes as Dr. Olívia Dias (season 2) Eriberto Leão as Diogo Bravo (season 2) Rômulo Estrela as Ramiro (season 2) Daphne Bozaski as Maria Eduarda Giordano (season 2) Recurring and guests Cássia Kis as Isabel Osmar Prado as João Bravo Jonathan Azevedo as Fiapo Milhem Cortaz as Astério Moacyr Franco as Amorim Bruce Gomlevsky as Leviatã Production The filming of the first season of the series ended on May 12, 2018, in Rio de Janeiro. A reproduction of the oil extraction platform was built at the Globo Studios. Release The series had its first season of 12 episodes released on Globoplay on November 14, 2018. Episodes Season 1 (2018) Season 2 (2019) References External links 2018 Brazilian television series debuts Brazilian drama television series Portuguese-language television shows Television shows set in Rio de Janeiro (city) Globoplay original programming Works about petroleum Television series by Estúdios Globo
Ilha de Ferro
Chemistry
490
53,268,700
https://en.wikipedia.org/wiki/PBS-1%20silencer
The PBS-1 is a silencer designed for the 7.62x39mm AKM variant of the Soviet AK-47 assault rifle in the Kalashnikov rifle family. It is in diameter and long. History The PBS-1 silencer, designed for use with the AKM to reduce the noise when firing, was introduced in the 1960s, and was used mostly by Spetsnaz forces and the KGB. It was used by the Spetsnaz in the Soviet–Afghan War in the 1980s, which required the use of the AKM (the modernized variant of the AK-47), because the newer AK-74 did not have a silencer available until a variant of the AK-74, the AKS-74UB, was adapted for use with the PBS-4 suppressor (used in combination with subsonic 5.45×39mm Russian ammunition). The PBS-1 is a two-chambered silencer using baffles and a rubber wipe. It was designed for use in conjunction with subsonic rifle ammunition. The PBS-1 has been extensively tested by the United States Army Foreign Weapons Test Lab. The rubber wipe requires replacement after 20–25 rounds. With a rubber wipe in place, the PBS-1 reliably reduces the sound of an AKM discharge by 15 dB, which puts the report between 130 and 135 dB. Gallery See also PBS-4 silencer AK-104 References Firearm components Weapons and ammunition introduced in the 1960s Kalashnikov derivatives Cold War weapons of the Soviet Union
PBS-1 silencer
Technology
309
547,220
https://en.wikipedia.org/wiki/Ribulose%201%2C5-bisphosphate
Ribulose 1,5-bisphosphate (RuBP) is an organic substance that is involved in photosynthesis, notably as the principal CO2 acceptor in plants. It is a colourless anion, a double phosphate ester of the ketopentose (ketone-containing sugar with five carbon atoms) called ribulose. Salts of RuBP can be isolated, but its crucial biological function happens in solution. RuBP occurs not only in plants but in all domains of life, including Archaea, Bacteria, and Eukarya. History RuBP was originally discovered by Andrew Benson in 1951 while working in the lab of Melvin Calvin at UC Berkeley. Calvin, who had been away from the lab at the time of discovery and was not listed as a co-author, controversially removed the full molecule name from the title of the initial paper, identifying it solely as "ribulose". At the time, the molecule was known as ribulose diphosphate (RDP or RuDP), but the prefix di- was changed to bis- to emphasize the nonadjacency of the two phosphate groups. Role in photosynthesis and the Calvin-Benson Cycle The enzyme ribulose-1,5-bisphosphate carboxylase-oxygenase (rubisco) catalyzes the reaction between RuBP and carbon dioxide. The product is the highly unstable six-carbon intermediate known as 3-keto-2-carboxyarabinitol 1,5-bisphosphate, or 2'-carboxy-3-keto-D-arabinitol 1,5-bisphosphate (CKABP). This six-carbon β-ketoacid intermediate hydrates into another six-carbon intermediate in the form of a gem-diol. This intermediate then cleaves into two molecules of 3-phosphoglycerate (3-PGA), which is used in a number of metabolic pathways and is converted into glucose. In the Calvin-Benson cycle, RuBP is a product of the phosphorylation of ribulose-5-phosphate (produced from glyceraldehyde 3-phosphate) by ATP. Interactions with rubisco RuBP acts as an enzyme inhibitor for the enzyme rubisco, which regulates the net activity of carbon fixation. When RuBP is bound to an active site of rubisco, the ability to activate the site via carbamylation with CO2 and Mg2+ is blocked. The functionality of rubisco activase involves removing RuBP and other inhibitory bound molecules to re-enable carbamylation of the active site. Role in photorespiration Rubisco also catalyzes the reaction of RuBP with oxygen (O2) in an interaction called photorespiration, a process that is more prevalent at high temperatures. During photorespiration RuBP combines with O2 to become 3-PGA and phosphoglycolic acid. Like the Calvin-Benson Cycle, the photorespiratory pathway has been noted for its enzymatic inefficiency, although this characterization of the enzymatic kinetics of rubisco has been contested. Due to enhanced RuBP carboxylation and decreased rubisco oxygenation stemming from the increased concentration of CO2 in the bundle sheath, rates of photorespiration are decreased in C4 plants. Similarly, photorespiration is limited in CAM photosynthesis due to kinetic delays in enzyme activation, again stemming from the ratio of carbon dioxide to oxygen. Measurement RuBP can be measured isotopically via the conversion of CO2 and RuBP into glyceraldehyde 3-phosphate (G3P). G3P can then be measured using an enzymatic optical assay. Given the abundance of RuBP in biological samples, an added difficulty is distinguishing particular reservoirs of the substrate, such as the RuBP internal to a chloroplast versus external. One approach to resolving this is by subtractive inference, or measuring the total RuBP of a system, removing a reservoir (e.g. by centrifugation), re-measuring the total RuBP, and using the difference to infer the concentration in the given reservoir. 
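As a quick sanity check of the carbon bookkeeping in the carboxylation step described above (five-carbon RuBP plus one-carbon CO2 giving a six-carbon intermediate that cleaves into two three-carbon 3-PGA molecules), here is a minimal sketch; the carbon counts simply restate numbers given in the text.

# Carbon bookkeeping for the rubisco carboxylation step described above.
carbons = {
    "RuBP": 5,    # ketopentose backbone: five carbons
    "CO2": 1,
    "CKABP": 6,   # unstable six-carbon intermediate
    "3-PGA": 3,   # three-carbon product
}

assert carbons["RuBP"] + carbons["CO2"] == carbons["CKABP"]  # 5 + 1 = 6
assert carbons["CKABP"] == 2 * carbons["3-PGA"]              # 6 = 2 * 3
print("carbon balance checks out")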
See also Rubisco Calvin-Benson cycle 3-Phosphoglyceric acid Photosynthesis References Photosynthesis Organophosphates Monosaccharide derivatives
Ribulose 1,5-bisphosphate
Chemistry,Biology
897
3,190,686
https://en.wikipedia.org/wiki/Tip%20growth
Tip growth is an extreme form of polarised growth of living cells that results in an elongated cylindrical cell morphology with a rounded tip at which the growth activity takes place. Tip growth occurs in algae (e.g., Acetabularia acetabulum), fungi (hyphae) and plants (e.g. root hairs and pollen tubes). Tip growth is a process that has many similarities in diverse walled cells such as pollen tubes, root hairs, and hyphae. Fungal tip growth and hyphal tropisms Fungal hyphae extend continuously at their extreme tips, where enzymes are released into the environment and where new wall materials are synthesised. The rate of tip extension can be extremely rapid - up to 40 micrometres per minute. It is supported by the continuous movement of materials into the tip from older regions of the hyphae. So, in effect, a fungal hypha is a continuously moving mass of protoplasm in a continuously extending tube. This unique mode of growth - apical growth - is the hallmark of fungi, and it accounts for much of their environmental and economic significance. References Developmental biology
Tip growth
Biology
235
26,597,035
https://en.wikipedia.org/wiki/Pairwise%20summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence. Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation. In particular, pairwise summation of a sequence of n numbers x1, ..., xn works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below). In comparison, the naive technique of accumulating the sum in sequence (adding each xi one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn). Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations. If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε√(log n)) for pairwise summation. A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs. The algorithm In pseudocode, the pairwise summation algorithm for an array x of length n ≥ 0 can be written:
s = pairwise(x[1...n])
    if n ≤ N, base case: naive summation for a sufficiently small array
        s = 0
        for i = 1 to n
            s = s + x[i]
    else, divide and conquer: recursively sum two halves of the array
        m = floor(n / 2)
        s = pairwise(x[1...m]) + pairwise(x[m+1...n])
    end if
For some sufficiently small N, this algorithm switches to a naive loop-based summation as a base case, whose error bound is O(Nε). The entire sum has a worst-case error that grows asymptotically as O(ε log n) for large n, for a given condition number (see below). In an algorithm of this sort (as for divide and conquer algorithms in general), it is desirable to use a larger base case in order to amortize the overhead of the recursion. If N = 1, then there is roughly one recursive subroutine call for every input, but more generally there is one recursive call for (roughly) every N/2 inputs if the recursion stops at exactly n = N. By making N sufficiently large, the overhead of recursion can be made negligible (precisely this technique of a large base case for recursive summation is employed by high-performance FFT implementations). Regardless of N, exactly n−1 additions are performed in total, the same as for naive summation, so if the recursion overhead is made negligible then pairwise summation has essentially the same computational cost as for naive summation. A variation on this idea is to break the sum into b blocks at each recursive stage, summing each block recursively, and then summing the results, which was dubbed a "superblock" algorithm by its proposers. The above pairwise algorithm corresponds to b = 2 for every stage except for the last stage, which is b = N. Dalton, Wang & Blainey (2014) describe an iterative, "shift-reduce" formulation for pairwise summation. 
It can be unrolled and sped up using SIMD instructions. The non-unrolled version is:
double shift_reduce_sum(double *x, size_t n)
{
    double stack[64], v;
    size_t p = 0;
    for (size_t i = 0; i < n; ++i) {
        v = x[i];                                // shift
        for (size_t b = 1; i & b; b <<= 1, --p)  // reduce
            v += stack[p - 1];
        stack[p++] = v;
    }
    double sum = 0.0;
    while (p)
        sum += stack[--p];
    return sum;
}
Accuracy Suppose that one is summing n values xi, for i = 1, ..., n. The exact sum is Sn = x1 + x2 + ... + xn (computed with infinite precision). With pairwise summation for a base case N = 1, one instead obtains Sn + En, where the error En is bounded above by |En| ≤ [ε log2(n) / (1 − ε log2(n))] Σ|xi|, where ε is the machine precision of the arithmetic being employed (e.g. ε ≈ 10−16 for standard double precision floating point). Usually, the quantity of interest is the relative error |En|/|Sn|, which is therefore bounded above by |En|/|Sn| ≤ [ε log2(n) / (1 − ε log2(n))] (Σ|xi|/|Σxi|). In the expression for the relative error bound, the fraction (Σ|xi|/|Σxi|) is the condition number of the summation problem. Essentially, the condition number represents the intrinsic sensitivity of the summation problem to errors, regardless of how it is computed. The relative error bound of every (backwards stable) summation method by a fixed algorithm in fixed precision (i.e. not those that use arbitrary-precision arithmetic, nor algorithms whose memory and time requirements change based on the data), is proportional to this condition number. An ill-conditioned summation problem is one in which this ratio is large, and in this case even pairwise summation can have a large relative error. For example, if the summands xi are uncorrelated random numbers with zero mean, the sum is a random walk and the condition number will grow proportional to √n. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as n → ∞. If the inputs are all non-negative, then the condition number is 1. Note that the denominator 1 − ε log2(n) is effectively 1 in practice, since ε log2(n) is much smaller than 1 until n becomes of order 2^(1/ε), which is roughly 10^(10^15) in double precision. In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as O(εn) multiplied by the condition number. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a root mean square relative error that grows as O(ε√n) and pairwise summation has an error that grows as O(ε√(log n)) on average. Software implementations Pairwise summation is the default summation algorithm in NumPy and the Julia technical-computing language, where in both cases it was found to have comparable speed to naive summation (thanks to the use of a large base case). Other software implementations include the HPCsharp library for the C# language and the standard library summation in D. References Computer arithmetic Numerical analysis Articles with example pseudocode
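To make the error behaviour above concrete, here is a small, self-contained Python sketch (written for this text, not taken from the article's references) comparing naive left-to-right summation with recursive pairwise summation when accumulating in float32. Both results are compared against a high-accuracy reference from math.fsum; exact error magnitudes depend on the random data, but pairwise summation should typically land much closer to the reference.

import math
import random

import numpy as np

def naive_sum(x):
    """Left-to-right accumulation in float32, as in the naive method."""
    s = np.float32(0.0)
    for v in x:
        s = np.float32(s + v)
    return float(s)

def pairwise_sum(x, base=8):
    """Recursive pairwise summation in float32 with a small base case."""
    n = len(x)
    if n <= base:
        return naive_sum(x)
    m = n // 2
    return float(np.float32(pairwise_sum(x[:m], base) + pairwise_sum(x[m:], base)))

random.seed(0)
data = [np.float32(random.uniform(0.0, 1.0)) for _ in range(100_000)]
exact = math.fsum(float(v) for v in data)   # high-accuracy reference

print("naive    error:", abs(naive_sum(data) - exact))
print("pairwise error:", abs(pairwise_sum(data) - exact))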
Pairwise summation
Mathematics
1,534
22,731,864
https://en.wikipedia.org/wiki/Hydroid%20dermatitis
Hydroid dermatitis is a cutaneous condition that occurs after contact with the small marine hydroid Halecium. See also Sea anemone dermatitis List of cutaneous conditions References Parasitic infestations, stings, and bites of the skin Animal attacks
Hydroid dermatitis
Biology
56
4,698,837
https://en.wikipedia.org/wiki/184%20%28number%29
184 (one hundred [and] eighty-four) is the natural number following 183 and preceding 185. In mathematics There are 184 different Eulerian graphs on eight unlabeled vertices, and 184 paths by which a chess rook can travel from one corner of a 4 × 4 chessboard to the opposite corner without passing through the same square twice. 184 is also a refactorable number. In other fields Some physicists have proposed that 184 is a magic number for neutrons in atomic nuclei. See also References Integers
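As a quick illustration of the refactorable-number claim above, the following sketch (plain Python, written for this text) counts the divisors of 184 and checks that 184 is divisible by that count.

# Check that 184 is refactorable: divisible by its number of divisors.
n = 184
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(divisors)                 # [1, 2, 4, 8, 23, 46, 92, 184]
print(len(divisors))            # 8 divisors
print(n % len(divisors) == 0)   # True: 184 / 8 = 23, so 184 is refactorable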
184 (number)
Mathematics
108
9,256
https://en.wikipedia.org/wiki/Enigma%20machine
The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages. The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plaintext is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress. The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to decrypt a message. Although Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome. History The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II. Several Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depended on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need. Breaking Enigma Hans-Thilo Schmidt was a German who spied for the French, obtaining access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to Poland. Around December 1932, Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. 
Rejewski used the French supplied material and the message traffic that took place in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933. Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when that year the Germans added two more rotors, ten times as many bomby would have been needed to read the traffic. On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered). In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques. Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote: "Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked. During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort. Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed. The Abwehr used different versions of Enigma machines. 
In November 1942, during Operation Torch, a machine was captured which had no plugboard and the three rotors had been changed to rotate 11, 15, and 19 times rather than once every 26 letters, plus a plate on the left acted as a fourth rotor. The Abwehr code had been broken on 8 December 1941 by Dilly Knox. Agents sent messages to the Abwehr in a simple code which was then sent on using an Enigma machine. The simple codes were broken and helped break the daily Enigma cipher. This breaking of the code enabled the Double-Cross System to operate.From October 1944, the German Abwehr used the Schlüsselgerät 41 in limited quantities. Design Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press, and a series of lamps, one for each letter. These design features are the reason that the Enigma machine was originally referred to as the rotor-based cipher machine during its intellectual inception in 1915. Electrical pathway An electrical pathway is a route for current to travel. By manipulating this phenomenon the Enigma machine was able to scramble messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on. Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp. The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C. 
This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press. Rotors The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately in diameter made from Ebonite or Bakelite with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet — typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant. By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher. Each rotor can be set to one of 26 starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector. Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring. The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single turnover notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks. The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. 
The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine. Stepping To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently. Turnover The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation. The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring which could be adjusted in relation to the core containing the interconnections. The points on the rings at which they caused the next wheel to move were as follows. The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion. With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues. 
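The turnover and double-stepping behaviour described above can also be illustrated with a short sketch. The notch letters below are single-notch values chosen for illustration, ring settings are ignored, and the code simply counts how many key presses pass before the rotor positions repeat, reproducing the 16,900-step period quoted above.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def step(positions, notches):
    """Advance [left, middle, right] rotor positions for one key press,
    following the pawl-and-ratchet behaviour described above."""
    left, middle, right = positions
    if ALPHABET[middle] in notches[1]:      # middle pawl sits in its own notch:
        middle = (middle + 1) % 26          # middle steps AND pushes the left rotor
        left = (left + 1) % 26              # (the "double step")
    elif ALPHABET[right] in notches[2]:     # right-hand rotor at its notch:
        middle = (middle + 1) % 26          # middle rotor turns over
    right = (right + 1) % 26                # right-hand rotor steps on every key press
    return [left, middle, right]

notches = ["Q", "E", "V"]                   # one notch per rotor (illustrative letters)
start = [0, 0, 0]
pos, presses = step(start, notches), 1
while pos != start:                         # run until the rotor positions repeat
    pos = step(pos, notches)
    presses += 1
print(presses)                              # 16900 = 26 * 25 * 26, as noted above
```

With a two-notch rotor in the right-hand slot the middle rotor would turn over twice per revolution, which is why the two-notch naval rotors VI, VII and VIII gave a shorter period.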
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions. A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel) that implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was a relative prime of 26 and the number of notches were different for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D it also allowed the internal wiring to be reconfigured. Entry wheel The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification. Reflector With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers. In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels. In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nick-named Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings. Plugboard The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator. It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). 
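The "150 trillion" figure quoted above comes from counting the ways of pairing up letters on a 26-letter plugboard; a minimal sketch of that count:

```python
from math import factorial

def plugboard_settings(cables):
    """Number of ways to connect `cables` letter pairs on a 26-letter plugboard."""
    return factorial(26) // (factorial(26 - 2 * cables) * factorial(cables) * 2**cables)

print(f"{plugboard_settings(10):,}")             # 150,738,274,937,250 with the usual 10 cables
print(max(range(14), key=plugboard_settings))    # 11 cables would actually maximise the count
```

Interestingly, using all 13 cables gives fewer combinations than using 11, since fully pairing the alphabet removes any choice about which letters stay unsteckered.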
Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it. A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used. Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters. Accessories Other features made various Enigma machines more secure or more convenient. Schreibmax Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext. Fernlesegerät Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it. Uhr In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs. Mathematical analysis The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let P denote the plugboard transformation, U denote that of the reflector (with U = U^-1), and L, M, R denote those of the left, middle and right rotors respectively. Then the encryption E can be expressed as E = P R M L U L^-1 M^-1 R^-1 P^-1. After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor R is rotated i positions, the transformation becomes ρ^i R ρ^-i, where ρ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as j and k rotations of M and L. 
The encryption transformation can then be described as E = P (ρ^i R ρ^-i) (ρ^j M ρ^-j) (ρ^k L ρ^-k) U (ρ^k L^-1 ρ^-k) (ρ^j M^-1 ρ^-j) (ρ^i R^-1 ρ^-i) P^-1. Combining three rotors from a set of five, each of the 3 rotor settings with 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion or about 67 bits):
Choose 3 rotors from a set of 5 rotors: 5 × 4 × 3 = 60
26 positions per rotor: 26 × 26 × 26 = 17,576
Plugboard: 26! / (6! × 10! × 2^10) = 150,738,274,937,250
Multiplying the three factors: 60 × 17,576 × 150,738,274,937,250 = 158,962,555,217,826,360,000
Operation Basic operation A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right-hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge. Details In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk. An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine: Wheel order (Walzenlage) – the choice of rotors and the order in which they are fitted. Ring settings (Ringstellung) – the position of each alphabet ring relative to its rotor wiring. Plug connections (Steckerverbindungen) – the pairs of letters in the plugboard that are connected together. In very late versions, the wiring of the reconfigurable reflector. Starting position of the rotors (Grundstellung) – chosen by the operator, should be different for each message. For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical. 
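The key-space arithmetic given earlier in this section can be reproduced in a few lines; this is a sketch for the standard three-rotor, ten-cable configuration, not code tied to any historical source:

```python
from math import factorial, log2

rotor_orders = 5 * 4 * 3                    # choose and order 3 rotors out of 5
rotor_positions = 26 ** 3                   # 17,576 starting positions
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2**10)   # ten cables

total = rotor_orders * rotor_positions * plugboard
print(f"{total:,} settings, about {log2(total):.0f} bits")
# -> 158,962,555,217,826,360,000 settings, about 67 bits
```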
Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows: Wheel order: IV, II, V Ring settings: 15, 23, 26 Plugboard connections: EJ OY IV AQ KW FX MT PS LU BD Reconfigurable reflector wiring: IU AS DV GL FT OX EZ CH MR KN BQ PW Indicator groups: lsa zbw vcj rxn Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3 × 10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack. Indicator Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth) would enable an attack using a statistical procedure such as Friedman's Index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weaknesses and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible. One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, thus producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message settings, EIN in this example, and then type the plaintext of the message. At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message. This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. 
The message setting was encoded twice, resulting in a relation between first and fourth, second and fifth, and third and sixth character. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique". During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double encoded message settings. This procedure was used by Wehrmacht and Luftwaffe only. The Kriegsmarine procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key. Additional details The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as full-stop. Some punctuation marks were different in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ. The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA. The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters and counted the letters. The Kriegsmarine used four-character groups and counted those groups. Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key. Example enciphering process The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. 
For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as LUSHQOXDMZNAIKFREPCYBWVGTJ and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character as in D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. For example, the process of enciphering the first sentence of the main body of the famous "Dönitz message" to RBBF PMHP HGCZ XTDY GAHG UFXG EWKB LKGJ can be represented as 0001 F > KGWNT(R)BLQPAHYDVJIFXEZOCSMU CDTK 25 15 16 26 0002 O > UORYTQSLWXZHNM(B)VFCGEAPIJDK CDTL 25 15 16 01 0003 L > HLNRSKJAMGF(B)ICUQPDEYOZXWTV CDTM 25 15 16 02 0004 G > KPTXIG(F)MESAUHYQBOVJCLRZDNW CDUN 25 15 17 03 0005 E > XDYB(P)WOSMUZRIQGENLHVJTFACK CDUO 25 15 17 04 0006 N > DLIAJUOVCEXBN(M)GQPWZYFHRKTS CDUP 25 15 17 05 0007 D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ CDUQ 25 15 17 06 0008 E > JKGO(P)TCIHABRNMDEYLZFXWVUQS CDUR 25 15 17 07 0009 S > GCBUZRASYXVMLPQNOF(H)WDKTJIE CDUS 25 15 17 08 0010 I > XPJUOWIY(G)CVRTQEBNLZMDKFAHS CDUT 25 15 17 09 0011 S > DISAUYOMBPNTHKGJRQ(C)LEZXWFV CDUU 25 15 17 10 0012 T > FJLVQAKXNBGCPIRMEOY(Z)WDUHST CDUV 25 15 17 11 0013 S > KTJUQONPZCAMLGFHEW(X)BDYRSVI CDUW 25 15 17 12 0014 O > ZQXUVGFNWRLKPH(T)MBJYODEICSA CDUX 25 15 17 13 0015 F > XJWFR(D)ZSQBLKTVPOIEHMYNCAUG CDUY 25 15 17 14 0016 O > FSKTJARXPECNUL(Y)IZGBDMWVHOQ CDUZ 25 15 17 15 0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS CDVA 25 15 18 16 0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM CDVB 25 15 18 17 0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX CDVC 25 15 18 18 0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT CDVD 25 15 18 19 0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH CDVE 25 15 18 20 0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR CDVF 25 15 18 21 0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK CDVG 25 15 18 22 0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK CDVH 25 15 18 23 0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH CDVI 25 15 18 24 0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W) CDVJ 25 15 18 25 0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD CDVK 25 15 18 26 0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA CDVL 25 15 18 01 0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH CDVM 25 15 18 02 0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI CDWN 25 15 19 03 0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC CDWO 25 15 19 04 0032 N > PDSBTIUQFNOVW(J)KAHZCEGLMYXR CDWP 25 15 19 05 where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor. The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. 
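The 26-letter strings used in this example are simply substitution tables, and the idea of a series of component mappings can be made concrete in a few lines; the configuration string below is the one quoted at the start of this section:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def apply_mapping(mapping, letter):
    """Look a letter up in a 26-letter substitution string."""
    return mapping[ALPHABET.index(letter)]

def compose(first, second):
    """Chain two mappings: the output of `first` feeds the input of `second`."""
    return "".join(apply_mapping(second, apply_mapping(first, c)) for c in ALPHABET)

config = "LUSHQOXDMZNAIKFREPCYBWVGTJ"   # whole-machine mapping quoted above
print(apply_mapping(config, "D"))       # -> H, i.e. "D > LUS(H)QOX..."
```

Composing the plugboard, rotor, reflector and inverse-rotor mappings for one key press with compose() yields exactly this kind of whole-machine string, which is what the expanded example below breaks apart stage by stage.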
For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character: G > ABCDEF(G)HIJKLMNOPQRSTUVWXYZ   P EFMQAB(G)UINKXCJORDPZTHWVLYS         AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW   1 OFRJVM(A)ZHQNBXPYKCULGSWETDI  N  03  VIII   2 (N)UKCHVSMDGTZQFYEWPIALOXRJB  U  17  VI   3 XJMIYVCARQOWH(L)NDSUFKGBEPZT  D  15  V   4 QUNGALXEPKZ(Y)RDSOFTVCMBIHWJ  C  25  β   R RDOBJNTKVEHMLFCWZAXGYIPS(U)Q         c   4 EVTNHQDXWZJFUCPIAMOR(B)SYGLK         β   3 H(V)GPWSUMDBTNCOKXJIQZRFLAEY         V   2 TZDIPNJESYCUHAVRMXGKB(F)QWOL         VI   1 GLQYW(B)TIZDPSFKANJCUXREVMOH         VIII   P E(F)MQABGUINKXCJORDPZTHWVLYS         AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW F < KPTXIG(F)MESAUHYQBOVJCLRZDNW Here the enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F. This model has 4 rotors (lines 1 through 4) and the reflector (line R) also permutes (garbles) letters. Models The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines. An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries. Commercial Enigma On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors. Enigma Handelsmaschine (1923) Chiffriermaschinen AG began advertising a rotor machine, Enigma Handelsmaschine, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about . Schreibende Enigma (1924) This was also a model with a type writer. There were a number of problems associated with the printer and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering. Glühlampenmaschine, Enigma A (1924) The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version. The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models. Before the next key pressure, the operator had to press a button to advance the right rotor one step. Enigma B (1924) Enigma model B was introduced late in 1924, and was of a similar construction. 
While bearing the Enigma name, both models A and B were quite unlike later versions: They differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost effective. Hence this machine was 1/8th the price of its predecessor. Enigma C (1926) Model C was the third model of the so-called ″glowlamp Enigmas″ (after A and B) and it again lacked a typewriter. Enigma D (1927) The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. Soon, the Enigma D would pioneer the use of a standard keyboard layout to be used in German computing. This "QWERTZ" layout is very similar to the American QWERTY keyboard format used in many languages. "Navy Cipher D" Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services. Enigma H (1929) There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently. Enigma K The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use, which was very similar to commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan. Military Enigma The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including the Geheimschreiber. Funkschlüssel C The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926. The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933. Enigma G (1928–1930) By 15 July 1928, the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, the Enigma G. The Abwehr used the Enigma G. This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma. 
Wehrmacht Enigma I (1930–1938) Enigma machine G was modified to the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or "Services" Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II. The major difference between Enigma I (German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength. Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured and weighed around . In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications. M3 (1934) By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel ' or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five. Two extra rotors (1938) In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight. M4 (1942) A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor. Surviving machines The effort to break the Enigma was not disclosed until 1973. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts. The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. The Deutsches Spionagemuseum in Berlin also showcases two military variants. Enigma machines are also exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, The Artillery, Engineers and Signals Museum in Hämeenlinna, Finland the Technical University of Denmark in Lyngby, Denmark, in Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum. In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. 
Two machines that were acquired after the capture of during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors. The machine is fully restored and CMoA has the original paperwork for the purchase on 7 March 1936 by the German Army. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England. In Canada, a Swiss Army issue Enigma-K, is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario. Occasionally, Enigma machines are sold at auction; prices have in recent years ranged from US$40,000 to US$547,500 in 2017. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues. A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors. In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months. In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War, because, though the British cryptologist Alfred Dilwyn Knox in 1937 broke the cipher generated by Franco's Enigma machines, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. 
Two have been given to Britain's GCHQ. The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia. On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea) which is believed to be from a scuttled U-boat. This Enigma machine will be restored by and be the property of the Archaeology Museum of Schleswig Holstein. An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023. Derivatives The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor cipher, which the Germans believed to be unsolvable. Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents. In the United States, cryptologist William Friedman designed the M-325 machine, starting in 1936, that is logically similar. Machines like the SIGABA, NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform. A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts. Simulators See also Alastair Denniston Arlington Hall Arne Beurling Beaumanor Hall, a stately home used during the Second World War for military intelligence Cryptanalysis of the Enigma Erhard Maertens—investigated Enigma security Erich Fellgiebel ECM Mark II—cipher machine used by the Americans in the Second World War Fritz Thiele Gisbert Hasenjaeger—responsible for Enigma security United States Naval Computing Machine Laboratory Typex—cipher machine used by the British in the Second World War, based on the principles of the commercial Enigma machine Explanatory notes References Citations General and cited references Further reading Heath, Nick, Hacking the Nazis: The secret story of the women who broke Hitler's codes TechRepublic, 27 March 2015 Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part I", Cryptologia 25(2), April 2001, pp. 101–141. Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part II", Cryptologia 25(3), July 2001, pp. 177–212. Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part III", Cryptologia 25(4), October 2001, pp. 296–310. Perera, Tom. The Story of the ENIGMA: History, Technology and Deciphering, 2nd Edition, CD-ROM, 2004, Artifax Books, sample pages Rebecca Ratcliffe: Searching for Security. The German Investigations into Enigma's security. In: Intelligence and National Security 14 (1999) Issue 1 (Special Issue) S. 146–167. Rejewski, Marian. "How Polish Mathematicians Deciphered the Enigma" , Annals of the History of Computing 3, 1981. This article is regarded by Andrew Hodges, Alan Turing's biographer, as "the definitive account" (see Hodges' Alan Turing: The Enigma, Walker and Company, 2000 paperback edition, p. 548, footnote 4.5). Ulbricht, Heinz. Enigma Uhr, Cryptologia, 23(3), April 1999, pp. 
194–205. Untold Story of Enigma Code-Breaker — The Ministry of Defence (U.K.) External links Gordon Corera, Poland's overlooked Enigma codebreakers, BBC News Magazine, 4 July 2014 Long-running list of places with Enigma machines on display Bletchley Park National Code Centre Home of the British codebreakers during the Second World War Enigma machines on the Crypto Museum Web site Pictures of a four-rotor naval enigma, including Flash (SWF) views of the machine Enigma Pictures and Demonstration by NSA Employee at RSA Kenngruppenheft Process of building an Enigma M4 replica Breaking German Navy Ciphers Broken stream ciphers Cryptographic hardware Encryption devices Military communications of Germany Military equipment introduced in the 1920s Products introduced in 1918 Rotor machines Signals intelligence of World War II World War II military equipment of Germany
Enigma machine
Physics,Technology
12,736
7,853,107
https://en.wikipedia.org/wiki/Slotoxin
Slotoxin is a peptide from Centruroides noxius Hoffmann scorpion venom. It belongs to the short scorpion toxin superfamily. Method of isolation For isolation of slotoxin, scorpions of the species Centruroides noxius are milked for venom in the laboratory. The crude venom is dissolved in distilled water and centrifuged. The supernatant is separated, and the active fraction is then further separated. Structure The 37-amino-acid peptide belongs to the charybdotoxin sub-family (αKTx1) and was numbered member 11. αKTx1.11 revealed specificity for mammalian MaxiK channels (hSlo) and was thus named slotoxin. Its sequence is H-Thr-Phe-Ile-Asp-Val-Asp-Cys(1)-Thr-Val-Ser-Lys-Glu-Cys(2)-Trp-Ala-Pro-Cys(3)-Lys-Ala-Ala-Phe-Gly-Val-Asp-Arg-Gly-Lys-Cys(1)-Met-Gly-Lys-Lys-Cys(2)-Lys-Cys(3)-Tyr-Val-OH. Targets Slotoxin reversibly blocks high-conductance calcium-activated potassium channels composed of only α-subunits (Kd = 1.5 nM). It irreversibly blocks high-conductance calcium-activated potassium channels composed of α- and β1-subunits, and irreversibly and weakly blocks those composed of α- and β4-subunits. It shows no activity on other potassium channels. Mode of action The positively charged surface (C-terminal) of SloTx has a specific short-range interaction with the negatively charged pore region of potassium channels, leading to channel blockade. Specific hydrophobic residue-residue interactions between SloTx and MaxiK channels may also contribute to the toxin-channel interaction. Another region in the potassium channel (flanking the N-terminal of SloTx) is situated on the face opposite to the site of toxin-pore interaction, and might have implications for the modulation of channel blockade by the MaxiK β subunits. Toxicity The large-conductance voltage- and calcium-activated potassium (MaxiK, BK) channels are intrinsic membrane proteins that regulate excitability in a large variety of tissues, including brain and smooth muscle. References http://www.ncbi.nlm.nih.gov/entrez/viewer.fcgi?db=protein&val=90101392 Neurotoxins Ion channel toxins Peptides Scorpion toxins
Slotoxin
Chemistry
600
6,927,432
https://en.wikipedia.org/wiki/Heteroazeotrope
A heteroazeotrope is an azeotrope where the vapour phase coexists with two liquid phases. Sketch of a T-x/y equilibrium curve of a typical heteroazeotropic mixture Examples of heteroazeotropes Benzene - Water NBP 69.2 °C Dichloromethane - Water NBP 38.5 °C n-Butanol - Water NBP 93.5 °C Toluene - Water NBP 82 °C Continuous heteroazeotropic distillation Heterogeneous distillation means that during the distillation the liquid phase of the mixture separates into two immiscible liquid phases. In this case, two liquid phases can be present on the plates, and the top vapour condensate splits into two liquid phases, which can be separated in a decanter. The simplest case of continuous heteroazeotropic distillation is the separation of a binary heterogeneous azeotropic mixture. In this case the system contains two columns and a decanter. The fresh feed (A-B) is added into the first column. (The feed may also be added into the decanter directly or into the second column, depending on the composition of the mixture.) From the decanter the A-rich phase is withdrawn as reflux into the first column, while the B-rich phase is withdrawn as reflux into the second column. This means that the first column produces "A" and the second column produces "B" as a bottoms product. In industry the butanol-water mixture is separated with this technique. In the previous case the binary system already forms a heterogeneous azeotrope. The other application of heteroazeotropic distillation is the separation of a binary system (A-B) forming a homogeneous azeotrope. In this case an entrainer or solvent is added to the mixture in order to form a heteroazeotrope with one or both of the components, which helps the separation of the original A-B mixture. Batch heteroazeotropic distillation Batch heteroazeotropic distillation is an efficient method for the separation of azeotropic and low relative volatility (low α) mixtures. A third component (entrainer, E) is added to the binary A-B mixture, which makes the separation of A and B possible. The entrainer forms a heteroazeotrope with at least one (and preferably with only one (selective entrainer)) of the original components. The main parts of the conventional batch distillation column are the following: pot (including the reboiler), column, condenser to condense the top vapour, product receivers, and (optionally) an entrainer feed. In the case of heteroazeotropic distillation the equipment is completed with a decanter, where the two liquid phases are split. Three different cases are possible for the addition of the entrainer: 1. Batch addition of the entrainer: the total quantity of the entrainer is added to the charge before the start of the procedure. 2. Continuous entrainer feeding: the total quantity of the entrainer is introduced continuously to the column. 3. Mixed addition of the entrainer: the combination of batch addition and continuous feeding; one part of the entrainer is added to the charge before the start of the distillation and the other part is fed continuously during the distillation. In recent years batch heteroazeotropic distillation has come into prominence, and several studies have been published. Heteroazeotropic batch distillation has been investigated by feasibility studies, rigorous simulation calculations and laboratory experiments. Feasibility analysis is conducted in Modla et al. and Rodriguez-Donis et al. for the separation of low-relative-volatility and azeotropic mixtures by heterogeneous batch distillation in a batch rectifier. Rodriguez-Donis et al. 
were the first to provide the entrainer selection rules. The feasibility methods were extended and modified by Rodriguez-Donis et al., Rodriguez-Donis et al. (2005), Skouras et al., and Lang and Modla. Varga applied these feasibility studies in her thesis. Experimental results were published by Rodriguez-Donis et al., Xu and Wand, Van Kaam and others. References See also Azeotrope Batch distillation Distillation Steam distillation Phase transitions
Heteroazeotrope
Physics,Chemistry
960
59,055,382
https://en.wikipedia.org/wiki/Hiawatha%20Glacier
Hiawatha Glacier is a glacier in northwest Greenland, with its terminus in Inglefield Land. It was mapped in 1922 by Lauge Koch, who noted that the glacier tongue extended into Lake Alida (near Foulk Fjord). Hiawatha Glacier attracted attention in 2018 because of the discovery of a crater beneath the surface of the ice sheet in the area. A publication noted in 1952 that Hiawatha Glacier had been retreating since 1920. Probable impact structure The proposed impact structure was identified using airborne radar surveys that showed the presence of a 31 km wide crater-like depression in the bedrock beneath the ice. Shocked quartz grains and melt rock clasts have been found in fluvio-glacial sediments deposited by a river that drains the area of the structure. The timing of the impact has been dated using argon-argon dating and uranium-lead dating of zircon crystals within the melt rock to 57.99 ± 0.54 million years ago, during the late Paleocene. From an interpretation of the crystalline nature of the underlying rock, together with chemical analysis of sediment washed from the crater, the impactor was argued to be a type of iron asteroid with a diameter in the order of . If an impact origin for the crater is confirmed, it would be one of the twenty-five largest known impact craters on Earth. See also List of possible impact structures on Earth Bølling–Allerød warming Operation IceBridge References External links International Team, NASA Make Unexpected Discovery Under Greenland Ice NASA/Video Massive crater under Greenland's ice points to climate-altering impact in the time of humans Video Discovering a massive meteorite crater Docu/Video Impact craters of Greenland Glaciers of Greenland Younger Dryas impact hypothesis
Hiawatha Glacier
Biology
349
35,027,733
https://en.wikipedia.org/wiki/Pomarose
Pomarose is a high-impact captive odorant patented by Givaudan. It is a doubly unsaturated ketone that does not occur in nature. Pomarose has a powerful fruity rose odor with nuances of apples, plums and raisins, which is almost entirely due to the (2E,5Z)-stereoisomer, while its (2E,5E)-isomer is barely detectable for most people. However, catalyzed by traces of acid, both isomers equilibrate quickly upon standing in glass containers. Discovery and synthesis 5,6,7-Trimethylocta-2,5-dien-4-one was first suspected by Philip Kraft et al., from the NMR spectra, to be the structure of an unknown trace component with a damascone odor in a crude, complex reaction product. Although this trace component eventually turned out to be the constitutional isomer 2-methyl-3-isopropylhepta-2,5-dien-4-one, Pomarose was synthesized out of structural curiosity and found to possess even superior fruity, rosy odor characteristics, reminiscent of apples, plums, raisins and other dried fruits, with a low odor threshold of 0.5 ng/L air. The synthesis comprised the boron trifluoride-catalyzed addition of methyl isopropyl ketone to 1-ethoxyprop-1-yne, which afforded ethyl 2,3,4-trimethylpent-2-enoate; this was then transformed into the target molecule by Grignard reaction with propen-1-ylmagnesium bromide via in situ enolization. Use in perfumery Pomarose has been used in a variety of perfumes. It had its debut in Be Delicious for Men, and it is also used in Unforgivable, 1 Million, CK free, Legend, Unforgivable Woman and John Galliano. Related compounds Damascones References Perfume ingredients Ketones Alkene derivatives
Pomarose
Chemistry
426
34,137,673
https://en.wikipedia.org/wiki/C13H19NOS
{{DISPLAYTITLE:C13H19NOS}} The molecular formula C13H19NOS (molar mass: 237.361 g/mol, exact mass: 237.1187 u) may refer to: α-Pyrrolidinopentiothiophenone SIB-1553A
C13H19NOS
Chemistry
70
21,859,341
https://en.wikipedia.org/wiki/Phonological%20rule
A phonological rule is a formal way of expressing a systematic phonological or morphophonological process in linguistics. Phonological rules are commonly used in generative phonology as a notation to capture sound-related operations and computations the human brain performs when producing or comprehending spoken language. They may use phonetic notation or distinctive features or both. John Goldsmith (1995) defines phonological rules as mappings between two different levels of sounds representation—in this case, the abstract or underlying level and the surface level—and Bruce Hayes (2009) describes them as "generalizations" about the different ways a sound can be pronounced in different environments. That is to say, phonological rules describe how a speaker goes from the abstract representation stored in their brain, to the actual sound they articulate when they speak. In general, phonological rules start with the underlying representation of a sound (the phoneme that is stored in the speaker's mind) and yield the final surface form, or what the speaker actually pronounces. When an underlying form has multiple surface forms, this is often referred to as allophony. For example, the English plural written -s may be pronounced as [s] (in "cats"), [z] (in "cabs", "peas"), or as [əz] (in "buses"); these forms are all theorized to be stored mentally as the same -s, but the surface pronunciations are derived through a series of phonological rules. Phonological rule may also refer to a diachronic sound change in historical linguistics. Example In most dialects of American English, speakers have a process known as intervocalic alveolar flapping that changes the consonants /t/ and /d/ into a quick flap consonant ([ɾ]) in words such as "butter" () and "notable" (). The stop consonants /t/ and /d/ only become a flap in between two vowels, where the first vowel is stressed and the second is stressless. It is common to represent phonological rules using formal rewrite rules in the most general way possible. Thus, the intervocalic alveolar flapping described above can be formalized as Format and notation The rule given above for intervocalic alveolar flapping describes what sound is changed, what the sound changes to, and where the change happens (in other words, what the environment is that triggers the change). The illustration below presents the same rule, with each of its parts labelled and described. Taken together and read from left to right, this notation of the rule for intervocalic alveolar flapping states that any alveolar stop consonant (/t/ or /d/) becomes a tap ([ɾ]) in the environment where it is preceded by a stressed vowel and followed by an unstressed one. Phonological rules are often written using distinctive features, which are (supposedly) natural characteristics that describe the acoustic and articulatory makeup of a sound; by selecting a particular bundle, or "matrix," of features, it is possible to represent a group of sounds that form a natural class and pattern together in phonological rules. For example, in the rule above, rather than writing /t/ and /d/ separately, phonologists may write the features that they have in common, thus capturing the whole set of sounds that are stop consonants and are pronounced by placing the tongue against the alveolar ridge. 
In the most commonly used feature system, the features to represent these sounds would be [−delayed release, +anterior, −distributed], which describe the manner of articulation and the position and shape of the tongue when pronouncing these two sounds. But rules are not always written using features; in some cases, especially when the rule applies only to a single sound, rules are written using the symbols of the International Phonetic Alphabet. Characteristics Hayes (2009) lists the following characteristics that all phonological rules have in common: Language specificity: A phonological rule that is present in one language may not be present in other languages, or even in all dialects of a given language. Productivity: Phonological rules apply even to new words. For example, if an English speaker is asked to pronounce the plural of the nonsense word "wug" (i.e. "wugs"), they pronounce the final s as [z], not [s], even though they have never used the word before. (This kind of test is called the wug test.) Untaught and subconscious: Speakers apply these rules without being aware of it, and they acquire the rules early in life without any explicit teaching. Intuitive: The rules give speakers intuitions about what words are "well-formed" or "acceptable"; if a speaker hears a word that does not conform to the language's phonological rules, the word will sound foreign or ill-formed. Types Phonological rules can be roughly divided into four types: Assimilation: When a sound changes one of its features to be more similar to an adjacent sound. This is the kind of rule that occurs in the English plural rule described above—the -s becomes voiced or voiceless depending on whether or not the preceding consonant is voiced. Dissimilation: When a sound changes one of its features to become less similar to an adjacent sound, usually to make the two sounds more distinguishable. This type of rule is often seen among people speaking a language that is not their native language, where the sound contrasts may be difficult. Insertion: When an extra sound is added between two others. This also occurs in the English plural rule: when the plural morpheme z is added to "bus," "bus-z" would be unpronounceable for most English speakers, so a short vowel (the schwa, [ə]) is inserted between the [s] and the [z]. Deletion: When a sound, such as a stress-less syllable or a weak consonant, is not pronounced; for example, most American English speakers do not pronounce the [d] in "handbag". Rule Ordering According to Jensen, when the application of one particular rule generates a phonological or morphological form that triggers an altogether different rule, resulting in an incorrect surface form, rule ordering is required. Types of Rule Ordering Given two rules, A and B, if we assume that both are equally valid rules, then their ordering will fall into one of the following categories: Feeding: the application of A creates the opportunity for B to apply. Bleeding: the application of A prevents B from being able to apply. Counterfeeding: the application of B creates the opportunity for A to apply. Counterbleeding: the application of B prevents A from being able to apply. Derivations When a distinct order between two rules is required, a derivation must be shown. The derivation must consist of a correct application of rule ordering that proves the phonetic representation to be possible, as well as a counterexample that proves that, given the opposite ordering, an incorrect phonetic representation will be generated. 
Example Derivation Below is an example of a derivation of rule ordering in Russian as presented by Jensen: Given the following rules, with rule 1 applying before rule 2: 1. l → ∅ / C ___ # (l-Deletion) 2. [−sonorant] → [−voice] / ___ # (Final Devoicing) Correct Derivation: /#greb+l#/ (Underlying Representation) greb (Application of l-Deletion) grep (Application of Final Devoicing) [grep] (Correct Phonetic Representation) Incorrect Derivation: /#greb+l#/ (Underlying Representation) —— (Final Devoicing cannot apply, since /b/ is not yet word-final) greb (Application of l-Deletion) *[greb] (Incorrect Phonetic Representation) Expanded Notation On their own, phonological rules are intended to be comprehensive statements about sound changes in a language. However, languages are rarely uniform in the way they change these sounds. For a formal analysis, it is often required to implement notation conventions in addition to those previously introduced to account for the variety of changes that occur as simply as possible. Subscripts: Indicate the number of occurrences of a phoneme type. Cn indicates that n or more consonants occur, and Vn indicates that n or more vowels occur, where n ≥ 0. Word Boundaries: Indicate the left and right edges of a complete word, represented with a hash sign. For example, in #cat#, the beginning and end hash signs indicate the respective beginning and end of the word "cat". { } (Curly Braces): Indicate a logical-disjunction relationship of two expressions. For example, the two expressions ABD and AED can be written with curly braces as A{B, E}D: A is followed by either B or E and then D. ( ) (Parentheses): Indicate a logical-disjunction relationship of two expressions and serve as an abbreviated version of the curly-braces notation, while maintaining the same disjunctive function. For example, the two expressions ABD and AD can be written with parentheses as A(B)D: B is optionally permitted to come between A and D. < > (Angled Brackets): Indicate a conditional relationship within a set. For example, in Turkish vowel harmony, all vowels take on the [± back] value of the vowel that precedes them, regardless of the number of intervening consonants; if a vowel is [+high], it will also take on the [± round] value of the preceding vowel, regardless of the number of intervening consonants. See also Alternation (linguistics) Sound change Notes References Citations Sources Books cited Phonology
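The effect of ordering can also be sketched computationally. The short Python fragment below treats each rule as a rewrite operation over a schematic segment string and replays the Russian derivation above in both orders; the segment inventory, the regular-expression encoding, and the function names are illustrative assumptions, not Jensen's own formalism.

```python
import re

# l-Deletion: l -> 0 / C ___ #   (drop word-final /l/ after a consonant)
def l_deletion(form: str) -> str:
    return re.sub(r"([bdgptkvzsmnr])l#", r"\1#", form)

# Final Devoicing: [-sonorant] -> [-voice] / ___ #   (devoice a word-final obstruent)
def final_devoicing(form: str) -> str:
    devoice = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}
    return re.sub(r"([bdgzv])#", lambda m: devoice[m.group(1)] + "#", form)

def derive(underlying: str, rules) -> str:
    """Apply the rules in the given order, printing each intermediate form."""
    form = underlying
    for name, rule in rules:
        form = rule(form)
        print(f"  {name}: {form}")
    return form

underlying = "#grebl#"  # schematic stand-in for the underlying /#greb+l#/

print("Correct ordering (l-Deletion feeds Final Devoicing):")
derive(underlying, [("l-Deletion", l_deletion), ("Final Devoicing", final_devoicing)])
# ends in #grep#, matching the attested surface form [grep]

print("Reversed ordering (the feeding relationship is missed):")
derive(underlying, [("Final Devoicing", final_devoicing), ("l-Deletion", l_deletion)])
# Final Devoicing finds no word-final obstruent while /l/ is still in place,
# so the derivation ends in the unattested *[greb]
```

Run in either order, the printout mirrors the two derivations given above: only the ordering in which l-Deletion applies first yields [grep].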
Phonological rule
Mathematics
2,023
7,658,091
https://en.wikipedia.org/wiki/Microprocessor%20complex%20subunit%20DGCR8
The microprocessor complex subunit DGCR8 (DiGeorge syndrome critical region 8) is a protein that in humans is encoded by the DGCR8 gene. In other animals, particularly the common model organisms Drosophila melanogaster and Caenorhabditis elegans, the protein is known as Pasha (partner of Drosha). It is a required component of the RNA interference pathway. Function The subunit DGCR8 is localized to the cell nucleus and is required for microRNA (miRNA) processing. It binds to the other subunit Drosha, an RNase III enzyme, to form the microprocessor complex that cleaves a primary transcript known as pri-miRNA to a characteristic stem-loop structure known as a pre-miRNA, which is then further processed to miRNA fragments by the enzyme Dicer. DGCR8 contains an RNA-binding domain and is thought to bind pri-miRNA to stabilize it for processing by Drosha. DGCR8 is also required for some types of DNA repair. Removal of UV-induced DNA photoproducts, during transcription-coupled nucleotide excision repair (TC-NER), depends on JNK phosphorylation of DGCR8 on serine 153. While DGCR8 is known to function in microRNA biogenesis, this activity is not required for DGCR8-dependent removal of UV-induced photoproducts. Nucleotide excision repair is also needed for repair of oxidative DNA damage due to hydrogen peroxide (H2O2), and DGCR8-depleted cells are sensitive to H2O2. References Further reading MicroRNA RNA interference DNA repair
Microprocessor complex subunit DGCR8
Biology
349
19,123,538
https://en.wikipedia.org/wiki/XL1
XL1 is the second solo album by Buzzcocks frontman Pete Shelley. It was released in 1983 and reached number 42 in the UK Albums Chart, remaining in that listing for four weeks. The single "Telephone Operator" charted at No. 66 in the UK Singles Chart, making it his biggest single release there. The original release was packaged with a computer program for the ZX Spectrum which displayed lyrics and graphics in time with the music. XL1 had a different running order in the US and contained an edited version of "Many a Time". The original UK album was reissued on CD by Grapevine in 1999 and by Western Songs in 2006, both including two B-side "dub" mixes as bonus tracks. Track listing All tracks composed by Pete Shelley. UK version Side one "Telephone Operator" – 3:14 "If You Ask Me (I Won't Say No)" – 4:20 "What Was Heaven?" – 5:04 "You Know Better Than I Know" – 5:04 "Twilight" – 3:08 Side two "(Millions of People) No One Like You" – 4:07 "Many a Time" – 6:41 "I Just Wanna Touch" – 3:04 "You & I" – 3:00 "XL1" – 3:27 Z X Spectrum Code 1999/2006 CD bonus tracks "Telephone Operator/Many a Time (Dub)" – 13:13 "If You Ask Me/No One Like You (Dub)" – 5:46 UK XL1 + Dub Mix Album cassette Side one "Telephone Operator" – 3:14 "If You Ask Me (I Won't Say No)" – 4:20 "What Was Heaven?" – 5:04 "You Know Better Than I Know" – 5:04 "Twilight" – 3:08 "(Millions of People) No One Like You" – 4:07 "Many a Time" – 6:41 "I Just Wanna Touch" – 3:04 "You and I" – 3:00 "XL1" – 3:27 Side two (dub mixes) "Homosapien (Dub)" "I Don't Know What It Is / Witness The Change (Dub)" "Telephone Operator / Many a Time (Dub)" "If You Ask Me (I Won't Say No) / No One Like You (Dub)" Z X Spectrum Code US version Side one "Telephone Operator" – 3:15 "Many a Time" – 4:18 "I Just Wanna Touch" – 2:54 "You Know Better Than I Know" – 4:48 "XL1" – 3:25 Side two "(Millions of People) No One Like You" – 4:05 "If You Ask Me (I Won't Say No)" – 4:20 "You and I" – 3:01 "What Was Heaven?" – 5:05 "Twilight" – 3:12 Personnel Musicians Pete Shelley Barry Adamson Jim Russell Martin Rushent Technical Martin Rushent – co-producer Pete Shelley – co-producer Joey – computer visuals Mike Prior – photography Bruno Tilley – cover Charts References 1983 albums Albums produced by Martin Rushent Island Records albums Pete Shelley albums Vinyl data
XL1
Engineering
664
7,529,770
https://en.wikipedia.org/wiki/Screw%20conveyor
A screw conveyor or auger conveyor is a mechanism that uses a rotating helical screw blade, called a "flighting", usually within a tube, to move liquid or granular materials. They are used in many bulk handling industries. Screw conveyors in modern industry are often used horizontally or at a slight incline as an efficient way to move semi-solid materials, including food waste, wood chips, aggregates, cereal grains, animal feed, boiler ash, meat, bone meal, municipal solid waste, and many others. The first type of screw conveyor was the Archimedes' screw, used since ancient times to pump irrigation water. They usually consist of a trough or tube containing either a spiral blade coiled around a shaft, driven at one end and held at the other, or a "shaftless spiral", driven at one end and free at the other. The rate of volume transfer is proportional to the rotation rate of the shaft. In industrial control applications, the device is often used as a variable rate feeder by varying the rotation rate of the shaft to deliver a measured rate or quantity of material into a process, as sketched in the example below. Screw conveyors can be operated with the flow of material inclined upward. When space allows, this is a very economical method of elevating and conveying. As the angle of inclination increases, the capacity of a given unit rapidly decreases. The rotating part of the conveyor is sometimes called simply an auger. In agriculture The "grain auger" is used in agriculture to move grain from trucks, grain carts, or grain trailers into grain storage bins (from where it is later removed by gravity chutes at the bottom). A grain auger may be powered by an electric motor; a tractor, through the power take-off; or sometimes an internal combustion engine mounted on the auger. The helical flighting rotates inside a long metal tube, moving the grain upwards. On the lower end, a hopper receives grain from the truck or grain cart. A chute on the upper end guides the grain into the destination location. The modern grain auger of today's farming communities was invented by Peter Pakosh. His grain mover employed a screw-type auger with a minimum of moving parts, a totally new application for this specific use. At Massey Harris (later Massey Ferguson), young Pakosh approached the design department in the 1940s with his auger idea, but was scolded and told that his idea was unimaginable and that once the auger aged and bent, the metal-on-metal contact would, according to a head Massey designer, "start fires all across Canada". Pakosh, however, went on to design and build a first prototype auger in 1945, and eight years later started selling tens of thousands under the 'Versatile' name, making it the standard for modern grain augers. A specialized form of grain auger is used to transfer grain into a seed drill and is usually quite a lot smaller in both length and diameter than the augers used to transfer grain to or from a truck, grain cart or bin. This type of auger is known as a "drill fill". Grain augers with a small diameter, regardless of the use they are put to, are often called "pencil augers". Centerless augers are particularly popular in industrial animal farming facilities, where the primary application is distributing animal feed from a central storage location to individual or group feeding devices. The flexible nature of the auger wire allows feed or other materials to change elevation and move at angles. The first centerless auger was patented by Eldon Hostetler and Chore-Time Equipment in the context of this application. 
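As a rough illustration of the variable-rate-feeder idea mentioned above, the sketch below estimates throughput from shaft speed using the proportionality between rotation rate and volume moved. All dimensions, the fill factor and the bulk density are hypothetical placeholder values, not figures from this article or from any manufacturer's data.

```python
import math

# Hypothetical screw-feeder geometry and material (placeholder values)
outer_diameter_m = 0.20   # flighting outside diameter
shaft_diameter_m = 0.06   # centre pipe diameter
pitch_m          = 0.20   # axial advance of the flighting per revolution
fill_factor      = 0.30   # fraction of the cross-section actually filled
bulk_density     = 750.0  # kg/m^3, e.g. a grain-like material

# Volume displaced per revolution = annular cross-section x pitch x fill factor
annulus_m2 = math.pi / 4.0 * (outer_diameter_m**2 - shaft_diameter_m**2)
volume_per_rev_m3 = annulus_m2 * pitch_m * fill_factor

def throughput_t_per_h(rpm: float) -> float:
    """Mass throughput in tonnes/hour; directly proportional to shaft speed."""
    return volume_per_rev_m3 * rpm * 60.0 * bulk_density / 1000.0

def rpm_for_target(tonnes_per_hour: float) -> float:
    """Shaft speed a variable-rate feeder would need for a target delivery rate."""
    return tonnes_per_hour * 1000.0 / (volume_per_rev_m3 * 60.0 * bulk_density)

print(f"Throughput at 60 rpm: {throughput_t_per_h(60):.2f} t/h")
print(f"Speed needed for 5 t/h: {rpm_for_target(5):.1f} rpm")
```

In practice, capacity also depends on incline, material properties and trough loading, so real sizing relies on manufacturers' capacity tables rather than a calculation this simple.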
Other uses Various other applications of the screw or auger conveyor include its use in snowblowers, to move snow towards an impeller, where it is thrown into the discharge chute. Combine harvesters use both enclosed and open augers to move the unthreshed crop into the threshing mechanism and to move the grain into and out of the machine's hopper. Ice resurfacers use augers to remove loose ice particles from the surface of the ice. An auger is also a central component of an injection molding machine. An auger is used in some rubbish compactors to push the rubbish into a lowered plate at one end for compaction. Augers are also present in food processing. They are a tool of choice in powder processing when it comes to conveying bulk solids (powders, pellets and the like) precisely. In a conventional meat grinder, chunks of meat are led by the auger through a spinning blade and a holed plate. This method emulsifies the fat in beef to soften hamburger patties and is also used to produce a wide variety of sausages and loaves. Augers are also used to force food products through dies to produce pellets. These are then processed further to produce products such as bran flakes. Augers are also used in oil fields as a method of transporting rock cuttings away from the shakers to skips. Augers are also used in some types of pellet stoves and barbecue grills, to move fuel from a storage hopper into the firebox in a controlled manner. Augers are often used in machining, wherein the machine tools may include an auger to direct the swarf (scrap metal or plastic) away from the workpiece. Screw conveyors can also be found in wastewater treatment plants to evacuate solid waste from the treatment process. The amphibious infantry fighting vehicle BMP-3 uses an auger-type propulsion unit in water. Olds elevator The Olds elevator is a variant of a screw conveyor developed by Australian engineer Peter Olds in 2002. Rather than rotating a central screw blade, it contains a stationary screw within a rotating casing that scoops surrounding material into its base. Following similar principles to the conventional screw conveyor, the Olds elevator can lift bulk materials efficiently. Since its invention, it has been assessed as a viable system for industrial uses by a number of academics. See also Spiral separator Screw-propelled vehicle Archimedes' screw References Bulk material handling Industrial equipment Agricultural machinery Grain production
Screw conveyor
Engineering
1,267
59,976,010
https://en.wikipedia.org/wiki/Darwinian%20threshold
Darwinian threshold or Darwinian transition is a term introduced by Carl Woese to describe a transition period during the evolution of the first cells when genetic transmission moves from a predominantly horizontal mode to a vertical mode. The process starts when the ancestors of the Last Universal Common Ancestor (the LUCA) are no longer primarily dependent on horizontal (or lateral) gene transfer (HGT) and become individual entities with vertical heredity upon which natural selection is effective. After this transition, life is characterized by genealogies that have a modern tree-like phylogeny. Before the Darwinian threshold The Last Universal Common Ancestor is often considered to be an already complex organism with a DNA-based genome, a complex informational flow and an efficient metabolism, but some authors, like Carl Woese, believe instead that the LUCA was not a discrete entity but rather a diverse community of cells that survived and evolved as a biological unit. Carl Woese indicated that most likely there existed high mutation rates and small genomes. Also present were small proteins and larger imprecisely translated "statistical proteins". Entities in which translation had not yet developed to the point that proteins of the modern type could arise, have been termed “progenotes,” and the era during which these were the most advanced forms of life, the “progenote era”. These organisms or biological entities, these progenotes (or ribocytes), had RNA as informational molecule instead of DNA. RNA is capable of both catalysis and replication and could have been central to the origins of heredity and life itself. It has been proposed that the initial molecular events were carried out by transfer RNAs (tRNAs). It is hypothesized that structured tRNAs could have provided amino acids during a process called self-translation of a single extended tRNA strand. Compartmentalization with membranes was not yet completed and translation of proteins was not precise. Not every progenote had its own metabolism; different metabolic steps were present in different progenotes. Therefore, it is assumed that there existed a community of sub-systems that started to cooperate collectively and culminated in the LUCA. After the Darwinian threshold Most scientists place the LUCA at the root of the tree of life. From this root depart two Prokaryotic Domains: the Bacteria and the Archaea. Just after this first split, one of the branches, going towards the Archaea, splits again and gives rise to a third branch which is that of the Eukaryotes so that now there are three Domains of life. Carl Woese thought that even during the era around the origin of the LUCA, the root and the first branches were very blurred since the cells were not very well defined yet and HGT was still quite important. Some authors maintain LUCA was a mesophilic eukaryote. According to these authors the Domains that derived from LUCA through a process of reductive evolution or "streamlining" were Prokaryotes; mesophilic and thermophilic Bacteria and thermophilic Archaea. The term "prokaryote" should therefore be abandoned, since it suggests that "prokaryotes" preceded "eukaryotes" in their evolution from LUCA towards complexity. See also Horizontal gene transfer Horizontal gene transfer in evolution Evolution Carl Woese Last universal common ancestor References Origin of life Evolutionary biology Genetic genealogy Phylogenetics Hypothetical life forms Most recent common ancestors
Darwinian threshold
Biology
700
935,041
https://en.wikipedia.org/wiki/Urbanism
Urbanism is the study of how inhabitants of urban areas, such as towns and cities, interact with the built environment. It is a direct component of disciplines such as urban planning, a profession focusing on the design and management of urban areas, and urban sociology, an academic field which studies urban life. Many architects, planners, geographers, and sociologists investigate the way people live in densely populated urban areas. There is a wide variety of different theories and approaches to the study of urbanism. However, in some contexts internationally, urbanism is synonymous with urban planning, and urbanist refers to an urban planner. The term urbanism originated in the late nineteenth century with the Spanish civil engineer Ildefons Cerdà, whose intent was to create an autonomous activity focused on the spatial organization of the city. Urbanism's emergence in the early 20th century was associated with the rise of centralized manufacturing, mixed-use neighborhoods, social organizations and networks, and what has been described as "the convergence between political, social and economic citizenship". Urbanism can be understood as placemaking and the creation of place identity at a citywide level, however as early as 1938 Louis Wirth wrote that it is necessary to stop 'identify[ing] urbanism with the physical entity of the city', go 'beyond an arbitrary boundary line' and consider how 'technological developments in transportation and communication have enormously extended the urban mode of living beyond the confines of the city itself.' Concepts Network-based theories Gabriel Dupuy applied network theory to the field of urbanism and suggests that the single dominant characteristic of modern urbanism is its networked character, as opposed to segregated conceptions of space (i.e. zones, boundaries and edges). Stephen Graham and Simon Marvin argue that we are witnessing a post-urban environment where decentralized, loosely connected neighborhoods and zones of activity assume the former organizing role played by urban spaces. Their theory of splintering urbanism involves the "fragmentation of the social and material fabric of cities" into "cellular clusters of globally connected high-service enclaves and network ghettos" driven by electronic networks that segregate as much as they connect. Dominique Lorrain argues that the process of splintering urbanism began towards the end of the 20th century with the emergence of the gigacity, a new form of a networked city characterised by three-dimensional size, network density and the blurring of city boundaries. Manuel Castells suggested that within a network society, "premium" infrastructure networks (high-speed telecommunications, "smart" highways, global airline networks) selectively connect together the most favored users and places and bypass the less favored. Graham and Marvin argue that attention to infrastructure networks is reactive to crises or collapse, rather than sustained and systematic, because of a failure to understand the links between urban life and urban infrastructure networks. Other modern theorists Douglas Kelbaugh identifies three paradigms within urbanism: New Urbanism, Everyday Urbanism, and Post-Urbanism. Paul L. Knox refers to one of many trends in contemporary urbanism as the "aestheticization of everyday life". Alex Krieger states that urban design is less a technical discipline than a mind-set based on a commitment to cities. 
Other contemporary urbanists such as Edward Soja and Liz Ogbu focus on urbanism as a field for applying principles of community building and spatial justice. See also New urbanism Ecological urbanism, which extends concepts of Landscape urbanism Feminist urbanism Green urbanism Landscape urbanism, an urbanism where cities are seen through the lens of landscape architecture and ecology Latino urbanism Principles of Intelligent Urbanism Sustainable Urbanism Unitary urbanism, a critique of urbanism as a technology of power by the situationists Urban economics, the application of economic models and tools to analyse urban issues such as crime, housing and public transit Urban geography Urbanate, a living environment envisioned by the Technocracy movement Urban vitality World Urbanism Day Endnotes External links International Forum on Urbanism Urban planning 2010s fads and trends 2020s fads and trends
Urbanism
Engineering
827
39,597,334
https://en.wikipedia.org/wiki/Edmond%20Bonan
Edmond Bonan (born 27 January 1937 in Haifa, Mandatory Palestine) is a French mathematician, known particularly for his work on special holonomy. Although not a single example of a G2 manifold or a Spin(7) manifold would be discovered until thirty years later, Edmond Bonan nonetheless made a useful contribution by showing in 1966 that such manifolds would carry at least a parallel 4-form and would necessarily be Ricci-flat, making them candidates for string theory. Biography After completing his undergraduate studies at the École polytechnique, Bonan went on to write his 1967 University of Paris doctoral dissertation in differential geometry under the supervision of André Lichnerowicz. From 1968 to 1997, he held the post of lecturer and then professor at the University of Picardie Jules Verne in Amiens, where he currently holds the title of professor emeritus. Early in his career, from 1969 to 1981, he also lectured at the École Polytechnique. See also G2 manifold G2 structure Spin(7) manifold Holonomy Quaternion-Kähler manifold Calibrated geometry Hypercomplex manifold Hyperkähler manifold Uniform polyhedron References 20th-century French mathematicians 21st-century French mathematicians Differential geometers Topologists Relativity theorists Academic staff of the University of Paris École Polytechnique alumni 1937 births Living people
Edmond Bonan
Physics,Mathematics
274
55,217,025
https://en.wikipedia.org/wiki/NGC%204630
NGC 4630 is an irregular galaxy located about 54 million light-years away in the constellation of Virgo. NGC 4630 was discovered by astronomer William Herschel on February 2, 1786. NGC 4630 is part of the Virgo II Groups which form a southern extension of the Virgo Cluster. See also List of NGC objects (4001–5000) NGC 1427A References External links Irregular galaxies Virgo (constellation) 4630 42688 7871 Astronomical objects discovered in 1786
NGC 4630
Astronomy
102
38,628,033
https://en.wikipedia.org/wiki/109%20Tauri
109 Tauri, or n Tauri, is a single, yellow-hued star in the zodiac constellation of Taurus. It has an apparent visual magnitude of 4.96 and is faintly visible to the naked eye. The star has an annual parallax shift of , putting it around 247 light years from the Sun. At that distance, the visual magnitude is diminished by an extinction of 0.24 due to interstellar dust. It is moving further from the Sun with a heliocentric radial velocity of +19 km/s. This is an evolved giant star with a stellar classification of G8 III, having consumed the hydrogen at its core and moved off the main sequence. At the age of 600 million years, it has become a red clump giant, indicating that it is on the horizontal branch and is generating energy through helium fusion at its core. The star has an estimated 2.47 times the mass of the Sun and has expanded to around eight times the Sun's radius. It is radiating about 60 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,035 K. References G-type giants Horizontal-branch stars Taurus (constellation) Tauri, n Durchmusterung objects Tauri, 109 034559 024822 1739
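The distance figure quoted above follows from the standard parallax relation d[pc] = 1/p[arcsec]. In the sketch below, the parallax value is a placeholder chosen only to reproduce a distance of roughly 247 light-years, since the measured value is not reproduced in this text.

```python
LY_PER_PARSEC = 3.2616  # light-years in one parsec

def distance_light_years(parallax_mas: float) -> float:
    """Convert an annual parallax in milliarcseconds to a distance in light-years."""
    parallax_arcsec = parallax_mas / 1000.0
    distance_pc = 1.0 / parallax_arcsec   # d [pc] = 1 / p [arcsec]
    return distance_pc * LY_PER_PARSEC

# A parallax of about 13.2 mas corresponds to roughly 247 light-years
print(f"{distance_light_years(13.2):.0f} ly")
```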
109 Tauri
Astronomy
268
54,898,827
https://en.wikipedia.org/wiki/NGC%204340
NGC 4340 is a double-barred lenticular galaxy located about 55 million light-years away in the constellation of Coma Berenices. NGC 4340 was discovered by astronomer William Herschel on March 21, 1784. NGC 4340 is a member of the Virgo Cluster. NGC 4340 is generally thought to be in a pair with the galaxy NGC 4350. Physical characteristics NGC 4340 has a small inner bar embedded in a luminous stellar nuclear ring. Even though the ring is luminous, there are no star-forming regions. Instead, the ring is made of mostly old stars in a gas-poor environment. The color of the ring is the same as the color of the surrounding bulge suggesting that it is probably an old, “fossil” remnant of an earlier episode of star-formation. Crossing the inner ring, there is a larger primary bar with ansae at the ends. Careful inspection shows that the two bars are slightly misaligned, which suggests they may be independently rotating. The larger primary bar connects to another ring that surrounds the central regions of the galaxy. Supernova One supernova has been observed in NGC 4340: SN1977A (type unknown, mag. 16.2) was discovered by Piotr Grigor'evich Kulikovsky on 27 January 1977. See also List of NGC objects (4001–5000) References External links Barred lenticular galaxies Coma Berenices 4340 Virgo Cluster 40245 7467 17840321 Discoveries by William Herschel
NGC 4340
Astronomy
310
37,887,735
https://en.wikipedia.org/wiki/RIBA%20Product%20Selector
RIBA Product Selector was a directory of construction product manufacturers and advisory organisations used by architects and other construction industry professionals to specify building products. The product was retired in July 2020 and has now been replaced by NBS Source. Background RIBA Product Selector was a two-volume hardback directory of construction product manufacturers, service providers and advisory organisations for specifying building materials, published by the commercial arm of the RIBA, RIBA Enterprises (now known as NBS Enterprises Ltd). It was published on an annual basis and distributed to construction professionals who registered. It had an ABC-audited circulation for the 2012 edition of 20,077 and was categorised according to the CI/SfB classification system. It contained approximately 700 structured technical pages of building product information organised according to the BS 4940 structure for technical literature. RIBA Product Selector had a corresponding website, a building products library aimed at UK construction industry professionals looking to research and source products, product catalogues, technical documents and contact information from approximately 10,000 manufacturers, suppliers, distributors and trade associations. It also contained detailed product information linked to the National Building Specification (NBS), as well as case studies and images of key construction products, which it charged manufacturers to list. In July 2020, RIBA Product Selector was replaced by NBS Source, which merged three of the NBS's flagship products: RIBA Product Selector, NBS Plus and NBS National BIM Library, to create a single source for building product information with improved search functionality, structured data and synchronisation to NBS's flagship product for specification writing, NBS Chorus. References www.abc.org.uk www.ribaproductselector.com External links https://source.thenbs.com/ https://www.ribaproductselector.com Royal Institute of British Architects Architecture websites
RIBA Product Selector
Engineering
398
4,022,664
https://en.wikipedia.org/wiki/List%20of%20Chinese%20administrative%20divisions%20by%20life%20expectancy
Global Data Lab (2019–2022) This is a list of the first-level administrative divisions of the People's Republic of China (P.R.C.) in order of their life expectancy in 2019–2022, including all provinces and autonomous regions, but not including special administrative regions. Chinese Center for Disease Control and Prevention (2019) Life expectancy in Chinese regions in 2019 according to an article in the journal CCDC Weekly published by CCDC: World Bank Group (2022) Life expectancy in the special administrative regions of China according to the World Bank Group: Charts See also List of Chinese cities by life expectancy List of Asian countries by life expectancy List of countries by life expectancy Administrative divisions of China Demographics of China References National Bureau of Statistics of the People's Republic of China The World FactBook List of Chinese cities by life expectancy Life expectancy Life expectancy Life expectancy China, life expectancy China China health-related lists
List of Chinese administrative divisions by life expectancy
Biology
200
216,410
https://en.wikipedia.org/wiki/Daisyworld
Daisyworld (originally "Daisy World"), a term of reference in evolutionary and population ecology, derives from research on aspects of "coupling" between an ecosphere's biota and its planetary environment, in particular via mathematical modeling and computer simulation, research dating to a series of 1982-1983 symposia presentations and primary research reports by James E. Lovelock and colleagues aimed at addressing the plausibility of the Gaia hypothesis. Also later referred to as a modeling of geosphere–biosphere interactions, Lovelock's 1983 reports focused on a hypothetical planet with biota (in the original work, daisies) whose growth fluctuates as the planet's exposure to its sun's rays fluctuates, i.e., a pair of daisy varieties whose differing colours drive a difference in interaction with their environment (in particular, the sun). Reference to Daisyworld types of experiments has come more broadly to cover extensions of that early work, and further hypothetical systems involving similar and unrelated species. More specifically, given the impossibility of mathematically modeling the interactions of the full array of the biota of Earth with the full array of their environmental inputs, Lovelock introduced the idea of (and a mathematical models and simulations approach to) a far simpler ecosystem—a planet at the lowest limit of its biota orbiting a star whose radiant energy was slowly changing—as a means to mimic a fundamental element of the interaction of all of the Earth's biota with the Sun. In the original 1983 works, Daisyworld made a wide variety of simplifying assumptions, and had white and black daisies as its only organisms, which were presented for their abilities to reflect or absorb light, respectively. The original simulation modeled the two daisy populations—which combined to determine the planet's overall reflective power (fraction of incident radiation reflected by its surface)—and Daisyworld's surface temperature, as a function of changes in the hypothetical star's luminosity; in doing so Lovelock demonstrated that the surface temperature of the simple Daisyworld system remained nearly constant over a broad range of solar fluctuations, a result of shifts in the populations of the two plant varieties. Synopsis, 1983 simulation Wood and colleagues, in a 2008 review citing the two 1983 Lovelock primary research papers on Daisyworld (still Daisy World or the same in lower case, at that point), describe it as being formulated in response to early criticism of Lovelock's Gaia hypothesis, specifically, being a model "invented to demonstrate that planetary self-regulation can emerge automatically from physically realistic feedback between life and its environment, without any need for foresight or planning on the part of the organisms". Given the impossibility of fully representing the "coupling" of the whole of the Earth's biota and its environment, the hypothetical model is an imaginary grey world orbiting, at a similar distance to the Earth, a star like our Sun that gets brighter with time. The environment... is reduced to one variable, temperature, and the biota consist of two types of life, black and white daisies, which share the same optimum temperature for growth and limits to growth. The soil of Daisyworld is sufficiently well watered and laden with nutrients for temperature alone to determine the growth rate of the daisies. The planet has a negligible atmospheric greenhouse, so its surface temperature is simply determined by... 
[the hypothetical star's] luminosity and its [the planet's] overall albedo [reflective power, the fraction of incident radiation reflected by the surface], which is, in turn, influenced by the coverage of the two daisy types. This hypothetical construction produces, in its mathematical modeling, a nonlinear system "with interesting self-regulating properties". Purpose and impact The purpose of the model is to demonstrate that feedback mechanisms can evolve from the actions or activities of self-interested organisms, rather than through classic group selection mechanisms. Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies. The colour of the daisies influences the albedo of the planet such that black daisies absorb light and warm the planet, while white daisies reflect light and cool the planet. Competition between the daisies (based on temperature-effects on growth rates) leads to a balance of populations that tends to favour a planetary temperature close to the optimum for daisy growth. Lovelock sought to demonstrate the stability of Daisyworld by making its sun evolve along the main sequence, taking it from low to high solar constant. This perturbation of Daisyworld's receipt of solar radiation caused the balance of daisies to gradually shift from black to white but the planetary temperature was always regulated back to this optimum (except at the extreme ends of solar evolution). This situation is very different from the corresponding abiotic world, where temperature is unregulated and rises linearly with solar output. Criticism Daisyworld was designed to refute the idea that there was something inherently mystical about the Gaia hypothesis that Earth's surface displays homeostatic and homeorhetic properties similar to those of a living organism; specifically, thermoregulation was addressed. Wood and colleagues noted in 2008 that a key element in the hypothetical construct of Daisyworld was that the species of focus,"the daisies alter the same environmental variable (temperature) in the same direction at the local level and the global level. Hence what is selected for at the individual level is directly linked to its global effects. This makes the original model a special case (and it is one that is not particularly prevalent in the real world). Evolutionary biologists often criticize the original model for this reason." The Gaia hypothesis has otherwise attracted a substantial amount of criticism from scientists, e.g., Richard Dawkins, who argued that planet-level thermoregulation was impossible without planetary natural selection, which might involve evidence of dead planets that did not thermoregulate. W. Ford Doolittle rejected the notion of planetary regulation because it seemed to require a "secret consensus" among organisms, thus some sort of inexplicable purpose on a planetary scale. Others countered the criticism that some "secret consensus" would be required for planetary regulation, suggesting that thermoregulation of a planet beneficial to the two species arises naturally. Later criticism of Daisyworld centers on the fact that although it is often used as an analogy for Earth, the original simulation leaves out many important details of the true Earth system. For example, the hypothetical system requires an ad-hoc death rate (γ) to sustain homeostasis, and does not take into account the difference between species-level phenomena and individual level phenomena. 
Detractors of the simulation believed inclusion of these details would cause the system to become unstable, making it a false analogy. These criticisms were countered by Timothy Lenton and James Lovelock in 2001, who argued that including further factors can improve climate regulation in later versions of Daisyworld. Subsequent research Later versions of Daisyworld, identifying the research area as "tutorial modelling of geosphere–biosphere interactions", introduced a range of grey daisies, as well as populations of grazers and predators, and found that these further increased the stability of the homeostasis. More recently, other research, modeling real biochemical cycles of Earth, and using various types of organisms (e.g. photosynthesisers, decomposers, herbivores and primary and secondary carnivores), also claims to have produced Daisyworld-like regulation and stability, in support of ideas related to planetary biological diversity. This enables nutrient recycling within a regulatory framework derived by natural selection amongst species, where one being's harmful waste becomes low-energy food for members of another guild. For instance, research on the Redfield ratio of nitrogen to phosphorus suggests that local biotic processes might regulate global systems. Later extensions of the Daisyworld simulations, which included rabbits, foxes and other species, led to the proposal that the larger the number of species, the greater the thermoregulatory improvement for the entire planet, results suggesting that such a hypothetical system was robust and stable even when perturbed. Daisyworld simulations where environments were stable gradually became less diverse over time; in contrast, gentle perturbations led to bursts of species richness, lending support to the idea that biodiversity is valuable. This finding was supported by a 1994 primary research report on species composition, dynamics, and diversity in successional and native grasslands in Minnesota by David Tilman and John A. Downing, which concluded that "primary productivity in more diverse plant communities is more resistant to, and recovers more fully from, a major drought". They go on to add that their "results support the diversity stability hypothesis but not the alternative hypothesis that most species are functionally redundant". Relevance to Earth Because Daisyworld is so simplistic, having, for example, no atmosphere, no animals, only one species of plant life, and only the most basic population growth and death models, it should not be directly compared to Earth. This was stated very clearly by the original authors. Even so, it provided a number of useful predictions of how Earth's biosphere may respond to, for example, human interference. Later adaptations of Daisyworld (discussed above), which added many layers of complexity, still showed the same basic trends of the original model. One prediction of the simulation is that the biosphere works to regulate the climate, making it habitable over a wide range of solar luminosity. Many examples of these regulatory systems have been found on Earth. See also Gaia hypothesis Gaia philosophy SimEarth Further reading One review providing a 25-year retrospective of the original and subsequent related research. This work was cited as one of the two original 1983 publications by Lovelock, of the Daisyworld construct, by Wood et al (2008), op. cit. This work was cited as one of the two original 1983 publications by Lovelock, of the Daisyworld construct, by Wood et al (2008), op. cit. 
This is not the first report of Daisyworld, rather, it is a followup study designed to test a specific additional question. As described carefully by Wood et al., op. cit., "Watson and Lovelock [1983] reversed the sign of interaction between daisy color and planetary temperature by assuming that convection generated over the warm spots of the black daisy clumps generates white clouds above them. In this case the black daisies are still locally warmer than the white daisies, but both daisy types now cool the planet. Hence the black daisies always have a selective advantage over their white compatriots, which they drive to extinction. Yet planetary temperature is still regulated, albeit on the cold side of the optimum for growth. See also this author-presented web source of the full article. An interview presenting the history of several topics relevant to this article, from Lovelock's perspective (with respectful reference made to W.F. Doolittle's objections). A more recent, brief retrospective from Doolittle, on Gaia and related studies. References External links Online DaisyWorld simulator, with many options (HTML5/Javascript) Java Applet for Daisyworld on a 2D space Spatial Daisyworld Model Java Applet and explanation of Daisyworld with evolution A Unix/X11 simulation of Daisyworld. Modeling the Gaia Hypothesis: DaisyWorld A test applet of a basic Daisyworld model using a 2D cellular automata. A NetLogo version of the Daisyworld model. Climate modeling Ecological experiments Homeostasis Articles containing video clips
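The two-daisy model summarised in the synopsis above lends itself to a very small numerical sketch. The version below follows the commonly reproduced Watson–Lovelock formulation (parabolic growth response, heat-redistribution term q, simple Euler integration); the specific parameter values and function names are assumptions taken from standard textbook reproductions, not from this article.

```python
SIGMA  = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
FLUX   = 917.0                   # solar flux constant of the model, W m^-2
Q      = 2.06e9                  # heat-redistribution parameter, K^4
GAMMA  = 0.3                     # daisy death rate
ALBEDO = {"white": 0.75, "black": 0.25, "ground": 0.50}

def growth_rate(T_local):
    """Parabolic growth response, maximal at 295.5 K, zero outside roughly 278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - T_local) ** 2)

def equilibrate(luminosity, a_white=0.01, a_black=0.01, dt=0.05, steps=4000):
    """Integrate the two daisy populations to a steady state for a given luminosity."""
    for _ in range(steps):
        bare = max(0.0, 1.0 - a_white - a_black)
        A = (a_white * ALBEDO["white"] + a_black * ALBEDO["black"]
             + bare * ALBEDO["ground"])                   # planetary albedo
        Te4 = FLUX * luminosity * (1.0 - A) / SIGMA        # (effective temperature)^4
        T_white = (Q * (A - ALBEDO["white"]) + Te4) ** 0.25   # white patches run cooler
        T_black = (Q * (A - ALBEDO["black"]) + Te4) ** 0.25   # black patches run warmer
        a_white += dt * a_white * (bare * growth_rate(T_white) - GAMMA)
        a_black += dt * a_black * (bare * growth_rate(T_black) - GAMMA)
        a_white, a_black = max(a_white, 0.01), max(a_black, 0.01)  # keep a seed stock
    return a_white, a_black, Te4 ** 0.25

# Sweep the star's luminosity: the daisy mix shifts from black to white while the
# planetary temperature stays near the growth optimum over a broad range.
for tenths in range(6, 17):
    L = tenths / 10.0
    aw, ab, T = equilibrate(L)
    print(f"L={L:4.2f}  white={aw:4.2f}  black={ab:4.2f}  T={T - 273.15:5.1f} C")
```

With these placeholder settings the printed temperature stays within a few degrees of the growth optimum over much of the luminosity sweep, whereas the corresponding lifeless planet would simply warm monotonically with its star.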
Daisyworld
Biology
2,384
22,539,338
https://en.wikipedia.org/wiki/Communication%20rights
Communication rights involve freedom of opinion and expression, democratic media governance, media ownership and media control, participation in one's own culture, linguistic rights, rights to education, privacy, assembly, and self-determination. They are also related to inclusion and exclusion, and to the quality of and accessibility to means of communication. A "right to communicate" and "communication rights" are closely related, but not identical. The former is more associated with the New World Information and Communication Order debate, and points to the need for a formal legal acknowledgment of such a right, as an overall framework for more effective implementation. The latter emphasizes the fact that an array of international rights underpinning communication already exists, but many are often ignored and require active mobilization and assertion. History The concept of the right to communicate began in 1969 with Jean D’Arcy, a pioneer in French and European television in the 1950s and by 1969 Director of the United Nations Radio and Visual Services Division, where he was involved in international policy discussions arising out of the recent innovations in satellite global communications. He recognized that the communication rights relating to freedom of expression embodied in the U.N. Universal Declaration of Human Rights (UDHR) adopted in 1948 would need to be re-examined in the context of global, interactive communication between individuals and communities. He called for the recognition of a human right to communicate that would encompass earlier established rights. He thus was the first to link communication and universal human rights. His call was taken up by academics, policy experts, and public servants who evolved into the Right to Communicate Group, the many non-governmental and civil society organisations that made up the Platform for Co-operation on Communication and Democratisation, and the Communication Rights in the Information Society (CRIS) Campaign. The first broad-based debate on media and communication globally, limited mainly to governments, ran for a decade from the mid-1970s. Governments of the South, by then a majority in the UN, began voicing demands in UNESCO concerning media concentration, the flow of news, and 'cultural imperialism'. The MacBride Report (1981) studied the problem, articulating a general 'right to communicate'. The debate was compromised, however, by Cold War rhetoric, and fell apart after the US and the UK pulled out of UNESCO. The MacBride Report became unavailable until the World Association for Christian Communication (WACC) sponsored its republication in 1988. WACC held the secretariat of the CRIS Campaign 2000–05. Interest in the right to communicate languished during the 1980s as there was no mass movement to promote it, for the simple reason that few people had direct experience with interactive communication over global electronic networks. This situation changed dramatically in the 1990s with a cluster of innovations that included the Internet, the World Wide Web, search engines, availability of personal computers, and social networking. As more people participated in interactive communication and encountered the many challenges it raised in regard to such communication rights as freedom of speech, privacy, and freedom of information, they began to develop a growing consciousness of the importance of such rights to their ability to communicate. 
A result of this growing communicative consciousness is a renewed research interest in and political advocacy for a right to communicate (see references). From the 1990s onwards, NGOs and activists became increasingly active in a variety of communication issues, from community media, to language rights, to copyright, to Internet provision and free and open source software. These coalesced in a number of umbrella groups tackling inter-related issues, from which the pluralistic notion of communication rights began to take shape, this time from the ground up. More recently, the International Journal of Speech-Language Pathology published a special issue on communication rights stating "Communication rights address both "freedom of opinion and expression" and rights and freedoms "without distinction of … language" (United Nations, 1948)". The special issue addressed communication rights from four perspectives: (1) communication rights of all people; (2) communication rights of people with communication disabilities; (3) communication rights of children and (4) communication rights relating to language. The Universal Declaration of Communication Rights (International Communication Project, 2014) has been signed by over 10 000 people and states: "We recognise that the ability to communicate is a basic human right. We recognise that everyone has the potential to communicate. By putting our names to this declaration, we give our support to the millions of people around the world who have communication disorders that prevent them from experiencing fulfilling lives and participating equally and fully in their communities. We believe that people with communication disabilities should have access to the support they need to realise their full potential." Four pillars Each Pillar [of Communication Rights] relates to a different domain of social existence, experience and practice, in which communication is a core activity and performs key functions. The rationale for the four [pillars] is that each involves a relatively autonomous sphere of social action, yet depends on the others for achieving its ultimate goal - they are necessary interlocking blocks in the struggle to achieve communication rights. Action can be coherently pursued under each, often in collaboration with other social actors concerned with the area more generally, while bridges can and must be built to the other areas if the goal is to be achieved. Communicating in the public sphere "The role of communication and media in exercising democratic political participation in society." The broadcasting of fake and concocted news by different media in exchange for the financial favor of the state is, however, highly dangerous; this tendency has developed in the 21st century across nations, while the relevant legal provisions and their implementation remain weak and governed by the will of the state. Free and fair journalism does not mean publishing or broadcasting untrue and purposefully concocted news. Communication knowledge "The terms and means by which knowledge generated by society is communicated, or blocked, for use by different groups." Civil rights in communication "The exercise of civil rights relating to the processes of communication in society." Cultural rights in communication "The communication of diverse cultures, cultural forms and identities at the individual and social levels." Right to communicate vs. 
communication rights A ‘right to communicate’ and ‘communication rights’ are closely related, but not identical, in their history and usage. In the Cold War tensions of the 1970s and 1980s, the former became associated with the New World Information and Communication Order (NWICO) debate, thus, efforts within UNESCO to formulate such a right were abandoned. The latter emphasizes the fact that an array of international rights underpinning communication already exists, but many are too often ignored and require active mobilisation and assertion. While some, especially within the mass media sector, still see the right to communicate as a "code word" for state censorship, the technological innovations in interactive electronic, global communication of recent decades are seen by others as challenging the traditional mass media structures and formulations of communication rights values arising from them, thereby renewing the need to re-consider the need for a right to communicate. Notes References Birdsall, William. F. (2006). "A right to communicate as an open work." Media Development. 53(1): 41–46. d’Arcy, Jean. (1969). "Direct broadcast satellites and the right to communicate". EBU Review. 118(1969) 14–18; reprinted in L.S. Harms, J. Richstad, K. A. Kie. Editors. The Right to Communicate: Collected Papers. Honolulu: University of Hawaii at Manoa, 1977. Dakroury, Aiaa., Eid Mahmoud, & Yahya R. Kamalipour, (Eds.), (2009). The right to communicate: Historical hopes, global debates, and future premises. Dubuque, IA: Kendall Hunt. Fisher, D (1982) The Right to Communicate: A Status Report. Reports and Papers on Mass Communication, n° 94. Paris: Unesco, 1982. Fisher, D. (2002). A New Beginning. The Right to Communicate. Hicks, D. (2007). The Right to communicate: Past mistakes and future possibilities. Dalhousie Journal of Information and Management 3(1). McIver, W. Jr., Birdsall, W., & Rasmussen, M. (2003). The Internet and the right to communicate. First Monday 8, (12) Raboy, M. & Shtern, J. et al. (2010). Media divides: Communication rights and the right to communicate in Canada. "Introduction". "Histories, Contexts, and Controversies". UBC Press: Vancouver, BC, pp Further reading Padovani, Claudia; Calabrese, Andrew (ed.) Communication Rights and Social Justice: Historical Accounts of Transnational Mobilizations. Palgrave Macmillan, 2014. See also Communication Data transmission Cross-cultural communication Intercultural communication External links CRIS Campaign homepage Centre for Communication Rights Human communication Human rights
Communication rights
Biology
1,835
7,531,185
https://en.wikipedia.org/wiki/Outline%20of%20space%20exploration
The following outline is provided as an overview of and topical guide to space exploration. Space exploration – use of astronomy and space technology to explore outer space. Physical exploration of space is conducted both by human spaceflights and by robotic spacecraft. Essence of space exploration Space exploration Branches of space exploration Uncrewed spaceflight – Autonomous space travel without human History of space exploration Remote sensing of Earth Exploration of the Moon Apollo program Moon landings Robotic exploration of the Moon Exploration of Mercury Exploration of Venus Exploration of Mars Mars landings Mars rovers Mars Rotorcrafts Mars flyby Exploration of Jupiter Exploration of Saturn Exploration of Uranus Exploration of Neptune History of human spaceflight Project Mercury Project Gemini Apollo program Space Shuttle program Vostok program Voskhod program Soyuz program Shenzhou program List of human spaceflights List of Space Shuttle missions Spaceflight records Emergence of market forces in spaceflight Timeline of artificial satellites and space probes Timeline of astronauts by nationality Timeline of first orbital launches by country Timeline of rocket and missile technology Timeline of space exploration Timeline of space travel by nationality Timeline of spaceflight Timeline of the Space Race Timeline of Solar System exploration Space agencies List of government space agencies Space agencies capable of human spaceflight (as of January 2024) NASA (USA) CNSA (China) RFSA (Russia) Space agencies with full launch capability NASA (USA) RFSA (Russia) CNSA (China) ESA (Europe) JAXA (Japan) ISRO (India) ISA (Israel) KARI (South Korea) Active deep space missions and space stations International Space Station Europa Clipper (NASA) Tiangong space station (CNSA) Hakuto-R Mission 2 (ispace Inc.) 
Blue Ghost M1 (Firefly Aerospace) Chandrayaan-2 (ISRO) Hera (ESA) Chang'e 6 service module (CNSA) Tiandu-1 and 2 (CNSA) Queqiao relay satellite (CNSA) Queqiao-2 relay satellite (CNSA) ICUBE-Q (SUPARCO) Chang'e 5 service module (CNSA) DRO A/B (CAS) Chang'e 4 (CNSA) CAPSTONE (NASA) Danuri (KARI) Advanced Composition Explorer – NASA mission to observe solar wind Deep Space Climate Observatory – NOAA observatory for space weather Aditya-L1 (ISRO) Mars Odyssey (NASA) Mars Express – ESA satellite orbiting Mars Mars Reconnaissance Orbiter (NASA) Mars Science Laboratory – NASA rover to Mars MAVEN – NASA satellite orbiting Mars ExoMars Trace Gas Orbiter (ESA / Roscosmos) – Mars satellite Tianwen-1 (CNSA) Mars 2020 Perseverance rover – NASA rover to Mars Akatsuki – JAXA satellite orbiting Venus BepiColombo (ESA / JAXA) STEREO – NASA mission to observe the Sun Parker Solar Probe – NASA probe to Sun Solar Orbiter – ESA probe to Sun Hayabusa2♯ (JAXA) – sample return mission to asteroid Ryugu OSIRIS-APEX (NASA) – probe to asteroid Apophis Lucy – NASA probe to multiple Jupiter trojans Psyche (NASA) – probe to asteroid Psyche Juno – NASA satellite orbiting Jupiter Jupiter Icy Moons Explorer – ESA probe to Jupiter and its moons New Horizons – probe to Pluto Voyager 1 and Voyager 2 (NASA) – probes to outer Solar System and interstellar space Future of space exploration Lunar (the Moon) Future lunar missions Colonization of the Moon Lunar outpost (NASA) Sun Sundiver (space mission) Mercury Colonization of Mercury Venus Exploration of Venus Mars Colonization of Mars Human mission to Mars Mars to Stay Outer Solar System Colonization of the outer Solar System Colonization of Titan Beyond the Solar System Interstellar travel Nuclear rocket Fusion rocket Solar sail Einstein-Rosen bridge Alcubierre drive Intergalactic travel General space exploration concepts Space exploration scholars Leaders in space exploration Yuri Gagarin – first man in space Neil Armstrong and Buzz Aldrin – first men to walk on the Moon John Glenn – oldest man in orbit See also Outline of space science Outline of aerospace Timeline of Solar System exploration Scientific research on the International Space Station Lists List of spacecraft List of crewed spacecraft List of Solar System probes List of active Solar System probes List of lunar probes List of Mars landers List of Mars orbiters List of space telescopes List of proposed space observatories List of cargo spacecraft List of Falcon 9 first-stage boosters List of heaviest spacecraft List of spacecraft called Sputnik List of spacecraft powered by non-rechargeable batteries List of spacecraft with electric propulsion List of spaceplanes List of upper stages List of spacecraft deployed from the International Space Station Assembly of the International Space Station Space Shuttle crews List of Apollo astronauts List of Apollo missions List of Artemis missions List of artificial objects on extra-terrestrial surfaces List of astronauts by name List of astronauts by selection List of communication satellite companies List of communications satellite firsts List of Constellation missions List of Cosmos satellites List of crewed spacecraft List of cumulative spacewalk records List of Earth observation satellites List of human spaceflight programs List of human spaceflights List of human spaceflights to the International Space Station List of interplanetary voyages List of ISS spacewalks List of International Space Station expeditions List of International Space Station visitors List of landings 
on extraterrestrial bodies List of launch vehicles List of Mir expeditions List of Mir spacewalks List of NASA missions List of objects at Lagrangian points List of private spaceflight companies List of probes by operational status List of rockets Lists of rocket launches List of Ariane launches List of Atlas launches List of Black Brant launches List of Falcon 9 and Falcon Heavy launches List of Long March launches List of Proton launches List of R-7 launches List of Scout launches List of Space Launch System launches List of Thor and Delta launches List of Titan launches List of V-2 test launches List of Zenit launches List of Russian human spaceflight missions List of satellites in geosynchronous orbit List of Solar System probes List of Soviet human spaceflight missions List of space agencies List of Space Shuttle missions List of space travelers by name List of space travelers by nationality List of spacecraft and crews that visited Mir List of spacecraft manufacturers List of spaceflight records List of spaceports List of spacewalks and moonwalks List of the largest fixed satellite operators List of uncrewed spacecraft by program Uncrewed spaceflights to the International Space Station Lists of astronomical objects Lists of telescopes List of government space agencies Lists of astronauts Lists of space scientists References External links Space related news NASA's website on human space travel ESA: Building the ISS Unofficial Shuttle Launch Manifest ISS Assembly Animation List of All Spacecraft Ever Launched, accessed 02/10/2019 Space Missions and Space Craft, accessed 02/10/2019 26 Types of Spacecraft, accessed 02/10/2019 Space exploration Space exploration Space exploration Space exploration Outline Space exploration
Outline of space exploration
Astronomy
1,385
72,663,291
https://en.wikipedia.org/wiki/84%20Ursae%20Majoris
84 Ursae Majoris, also known as HD 120198, is a star about 300 light years from the Earth, in the constellation Ursa Major. It is a 5th magnitude star, making it faintly visible to the naked eye of an observer far from city lights. It is an Ap star with an 1,100 gauss magnetic field, and an α2 CVn variable star, varying in brightness from magnitude 5.65 to 5.70, over a period of 1.37996 days. 84 Ursae Majoris is located just 70 arcseconds from the star LDS 2914, but that star is believed to be a background star not physically associated with 84 Ursae Majoris. Gerhard Jackisch discovered that 84 Ursae Majoris is a variable star, with a period greater than one day, in 1972. It was given the variable star designation CR Ursae Majoris in 1974. In 1994 John Rice and William Wehlau used Doppler imaging to map the distribution of iron and chromium on the surface of 84 Ursae Majoris. They found that the distribution of those elements across the surface was similar, and the abundances of those elements varied by a factor of 15 across the surface. Chromium was found to be about 600 times more abundant than on the Sun in the regions of the 84 Ursae Majoris surface with the minimum chromium abundance. The size of 84 Ursae Majoris was measured in red light during 2015 and 2016, using the CHARA array. The limb darkened angular diameter was milliarcseconds. References Ursa Major 67231 120198 Durchmusterung objects Ursae Majoris, CR Ursae Majoris, 84 Alpha2 Canum Venaticorum variables B-type main-sequence stars
84 Ursae Majoris
Astronomy
378
34,159,624
https://en.wikipedia.org/wiki/Multiple%20Console%20Time%20Sharing%20System
The Multiple Console Time Sharing System (MCTS) was an operating system developed by General Motors Research Laboratories in the 1970s for the Control Data Corporation STAR-100 supercomputer. MCTS was built to support GM's computer-aided design (CAD) applications. MCTS was designed starting in 1968. It was written in a high-level systems programming language "Malus", a dialect of PL/I. A superset of Malus called Apple became the primary application language. MCTS was based on Multics. All access to data was through the virtual memory system. Only the system paging support module was concerned about the physical location of the data. See also GM-NAA I/O SHARE Operating System Timeline of operating systems References Further reading Discontinued operating systems Multics-like Proprietary operating systems Time-sharing operating systems Supercomputer operating systems
Multiple Console Time Sharing System
Technology
177
6,855,178
https://en.wikipedia.org/wiki/Arches%20Cluster
The Arches Cluster is the densest known star cluster in the Milky Way, about 100 light-years from its center in the constellation Sagittarius (The Archer), 25,000 light-years from Earth. Its discovery was reported by Nagata et al. in 1995, and independently by Cotera et al. in 1996. Due to extremely heavy optical extinction by dust in this region, the cluster is obscured in the visual bands, and is observed in the X-ray, infrared and radio bands. It contains approximately 135 young, very hot stars that are many times larger and more massive than the Sun, plus many thousands of less massive stars. The star cluster is estimated to be around two and a half million years old. Although larger and denser than the nearby Quintuplet Cluster, it appears to be slightly younger. Only stars earlier and more massive than O5 have evolved away from the main sequence while the Quintuplet Cluster includes a number of hot supergiants as well as a red supergiant and three luminous blue variables. The most prominent members of the Arches Cluster are hot emission line stars: thirteen Wolf–Rayet stars, all massive hydrogen-rich types; and eight class O hypergiants. One of these is an eclipsing binary with a Wolf–Rayet primary and a class O supergiant secondary. X-ray emission from the cluster suggests that many other members are also in close binary systems with two hot luminous members, but there is little evidence of the evolution of these stars being affected by binary mass exchange. The spectral classes and their properties merge smoothly from the main sequence to normal class O giants and supergiants, to class O hypergiants, to the presumed most evolved Wolf–Rayets. One star is intermediate between WN8–9h and O4–6 Ia+. There are no cooler evolved stars. Work by Donald Figer, an astronomer at the Rochester Institute of Technology suggests that 150 solar masses () is the upper limit of stellar mass in the current era of the universe. He used the Hubble Space Telescope to observe about a thousand stars in the Arches cluster and found no stars over that limit despite a statistical expectation that there should be several. However, later research demonstrated a very high sensitivity of the calculated star masses upon the extinction laws used for mass derivation, which can affect the upper mass limit by about 30% using different extinction laws (possibly from to about ). The limit of 150 solar masses was previously deduced by Carsten Weidner & Pavel Kroupa using observations of the cluster R136. See also List of most massive stars References External links The Arches Cluster — ESO Image Gallery Open clusters Sagittarius (constellation) Wolf–Rayet stars
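The statistical argument behind the 150-solar-mass limit can be made concrete with a short calculation. The sketch below is illustrative only: it assumes a single Salpeter-like power-law initial mass function and a made-up sample size, not Figer's actual data or analysis, and simply shows why a pure power law would predict "several" stars above the limit.

```python
# Illustrative sketch (not Figer's actual analysis): with a Salpeter-like IMF,
# dN/dm proportional to m**-2.35, estimate how many stars above 150 solar masses
# would be expected, given a (hypothetical) observed count in a lower mass range.
def expected_count_above(m_lo, m_hi, m_limit, n_observed, alpha=2.35):
    """Scale the observed count in [m_lo, m_hi] to the range above m_limit,
    assuming a single power-law IMF with slope alpha (all masses in solar masses)."""
    def integral(a, b):
        # integral of m**-alpha from a to b (valid for alpha != 1, a < b)
        return (a**(1 - alpha) - b**(1 - alpha)) / (alpha - 1)
    return n_observed * integral(m_limit, 1e4) / integral(m_lo, m_hi)

# Example with assumed numbers: if ~100 stars were counted between 20 and 150
# solar masses, a pure power law would predict roughly 7 stars above 150.
print(round(expected_count_above(20, 150, 150, 100), 1))
```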
Arches Cluster
Astronomy
560
14,052,067
https://en.wikipedia.org/wiki/Neuropeptide%20B/W%20receptor
The neuropeptide B/W receptors are members of the G-protein coupled receptor superfamily of integral membrane proteins which bind the neuropeptides B and W. These receptors are predominantly expressed in the CNS and have a number of functions including regulation of the secretion of cortisol. References External links G protein-coupled receptors
Neuropeptide B/W receptor
Chemistry
72
14,891,304
https://en.wikipedia.org/wiki/Abstract%20family%20of%20acceptors
An abstract family of acceptors (AFA) is a grouping of generalized acceptors. Informally, an acceptor is a device with a finite state control, a finite number of input symbols, and an internal store with a read and write function. Each acceptor has a start state and a set of accepting states. The device reads a sequence of symbols, transitioning from state to state for each input symbol. If the device ends in an accepting state, the device is said to accept the sequence of symbols. A family of acceptors is a set of acceptors with the same type of internal store. The study of AFA is part of AFL (abstract families of languages) theory. Formal definitions AFA Schema An AFA Schema is an ordered 4-tuple , where and are nonempty abstract sets. is the write function: (N.B. * is the Kleene star operation). is the read function, a mapping from into the finite subsets of , such that and is in if and only if . (N.B. is the empty word). For each in , there is an element in satisfying for all such that is in . For each u in I, there exists a finite set ⊆ , such that if ⊆ , is in , and , then is in . Abstract family of acceptors An abstract family of acceptors (AFA) is an ordered pair such that: is an ordered 6-tuple (, , , , , ), where (, , , ) is an AFA schema; and and are infinite abstract sets is the family of all acceptors = (, , , , ), where and are finite subsets of , and respectively, ⊆ , and is in ; and (called the transition function) is a mapping from into the finite subsets of such that the set | ≠ ø for some and is finite. For a given acceptor, let be the relation on defined by: For in , if there exists a and such that is in , is in and . Let denote the transitive closure of . Let be an AFA and = (, , , , ) be in . Define to be the set . For each subset of , let . Define to be the set . For each subset of , let . Informal discussion AFA Schema An AFA schema defines a store or memory with read and write function. The symbols in are called storage symbols and the symbols in are called instructions. The write function returns a new storage state given the current storage state and an instruction. The read function returns the current state of memory. Condition (3) insures the empty storage configuration is distinct from other configurations. Condition (4) requires there be an identity instruction that allows the state of memory to remain unchanged while the acceptor changes state or advances the input. Condition (5) assures that the set of storage symbols for any given acceptor is finite. Abstract family of acceptors An AFA is the set of all acceptors over a given pair of state and input alphabets which have the same storage mechanism defined by a given AFA schema. The relation defines one step in the operation of an acceptor. is the set of words accepted by acceptor by having the acceptor enter an accepting state. is the set of words accepted by acceptor by having the acceptor simultaneously enter an accepting state and having an empty storage. The abstract acceptors defined by AFA are generalizations of other types of acceptors (e.g. finite-state automata, pushdown automata, etc.). They have a finite state control like other automata, but their internal storage may vary widely from the stacks and tapes used in classical automata. Results from AFL theory The main result from AFL theory is that a family of languages is a full AFL if and only if for some AFA . Equally important is the result that is a full semi-AFL if and only if for some AFA . 
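Since the formal notation above is abstract, a concrete toy instance may help. The sketch below is not the formal AFA schema (whose symbols are elided in the text); it is a simplified acceptor with a finite state control and an internal store accessed only through read and write functions, instantiated as a pushdown-style acceptor for the language a^n b^n, and it distinguishes acceptance by final state with empty storage, analogous to the two acceptance modes discussed above.

```python
# A simplified illustration of the acceptor idea: finite state control plus an
# internal store that is accessed only through read/write functions.
class Acceptor:
    def __init__(self, start, accepting, transitions):
        self.start = start                # start state
        self.accepting = set(accepting)   # accepting states
        self.transitions = transitions    # (state, symbol, top_of_store) -> (state, instruction)

    @staticmethod
    def read(store):
        """Read function: expose only the top of the store."""
        return store[-1] if store else None

    @staticmethod
    def write(store, instruction):
        """Write function: apply 'push X', 'pop', or 'noop' to the store."""
        op, *arg = instruction.split()
        if op == "push":
            return store + arg
        if op == "pop":
            return store[:-1]
        return store

    def accepts(self, word):
        state, store = self.start, []
        for symbol in word:
            key = (state, symbol, self.read(store))
            if key not in self.transitions:
                return False
            state, instruction = self.transitions[key]
            store = self.write(store, instruction)
        return state in self.accepting and not store  # accept with empty storage

# a^n b^n acceptor: push an A for every 'a', pop one for every 'b'.
anbn = Acceptor(
    start="q0",
    accepting=["q0", "q1"],
    transitions={
        ("q0", "a", None): ("q0", "push A"),
        ("q0", "a", "A"):  ("q0", "push A"),
        ("q0", "b", "A"):  ("q1", "pop"),
        ("q1", "b", "A"):  ("q1", "pop"),
    },
)
print(anbn.accepts("aabb"), anbn.accepts("aab"))  # True False
```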
Origins Seymour Ginsburg of the University of Southern California and Sheila Greibach of Harvard University first presented their AFL theory paper at the IEEE Eighth Annual Symposium on Switching and Automata Theory in 1967. References Formal languages Applied mathematics
Abstract family of acceptors
Mathematics
855
8,527,028
https://en.wikipedia.org/wiki/Potassium%20phosphate
Potassium phosphate is a generic term for the salts of potassium and phosphate ions including: Monopotassium phosphate (KH2PO4) (Molar mass approx: 136 g/mol) Dipotassium phosphate (K2HPO4) (Molar mass approx: 174 g/mol) Tripotassium phosphate (K3PO4) (Molar mass approx: 212.27 g/mol) As food additives, potassium phosphates have the E number E340. References E-number additives Potassium compounds Phosphates
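The approximate molar masses quoted above follow from standard atomic weights; a minimal check, using rounded atomic-weight values (an assumption, not data from the article), is sketched below.

```python
# Quick check of the approximate molar masses quoted above, using rounded
# standard atomic weights: K = 39.10, H = 1.008, P = 30.97, O = 16.00 g/mol.
ATOMIC_WEIGHT = {"K": 39.10, "H": 1.008, "P": 30.97, "O": 16.00}

def molar_mass(formula):
    """formula given as a dict of element -> atom count."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

print(round(molar_mass({"K": 1, "H": 2, "P": 1, "O": 4}), 1))  # KH2PO4  ~136.1
print(round(molar_mass({"K": 2, "H": 1, "P": 1, "O": 4}), 1))  # K2HPO4  ~174.2
print(round(molar_mass({"K": 3, "P": 1, "O": 4}), 1))          # K3PO4   ~212.3
```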
Potassium phosphate
Chemistry
117
39,655,079
https://en.wikipedia.org/wiki/Bergamot%20essential%20oil
Bergamot essential oil is a cold-pressed essential oil produced by cells inside the rind of a bergamot orange fruit. It is a common flavouring and top note in perfumes. The scent of bergamot essential oil is similar to a sweet light orange peel oil with a floral note. Production The sfumatura or slow-folding process was the traditional technique for manually extracting the bergamot oil. By more modern methods, the oil is extracted mechanically with machines called peelers, which scrape the outside of the fruit under running water to get an emulsion channeled into centrifuges for separating the essence from the water. The rinds of 100 bergamot oranges yield about of bergamot oil. Uses Bergamot essential oil has been used in cosmetics, aromatherapy, and as a flavoring in food and beverages. Its citrus scent makes it a natural flavoring and deodorizing agent. The volatile oils of the bergamot orange are described as flavoring agents in the USP Food Chemicals Codex and are generally recognized as safe for human consumption by the Food and Drug Administration. For example, Earl Grey tea is a type of black tea that may contain bergamot essential oil as a flavoring agent. Historically, bergamot essential oil was an ingredient in Eau de Cologne, a perfume originally concocted by Johann Maria Farina at the beginning of the 18th century. The first record of bergamot oil used as a fragrance in perfume is from 1714, found in the Farina Archive in Cologne. Constituents A clear liquid (sometimes there is a deposit consisting of waxes) in color from green to greenish yellow, bergamot essential oil consists of a volatile fraction (average 95%) and a non-volatile fraction (5% or residual). Chemically, it is a complex mixture of many classes of organic substances, particularly in the volatile fraction, including terpenes, esters, alcohols and aldehydes, and for the non-volatile fraction, oxygenated heterocyclic compounds as coumarins and furanocoumarins. Volatile fraction The main compounds in the oil are limonene, linalyl acetate, linalool, γ-terpinene and β-pinene, and in smaller quantities geranial and β-bisabolene. Non-volatile fraction The main non-volatile compounds are coumarins (citropten, 5-Geranyloxy-7-methoxycoumarin) and furanocoumarins (bergapten, bergamottin). Adulteration The bergamot essential oil is particularly subject to adulteration being an essential oil produced in relatively small quantities. Generally adulteration is to "cut" the oil, i.e. adding distilled essences of poor quality and low cost, for example of bitter orange and bergamot mint and/or mixtures of terpenes, natural or synthetic, or "reconstruct" the essence from synthetic chemicals, coloring it with chlorophyll. Worldwide, each year, around three thousand tonnes of declared essence of bergamot are marketed, while the genuine essence of bergamot produced annually amounts to no more than one hundred tons. Natural source analysis based on the Carbon-14 method can identify adulterated essences by detecting synthetic chemicals manufactured from petroleum that are used to mimic the chemical profile of bergamot oil and other essential oils. Gas chromatography with columns having a chiral stationary phase allows analyzing mixtures of enantiomers. The analysis of the enantiomeric distribution of various compounds, such as linalyl acetate and linalool, allows the characterization of the bergamot oil according to the manufacturing process and allows for the detection of possible adulteration. 
The combined use of isotope ratio mass spectrometry and SNIF-NMR (Site-Specific Natural Isotope Fractionation-Nuclear Magnetic Resonance) makes it possible to detect adulteration that would otherwise be undetectable, and even to identify the geographical origin of the essential oil. The GC-C-IRMS (Gas Chromatography-Combustion – Isotope Ratio Mass Spectrometer) technique, the most recently adopted, yields similar results. Reference analytical values Analytical reference values for evaluating the authenticity of bergamot essential oil are set by the Experimental Station for the Industry of Essential Oils and Citrus Products in Reggio Calabria, Italy. Toxicity The phototoxic effects of bergamot essential oil have been known for more than a century. In 1925, Rosenthal coined the term "Berloque dermatitis" (from the French word "breloque" meaning trinket or charm) to describe the pendant-like streaks of pigmentation observed on the neck, face, and arms of patients. He was unaware that, in 1916, Freund had correctly observed that these pigmentation effects were due to sun exposure after the use of Eau de Cologne, a perfume infused with bergamot oil. Use of bergamot aromatherapy oil, followed by exposure to ultraviolet light (either sunlight or a tanning bed), has been reported to cause phytophotodermatitis, a serious skin inflammation indicated by painful erythema and bullae on exposed areas of the skin. In one case, six drops of bergamot aromatherapy oil in a bath followed by 20–30 minutes of exposure to ultraviolet light from a tanning bed caused a severe burn-like reaction. Bergamot essential oil contains a significant amount of bergapten, a phototoxic substance that gets its name from the bergamot orange. Bergapten, a linear furanocoumarin derived from psoralen, is often found in plants associated with phytophotodermatitis. Note that bergamot essential oil has a higher concentration of bergapten (3000–3600 mg/kg) than any other Citrus-based essential oil. When bergamot essential oil is applied directly to the skin via a patch test, followed by exposure to ultraviolet light, a concentration-dependent phototoxic effect is observed. However, if the oil is twice rectified (and therefore bergapten-free), no phototoxic response is observed. The International Fragrance Association (IFRA) restricts the use of bergamot essential oil due to its phototoxic effects. Specifically, IFRA recommends that leave-on skin products be limited to 0.4% bergamot oil, which is more restrictive than any other Citrus-based essential oil. Although generally recognized as safe for human consumption, bergamot essential oil contains a significant amount of bergamottin, one of two furanocoumarins believed to be responsible for a number of grapefruit–drug interactions. There are no direct reports of Earl Grey tea causing drug interactions. In one case study, a patient who consumed four liters of Earl Grey tea per day suffered paresthesias, fasciculations and muscle cramps. The patient did not show these reactions when drinking the same amount of plain black tea daily; drinking no tea at all; or drinking only one liter of Earl Grey tea daily. The presumed culprit is bergapten, a potassium channel blocker found in bergamot oil. Notes Bibliography Alp Kunkar and Ennio Kunkar, "Bergamotto e le sue essenze", Edizioni A Z A. Kunkar, C. Kunkar: Supercritical CO2 extraction of bergamot oil from peel; Int. Cong. Medicinal plants and essential oils- Anadolu üniversıtesi-Eskişehir Turkey External links Essential oils
Bergamot essential oil
Chemistry
1,592
24,154,306
https://en.wikipedia.org/wiki/C11H14N2
{{DISPLAYTITLE:C11H14N2}} The molecular formula C11H14N2 (molar mass: 174.24 g/mol, exact mass: 174.115698) may refer to: 6-(2-Aminopropyl)indole Gramine 5-IT α-Methylisotryptamine 5-Methyltryptamine α-Methyltryptamine N-Methyltryptamine
C11H14N2
Chemistry
94
42,116,377
https://en.wikipedia.org/wiki/NGC%201807
NGC 1807 is an asterism at the border of the constellations Orion and Taurus near the open cluster NGC 1817. NGC 1807 has an apparent size of 5.4' and an apparent magnitude of 7.0. References External links 1807 Orion (constellation) Taurus (constellation)
NGC 1807
Astronomy
60
62,534,177
https://en.wikipedia.org/wiki/Local%20structure
Local structure is a term in nuclear spectroscopy that refers to the structure of the nearest neighbours around an atom in crystals and molecules. In crystals, for example, the atoms order in a regular fashion over long ranges, forming highly ordered crystals that can grow to gigantic size (as in the Naica Mine). In reality, however, crystals are never perfect and have impurities or defects, meaning that a foreign atom resides on a lattice site or in between lattice sites (interstitials). These small defects and impurities cannot be seen by methods such as X-ray diffraction or neutron diffraction, because such methods by their nature average over a large number of atoms and are thus insensitive to effects in the local structure. Methods in nuclear spectroscopy use specific nuclei as probes. The nucleus of an atom is about 10,000 to 150,000 times smaller than the atom itself. It experiences the electric fields created by the atom's electrons that surround the nucleus. In addition, the electric fields created by neighbouring atoms also influence the fields that the nucleus experiences. The interactions between the nucleus and these fields are called hyperfine interactions, and they influence the properties of the nucleus. The nucleus is therefore very sensitive to small changes in its hyperfine structure, which can be measured by methods of nuclear spectroscopy such as nuclear magnetic resonance, Mössbauer spectroscopy, and perturbed angular correlation. With the same methods, the local magnetic fields in a crystal structure can also be probed, providing a magnetic local structure. This is of great importance for the understanding of defects in magnetic materials, which have a wide range of applications, such as modern magnetic materials and the giant magnetoresistance effect used in the read heads of hard drives. Research on the local structure of materials has become an important tool for understanding properties, especially in functional materials such as those used in electronics, chips, batteries, semiconductors, or solar cells. Many of those materials are defect materials and their specific properties are controlled by defects. References Electrostatics Atomic physics Quantum chemistry Electric and magnetic fields in matter
Local structure
Physics,Chemistry,Materials_science,Engineering
424
341,782
https://en.wikipedia.org/wiki/Scarsdale%20diet
The Scarsdale diet, a high-protein low-carbohydrate fad diet designed for weight loss, created in the 1970s by Herman Tarnower and named for the town in New York where he practiced cardiology, is described in the book The Complete Scarsdale Medical Diet Plus Dr. Tarnower's Lifetime Keep-Slim Program. Tarnower wrote the book together with self-help author Samm Sinclair Baker. Overview The diet is similar to the Atkins Diet and Stillman diet in calling for a high protein low-carbohydrate diet, but also emphasizes the importance of fruits and vegetables. The diet restricts certain foods but allows an unrestricted amount of animal protein, especially eggs, fish, lean meats and poultry. To eat on Sundays, the diet recommends "plenty of steak" with tomatoes, celery or brussels sprouts. The Scarsdale diet is low-calorie, restricted to 1,000 calories per day and lasts between seven and fourteen days. The book was originally published in 1978 and received an unexpected boost in popular sales when its author, Herman Tarnower, was murdered in 1980 by his jilted lover Jean Harris. During her trial, Harris' lawyer argued that she had been the book's "primary author". Health risks Medical experts have listed the Scarsdale diet as an example of a fad diet, as it carries potential health risks and does not instill the kind of healthy eating habits required for sustainable weight loss. It is unbalanced because of the high amount of meat consumed. The diet's high fat ratio may increase the risk of heart disease. People following the diet can lose much weight at first, but this loss is generally not sustained any better than with normal calorie restriction. Nutritionist Elaine B. Feldman has commented that high-protein low-carbohydrate diets such as the Atkins and Scarsdale diets are nutritionally deficient, produce diuresis and are "clearly unphysiologic and may be hazardous". The Scarsdale diet was criticized by Henry Buchwald and colleagues for "serious nutritional deficiencies". Negative effects of the diet include constipation, nausea, weakness and bad breath due to ketosis. The diet has also been criticized for being deficient in vitamin A and riboflavin. See also List of fad diets References Fad diets High-protein diets Low-carbohydrate diets
Scarsdale diet
Chemistry
509
48,962,287
https://en.wikipedia.org/wiki/G%C3%A4u
In the south German language (of the Alemannic-speaking area, or in Switzerland), a gäu landscape (gäulandschaft) refers to an area of open, level countryside. These regions typically have fertile soils resulting from depositions of loess (an exception is the Arme Gäue ["Poor Gäus"] of the Baden-Württemberg Gäu). The intensive use of the Gäu regions for crops has displaced the originally wooded countryside (→climax vegetation – in contrast with the steppe heath theory and disputed megaherbivore hypothesis). The North German equivalent of such landscapes is börde. See also Gau (territory) – also gives the etymology and language history of Gäu Gäu – regions with the name Natural regions referred to as Gäu plateaus: Neckar and Tauber Gäu Plateaus Gäu Plateaus in the Main Triangle Werra Gäu Plateaus Gäuboden Altsiedelland | Altsiedel landscape Soil science Toponymy Ecosystems Rural geography Human habitats Agriculture in Germany Agriculture in Switzerland
Gäu
Biology
217
16,818,862
https://en.wikipedia.org/wiki/USAF%20Stability%20and%20Control%20DATCOM
The United States Air Force Stability and Control DATCOM is a collection, correlation, codification, and recording of best knowledge, opinion, and judgment in the area of aerodynamic stability and control prediction methods. It presents substantiated techniques for use (1) early in the design or concept study phase, (2) to evaluate changes resulting from proposed engineering fixes, and (3) as a training or cross-training aid. It bridges the gap between theory and practice by including a combination of pertinent discussion and proven practical methods. For any given configuration and flight condition, a complete set of stability and control derivatives can be determined without resort to outside information. A spectrum of methods is presented, ranging from very simple and easily applied techniques to quite accurate and thorough procedures. Comparatively simple methods are presented in complete form, while the more complex methods are often handled by reference to separate treatments. Tables which compare calculated results with test data provide indications of method accuracy. Extensive references to related material are also included. The report was compiled from September 1975 to September 1977 by the McDonnell Douglas Corporation in conjunction with the engineers at the Flight Dynamics Laboratory at Wright-Patterson Air Force Base. Methodology Fundamentally, the purpose of the DATCOM (Data Compendium) is to provide a systematic summary of methods for estimating basic stability and control derivatives. The DATCOM is organized in such a way that it is self-sufficient. For any given flight condition and configuration the complete set of derivatives can be determined without resort to outside information. The book is intended to be used for preliminary design purposes before the acquisition of test data. The use of reliable test data in lieu of the DATCOM is always recommended. However, there are many cases where the DATCOM can be used to advantage in conjunction with test data. For instance, if the lift-curve slope of a wing-body combination is desired, the DATCOM recommends that the lift-curve slopes of the isolated wing and body, respectively, be estimated by methods presented and that appropriate wing-body interference factors (also presented) be applied. If wing-alone test data are available, it is obvious that these test data should be substituted in place of the estimated wing-alone characteristics in determining the lift-curve slope of the combination. Also, if test data are available on a configuration similar to a given configuration, the characteristics of the similar configuration can be corrected to those for the given configuration by judiciously using the DATCOM material. Sections The DATCOM Manual is divided into 9 sections: Guide to DATCOM and Methods Summary General Information Effects of External Stores Characteristics at Angle of Attack Characteristics in Sideslip Characteristics of High-Lift and Control Devices Dynamic Derivatives Mass and Inertia Characteristics of VTOL-STOL Aircraft Implementation Many textbooks used in universities implement the DATCOM method of stability and control. Shortly before compilation of the DATCOM was completed, a computerized version called Digital DATCOM was created. The USAF S&C Digital DATCOM implements the DATCOM methods in an easy-to-use manner. References Hoak, D. E., et al., "The USAF Stability and Control DATCOM," Air Force Wright Aeronautical Laboratories, TR-83-3048, Oct. 1960 (Revised 1978).
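The component build-up idea in the Methodology section can be illustrated with a small sketch. The function name, the interference-factor names, and the simple summation below are placeholders of my own, not the actual DATCOM correlations; the point is only to show how isolated-component estimates are combined with interference factors, and how test data can replace an estimated component when available.

```python
# Illustrative only: combine isolated wing and body lift-curve slopes with
# wing-body interference factors, preferring test data for a component if given.
# The form of the combination is a placeholder, not the DATCOM correlation.
def wing_body_lift_curve_slope(cl_alpha_wing, cl_alpha_body,
                               k_wing_due_to_body=1.0, k_body_due_to_wing=1.0,
                               wing_test_data=None):
    """Per-radian lift-curve slope of a wing-body combination (hypothetical form)."""
    if wing_test_data is not None:
        cl_alpha_wing = wing_test_data   # reliable test data supersede the estimate
    return (k_wing_due_to_body * cl_alpha_wing
            + k_body_due_to_wing * cl_alpha_body)

# Example with made-up numbers (per radian):
print(wing_body_lift_curve_slope(4.8, 0.3,
                                 k_wing_due_to_body=1.05,
                                 k_body_due_to_wing=0.95))
```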
Aerodynamics Wright-Patterson Air Force Base
USAF Stability and Control DATCOM
Chemistry,Engineering
664
18,021,843
https://en.wikipedia.org/wiki/Neera
Neera, also called palm nectar, is a sap extracted from the inflorescence of various species of toddy palms and used as a drink. Neera extraction is generally performed before sunrise. It is sweet, translucent in colour. It is susceptible to natural fermentation at ambient temperature within a few hours of extraction, and is also known as palm wine. Once fermented, Neera becomes toddy. Neera is widely consumed in India, Sri Lanka, Africa, Malaysia, Indonesia, Thailand, and Myanmar. Neera is not the juice made from palm fruit. Neera requires neither mechanical crushing, as in the case of sugarcane, nor leaching, like beet-root; it is obtained by slicing the spathes of the coconut, sago, and Palmyra (Borassus flabellifer L.) palm, and scraping the tendermost part, just below the crown. In Goa though, the word surr is used for toddy of the coconut palm, and nirau for the sweet juice extracted last from the cashew apple. The words are not interchangeable. Composition Neera is rich in carbohydrates, mildly alcoholic, mostly sucrose, and has a nearly neutral pH. It has a specific gravity ranging from 1.058 to 1.077. The chemical percentage composition of neera varies, depending on such factors as place, type of palm, mode and season of collection. Typical values are: Fermentation Neera is highly susceptible to natural fermentation at ambient temperature within a few hours of extraction from the palm source. Once fermented, it transforms into toddy with 4% alcohol. Using several technologies developed by various research institutes, neera is processed and preserved in its natural form to retain the vitamins, sugar, and other nutrients beneficial for health. To extend the shelf life of neera, heat preservation techniques such as pasteurization are used. A team of experts from SCMS Institute of BioSciences and Biotechnology, Cochin, India have successfully developed filtration and preservation techniques for neera and collaborated with Coconut Development Board to commercialize the drink among the public. A special filtration technique to enhance the shelf life of neera was developed by the National Chemical Laboratory in Pune, India. Technologies for the preservation and processing of neera were also developed by the Central Food Technological Research Institute in Mysore, India. By-products Palmgur (jaggery), palm sugar, coconut nectar and neera syrup are produced by heating fresh neera and concentrating it. Caramelization turns the heated neera from milky white to transparent brown. West Bengal and Orissa are the Indian states where most of the neera is converted into palmgur. Palmgur is also produced from neera in the states of Gujarat and Maharashtra. In India In Gujarat, Neera-producing societies have formed the Federation of Gujarat Neera And Tadpadarth Gramodyog Sangh. This organisation has set up a filtration plant that processes Neera to increase its shelf life. The Gujarat Neera and Tadpadarth Gramodyog Sangh, established in 1991, aims to improve living conditions of the workers engaged in the production of Neera. It is also trying to increase the production of neera in the state by planting more palm trees, and investing in the training of tapper-workers. In Andhra Pradesh, unlike other states, there is no state government sponsorship/support to promote neera or its by-products at retail outlets. Only the Khadi and Village Industries Commission (KVIC) promotes neera as a health drink. 
In Gujarat and Maharashtra, neera is made available through various outlets known as "Neera Vikri Kendra" (Neera sale centre). The Neera Palm Product Cooperative Society had set up small green kiosks that sold neera in major railway stations, but they are now only to be found alongside highways and expressways outside the Mumbai city area. In these two states, neera is extracted from date palm and palmyra trees. In the state of Karnataka, where there are abundant coconut trees, neera is tapped from coconut trees. In Karnataka, neera is extracted and sold by the Ediga and Billava castes. The state government constituted the Neera Board, comprising farmers, provincial government officials, and neera training institutes, to inspect and control the quality of neera and its products, give approvals to labels, and develop various schemes for selling in the international market. The Central Food Technological Research Institute developed a technology to preserve neera for two months, and the government plans to promote neera as an energy drink with medicinal value, packaged in sachets and bottles. In Kerala, the state government, as part of Kerala Vision 2010, set up three units to manufacture neera. In Odisha, the state government established a cooperative organisation known as the Odisha State Palmgur Cooperative Federation to provide technological support in the processing and production of neera and its associated by-products such as jaggery and candy. In Tamil Nadu, Neera is also called "Padaneer" in Tamil. KVIC and Tamil Nadu Palm Products Development Board sell refrigerated Padaneer at their outlets. Neera syrup is used as a drink in Ayurveda. See also Palm wine Coconut sugar References Fermented drinks Indian alcoholic drinks Non-alcoholic drinks
Neera
Biology
1,102
21,733,898
https://en.wikipedia.org/wiki/Journal%20of%20Chemical%20%26%20Engineering%20Data
The Journal of Chemical & Engineering Data is a peer-reviewed scientific journal, published since 1956 by the American Chemical Society. JCED is currently indexed in: Chemical Abstracts Service (CAS), SCOPUS, EBSCOhost, ProQuest, British Library, PubMed, Ovid, Web of Science, and SwetsWise. The current Editor is J. Ilja Siepmann. According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.6. References External links American Chemical Society academic journals Academic journals established in 1956 Monthly journals Chemical engineering journals English-language journals
Journal of Chemical & Engineering Data
Chemistry,Engineering
124
330,604
https://en.wikipedia.org/wiki/Monoidal%20category
In mathematics, a monoidal category (or tensor category) is a category equipped with a bifunctor that is associative up to a natural isomorphism, and an object I that is both a left and right identity for ⊗, again up to a natural isomorphism. The associated natural isomorphisms are subject to certain coherence conditions, which ensure that all the relevant diagrams commute. The ordinary tensor product makes vector spaces, abelian groups, R-modules, or R-algebras into monoidal categories. Monoidal categories can be seen as a generalization of these and other examples. Every (small) monoidal category may also be viewed as a "categorification" of an underlying monoid, namely the monoid whose elements are the isomorphism classes of the category's objects and whose binary operation is given by the category's tensor product. A rather different application, for which monoidal categories can be considered an abstraction, is a system of data types closed under a type constructor that takes two types and builds an aggregate type. The types serve as the objects, and ⊗ is the aggregate constructor. The associativity up to isomorphism is then a way of expressing that different ways of aggregating the same data—such as and —store the same information even though the aggregate values need not be the same. The aggregate type may be analogous to the operation of addition (type sum) or of multiplication (type product). For type product, the identity object is the unit , so there is only one inhabitant of the type, and that is why a product with it is always isomorphic to the other operand. For type sum, the identity object is the void type, which stores no information, and it is impossible to address an inhabitant. The concept of monoidal category does not presume that values of such aggregate types can be taken apart; on the contrary, it provides a framework that unifies classical and quantum information theory. In category theory, monoidal categories can be used to define the concept of a monoid object and an associated action on the objects of the category. They are also used in the definition of an enriched category. Monoidal categories have numerous applications outside of category theory proper. They are used to define models for the multiplicative fragment of intuitionistic linear logic. They also form the mathematical foundation for the topological order in condensed matter physics. Braided monoidal categories have applications in quantum information, quantum field theory, and string theory. Formal definition A monoidal category is a category equipped with a monoidal structure. A monoidal structure consists of the following: a bifunctor called the monoidal product, or tensor product, an object called the monoidal unit, unit object, or identity object, three natural isomorphisms subject to certain coherence conditions expressing the fact that the tensor operation: is associative: there is a natural (in each of three arguments , , ) isomorphism , called associator, with components , has as left and right identity: there are two natural isomorphisms and , respectively called left and right unitor, with components and . Note that a good way to remember how and act is by alliteration; Lambda, , cancels the identity on the left, while Rho, , cancels the identity on the right. The coherence conditions for these natural transformations are: for all , , and in , the pentagon diagram commutes; for all and in , the triangle diagram commutes. 
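For reference, the structure named in the definition above can be written out explicitly in the standard notation of the category-theory literature (the symbols themselves are elided in the text; the formulas below are the usual ones, not a quotation of the original article).

```latex
% Monoidal product and unit:
\[
  \otimes : \mathbf{C} \times \mathbf{C} \to \mathbf{C}, \qquad I \in \mathbf{C}.
\]
% Associator and left/right unitors (natural isomorphisms with components):
\[
  \alpha_{A,B,C} : (A \otimes B) \otimes C \;\cong\; A \otimes (B \otimes C), \qquad
  \lambda_A : I \otimes A \;\cong\; A, \qquad
  \rho_A : A \otimes I \;\cong\; A .
\]
% Pentagon coherence condition, for all A, B, C, D:
\[
  \alpha_{A,B,C \otimes D} \circ \alpha_{A \otimes B,C,D}
  = (\mathrm{id}_A \otimes \alpha_{B,C,D}) \circ \alpha_{A,B \otimes C,D}
    \circ (\alpha_{A,B,C} \otimes \mathrm{id}_D).
\]
% Triangle coherence condition, for all A, B:
\[
  (\mathrm{id}_A \otimes \lambda_B) \circ \alpha_{A,I,B} = \rho_A \otimes \mathrm{id}_B .
\]
```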
A strict monoidal category is one for which the natural isomorphisms α, λ and ρ are identities. Every monoidal category is monoidally equivalent to a strict monoidal category. Examples Any category with finite products can be regarded as monoidal with the product as the monoidal product and the terminal object as the unit. Such a category is sometimes called a cartesian monoidal category. For example: Set, the category of sets with the Cartesian product, any particular one-element set serving as the unit. Cat, the category of small categories with the product category, where the category with one object and only its identity map is the unit. Dually, any category with finite coproducts is monoidal with the coproduct as the monoidal product and the initial object as the unit. Such a monoidal category is called cocartesian monoidal R-Mod, the category of modules over a commutative ring R, is a monoidal category with the tensor product of modules ⊗R serving as the monoidal product and the ring R (thought of as a module over itself) serving as the unit. As special cases one has: K-Vect, the category of vector spaces over a field K, with the one-dimensional vector space K serving as the unit. Ab, the category of abelian groups, with the group of integers Z serving as the unit. For any commutative ring R, the category of R-algebras is monoidal with the tensor product of algebras as the product and R as the unit. The category of pointed spaces (restricted to compactly generated spaces for example) is monoidal with the smash product serving as the product and the pointed 0-sphere (a two-point discrete space) serving as the unit. The category of all endofunctors on a category C is a strict monoidal category with the composition of functors as the product and the identity functor as the unit. Just like for any category E, the full subcategory spanned by any given object is a monoid, it is the case that for any 2-category E, and any object C in Ob(E), the full 2-subcategory of E spanned by {C} is a monoidal category. In the case E = Cat, we get the endofunctors example above. Bounded-above meet semilattices are strict symmetric monoidal categories: the product is meet and the identity is the top element. Any ordinary monoid is a small monoidal category with object set , only identities for morphisms, as tensor product and as its identity object. Conversely, the set of isomorphism classes (if such a thing makes sense) of a monoidal category is a monoid w.r.t. the tensor product. Any commutative monoid can be realized as a monoidal category with a single object. Recall that a category with a single object is the same thing as an ordinary monoid. By an Eckmann-Hilton argument, adding another monoidal product on requires the product to be commutative. Properties and associated notions It follows from the three defining coherence conditions that a large class of diagrams (i.e. diagrams whose morphisms are built using , , , identities and tensor product) commute: this is Mac Lane's "coherence theorem". It is sometimes inaccurately stated that all such diagrams commute. There is a general notion of monoid object in a monoidal category, which generalizes the ordinary notion of monoid from abstract algebra. Ordinary monoids are precisely the monoid objects in the cartesian monoidal category Set. Further, any (small) strict monoidal category can be seen as a monoid object in the category of categories Cat (equipped with the monoidal structure induced by the cartesian product). 
Monoidal functors are the functors between monoidal categories that preserve the tensor product and monoidal natural transformations are the natural transformations, between those functors, which are "compatible" with the tensor product. Every monoidal category can be seen as the category B(∗, ∗) of a bicategory B with only one object, denoted ∗. The concept of a category C enriched in a monoidal category M replaces the notion of a set of morphisms between pairs of objects in C with the notion of an M-object of morphisms between every two objects in C. Free strict monoidal category For every category C, the free strict monoidal category Σ(C) can be constructed as follows: its objects are lists (finite sequences) A1, ..., An of objects of C; there are arrows between two objects A1, ..., Am and B1, ..., Bn only if m = n, and then the arrows are lists (finite sequences) of arrows f1: A1 → B1, ..., fn: An → Bn of C; the tensor product of two objects A1, ..., An and B1, ..., Bm is the concatenation A1, ..., An, B1, ..., Bm of the two lists, and, similarly, the tensor product of two morphisms is given by the concatenation of lists. The identity object is the empty list. This operation Σ mapping category C to Σ(C) can be extended to a strict 2-monad on Cat. Specializations If, in a monoidal category, and are naturally isomorphic in a manner compatible with the coherence conditions, we speak of a braided monoidal category. If, moreover, this natural isomorphism is its own inverse, we have a symmetric monoidal category. A closed monoidal category is a monoidal category where the functor has a right adjoint, which is called the "internal Hom-functor" . Examples include cartesian closed categories such as Set, the category of sets, and compact closed categories such as FdVect, the category of finite-dimensional vector spaces. Autonomous categories (or compact closed categories or rigid categories) are monoidal categories in which duals with nice properties exist; they abstract the idea of FdVect. Dagger symmetric monoidal categories, equipped with an extra dagger functor, abstracting the idea of FdHilb, finite-dimensional Hilbert spaces. These include the dagger compact categories. Tannakian categories are monoidal categories enriched over a field, which are very similar to representation categories of linear algebraic groups. Preordered monoids A preordered monoid is a monoidal category in which for every two objects , there exists at most one morphism in C. In the context of preorders, a morphism is sometimes notated . The reflexivity and transitivity properties of an order, defined in the traditional sense, are incorporated into the categorical structure by the identity morphism and the composition formula in C, respectively. If and , then the objects are isomorphic which is notated . Introducing a monoidal structure to the preorder C involves constructing an object , called the monoidal unit, and a functor , denoted by "", called the monoidal multiplication. and must be unital and associative, up to isomorphism, meaning: and . As · is a functor, if and then . The other coherence conditions of monoidal categories are fulfilled through the preorder structure as every diagram commutes in a preorder. The natural numbers are an example of a monoidal preorder: having both a monoid structure (using + and 0) and a preorder structure (using ≤) forms a monoidal preorder as and implies . The free monoid on some generating set produces a monoidal preorder, producing the semi-Thue system. 
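The free strict monoidal category construction above is concrete enough to sketch in code. The snippet below is a minimal illustration, assuming objects of C are represented as opaque values and morphisms of C as plain Python functions; objects of Σ(C) are tuples, the tensor product is concatenation, the unit is the empty tuple, and morphisms are lists of component arrows composed componentwise.

```python
# Minimal sketch of the free strict monoidal category Sigma(C) on a category C.
UNIT = ()  # the empty list is the identity object

def tensor_objects(xs, ys):
    """Monoidal product on objects: concatenation of lists."""
    return tuple(xs) + tuple(ys)

def tensor_morphisms(fs, gs):
    """Monoidal product on morphisms: concatenation of the lists of arrows."""
    return list(fs) + list(gs)

def compose(fs, gs):
    """Composition g after f, defined componentwise; lengths must match."""
    assert len(fs) == len(gs)
    return [lambda x, f=f, g=g: g(f(x)) for f, g in zip(fs, gs)]

# Strictness in action: tensoring with the unit and re-associating are equalities,
# not merely isomorphisms.
A, B, C = ("A",), ("B",), ("C",)
assert tensor_objects(A, UNIT) == A
assert tensor_objects(tensor_objects(A, B), C) == tensor_objects(A, tensor_objects(B, C))
```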
See also Skeleton (category theory) Spherical category Monoidal category action References External links
Monoidal category
Mathematics
2,355
5,292,415
https://en.wikipedia.org/wiki/Lead%28II%29%20fluoride
Lead(II) fluoride is the inorganic compound with the formula PbF2. It is a white solid. The compound is polymorphic: at ambient temperatures it exists in the orthorhombic (PbCl2-type) form, while at high temperatures it is cubic (fluorite type). Preparation Lead(II) fluoride can be prepared by treating lead(II) hydroxide or lead(II) carbonate with hydrofluoric acid: Pb(OH)2 + 2 HF → PbF2 + 2 H2O Alternatively, it is precipitated by adding hydrofluoric acid to a lead(II) salt solution, or by adding a fluoride salt to a lead salt, such as potassium fluoride to a lead(II) nitrate solution, 2 KF + Pb(NO3)2 → PbF2 + 2 KNO3 or sodium fluoride to a lead(II) acetate solution. 2 NaF + Pb(CH3COO)2 → PbF2 + 2 NaCH3COO It appears as the very rare mineral fluorocronite. Uses Lead(II) fluoride is used in low melting glasses, in glass coatings to reflect infrared rays, in phosphors for television-tube screens, and as a catalyst for the manufacture of picoline. The Muon g−2 experiment uses scintillators in conjunction with silicon photomultipliers. It also serves as an oxygen scavenger in high-temperature fluorine chemistry, as plumbous oxide is relatively volatile. References Fluorides Lead(II) compounds Metal halides Phosphors and scintillators Reagents for organic chemistry Glass compositions Fluorite crystal structure
Lead(II) fluoride
Chemistry
371
21,655
https://en.wikipedia.org/wiki/Nitric%20acid
Nitric acid is an inorganic compound with the formula . It is a highly corrosive mineral acid. The compound is colorless, but samples tend to acquire a yellow cast over time due to decomposition into oxides of nitrogen. Most commercially available nitric acid has a concentration of 68% in water. When the solution contains more than 86% , it is referred to as fuming nitric acid. Depending on the amount of nitrogen dioxide present, fuming nitric acid is further characterized as red fuming nitric acid at concentrations above 86%, or white fuming nitric acid at concentrations above 95%. Nitric acid is the primary reagent used for nitration – the addition of a nitro group, typically to an organic molecule. While some resulting nitro compounds are shock- and thermally-sensitive explosives, a few are stable enough to be used in munitions and demolition, while others are still more stable and used as synthetic dyes and medicines (e.g. metronidazole). Nitric acid is also commonly used as a strong oxidizing agent. History Medieval alchemy The discovery of mineral acids such as nitric acid is generally believed to go back to 13th-century European alchemy. The conventional view is that nitric acid was first described in pseudo-Geber's De inventione veritatis ("On the Discovery of Truth", after ). However, according to Eric John Holmyard and Ahmad Y. al-Hassan, the nitric acid also occurs in various earlier Arabic works such as the ("Chest of Wisdom") attributed to Jabir ibn Hayyan (8th century) or the attributed to the Fatimid caliph al-Hakim bi-Amr Allah (985–1021). The recipe in the attributed to Jabir has been translated as follows:Take five parts of pure flowers of nitre, three parts of Cyprus vitriol and two parts of Yemen alum. Powder them well, separately, until they are like dust and then place them in a flask. Plug the latter with a palm fibre and attach a glass receiver to it. Then invert the apparatus and heat the upper portion (i.e. the flask containing the mixture) with a gentle fire. There will flow down by reason of the heat an oil like cow's butter. Nitric acid is also found in post-1300 works falsely attributed to Albert the Great and Ramon Llull (both 13th century). These works describe the distillation of a mixture containing niter and green vitriol, which they call "eau forte" (aqua fortis). Modern era In the 17th century, Johann Rudolf Glauber devised a process to obtain nitric acid by distilling potassium nitrate with sulfuric acid. In 1776 Antoine Lavoisier cited Joseph Priestley's work to point out that it can be converted from nitric oxide (which he calls "nitrous air"), "combined with an approximately equal volume of the purest part of common air, and with a considerable quantity of water." In 1785 Henry Cavendish determined its precise composition and showed that it could be synthesized by passing a stream of electric sparks through moist air. In 1806, Humphry Davy reported the results of extensive distilled water electrolysis experiments concluding that nitric acid was produced at the anode from dissolved atmospheric nitrogen gas. He used a high voltage battery and non-reactive electrodes and vessels such as gold electrode cones that doubled as vessels bridged by damp asbestos. The industrial production of nitric acid from atmospheric air began in 1905 with the Birkeland–Eyde process, also known as the arc process. This process is based upon the oxidation of atmospheric nitrogen by atmospheric oxygen to nitric oxide with a very high temperature electric arc. 
Yields of up to approximately 4–5% nitric oxide were obtained at 3000 °C, and less at lower temperatures. The nitric oxide was cooled and oxidized by the remaining atmospheric oxygen to nitrogen dioxide, and this was subsequently absorbed in water in a series of packed column or plate column absorption towers to produce dilute nitric acid. The first towers bubbled the nitrogen dioxide through water and non-reactive quartz fragments. About 20% of the produced oxides of nitrogen remained unreacted so the final towers contained an alkali solution to neutralize the rest. The process was very energy intensive and was rapidly displaced by the Ostwald process once cheap ammonia became available. Another early production method was invented by French engineer Albert Nodon around 1913. His method produced nitric acid from electrolysis of calcium nitrate converted by bacteria from nitrogenous matter in peat bogs. An earthenware pot surrounded by limestone was sunk into the peat and staked with tarred lumber to make a compartment for the carbon anode around which the nitric acid is formed. Nitric acid was pumped out from an earthenware pipe that was sunk down to the bottom of the pot. Fresh water was pumped into the top through another earthenware pipe to replace the fluid removed. The interior was filled with coke. Cast iron cathodes were sunk into the peat surrounding it. Resistance was about 3 ohms per cubic meter and the power supplied was around 10 volts. Production from one deposit was 800 tons per year. Once the Haber process for the efficient production of ammonia was introduced in 1913, nitric acid production from ammonia using the Ostwald process overtook production from the Birkeland–Eyde process. This method of production is still in use today. Physical and chemical properties Commercially available nitric acid is an azeotrope with water at a concentration of 68% . This solution has a boiling temperature of 120.5 °C (249 °F) at 1 atm. It is known as "concentrated nitric acid". The azeotrope of nitric acid and water is a colourless liquid at room temperature. Two solid hydrates are known: the monohydrate or oxonium nitrate and the trihydrate . An older density scale is occasionally seen, with concentrated nitric acid specified as 42 Baumé. Contamination with nitrogen dioxide Nitric acid is subject to thermal or light decomposition and for this reason it was often stored in brown glass bottles: This reaction may give rise to some non-negligible variations in the vapor pressure above the liquid because the nitrogen oxides produced dissolve partly or completely in the acid. The nitrogen dioxide () and/or dinitrogen tetroxide () remains dissolved in the nitric acid coloring it yellow or even red at higher temperatures. While the pure acid tends to give off white fumes when exposed to air, acid with dissolved nitrogen dioxide gives off reddish-brown vapors, leading to the common names "red fuming nitric acid" and "white fuming nitric acid". Nitrogen oxides () are soluble in nitric acid. Fuming nitric acid Commercial-grade fuming nitric acid contains 98% and has a density of 1.50 g/cm3. This grade is often used in the explosives industry. It is not as volatile nor as corrosive as the anhydrous acid and has the approximate concentration of 21.4 M. Red fuming nitric acid, or RFNA, contains substantial quantities of dissolved nitrogen dioxide () leaving the solution with a reddish-brown color. Due to the dissolved nitrogen dioxide, the density of red fuming nitric acid is lower at 1.490 g/cm3. 
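The thermal and photolytic decomposition discussed under "Contamination with nitrogen dioxide" above can be written out explicitly; the equation below uses the standard stoichiometry (the equation itself is not reproduced in the text).

```latex
% Decomposition of nitric acid into nitrogen dioxide, oxygen and water:
\[
  4\,\mathrm{HNO_3} \;\longrightarrow\; 4\,\mathrm{NO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O}
\]
```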
An inhibited fuming nitric acid, either white inhibited fuming nitric acid (IWFNA), or red inhibited fuming nitric acid (IRFNA), can be made by the addition of 0.6 to 0.7% hydrogen fluoride (HF). This fluoride is added for corrosion resistance in metal tanks. The fluoride creates a metal fluoride layer that protects the metal. Anhydrous nitric acid White fuming nitric acid, pure nitric acid or WFNA, is very close to anhydrous nitric acid. It is available as 99.9% nitric acid by assay, or about 24 molar. One specification for white fuming nitric acid is that it has a maximum of 2% water and a maximum of 0.5% dissolved . Anhydrous nitric acid is a colorless, low-viscosity (mobile) liquid with a density of 1.512–3 g/cm3 that solidifies at to form white crystals. Its dynamic viscosity under standard conditions is 0.76 cP. As it decomposes to and water, it obtains a yellow tint. It boils at . It is usually stored in a glass shatterproof amber bottle with twice the volume of head space to allow for pressure build up, but even with those precautions the bottle must be vented monthly to release pressure. Structure and bonding The two terminal N–O bonds are nearly equivalent and relatively short, at 1.20 and 1.21 Å. This can be explained by theories of resonance; the two major canonical forms show some double bond character in these two bonds, causing them to be shorter than N–O single bonds. The third N–O bond is elongated because its O atom is bonded to H atom, with a bond length of 1.41 Å in the gas phase. The molecule is slightly aplanar (the and NOH planes are tilted away from each other by 2°) and there is restricted rotation about the N–OH single bond. Reactions Acid-base properties Nitric acid is normally considered to be a strong acid at ambient temperatures. There is some disagreement over the value of the acid dissociation constant, though the pKa value is usually reported as less than −1. This means that the nitric acid in diluted solution is fully dissociated except in extremely acidic solutions. The pKa value rises to 1 at a temperature of 250 °C. Nitric acid can act as a base with respect to an acid such as sulfuric acid: ;Equilibrium constant: K ≈ 22 The nitronium ion, , is the active reagent in aromatic nitration reactions. Since nitric acid has both acidic and basic properties, it can undergo an autoprotolysis reaction, similar to the self-ionization of water: Reactions with metals Nitric acid reacts with most metals, but the details depend on the concentration of the acid and the nature of the metal. Dilute nitric acid behaves as a typical acid in its reaction with most metals. Magnesium, manganese, and zinc liberate : Nitric acid can oxidize non-active metals such as copper and silver. With these non-active or less electropositive metals the products depend on temperature and the acid concentration. For example, copper reacts with dilute nitric acid at ambient temperatures with a 3:8 stoichiometry: The nitric oxide produced may react with atmospheric oxygen to give nitrogen dioxide. With more concentrated nitric acid, nitrogen dioxide is produced directly in a reaction with 1:4 stoichiometry: Upon reaction with nitric acid, most metals give the corresponding nitrates. Some metalloids and metals give the oxides; for instance, Sn, As, Sb, and Ti are oxidized into , , , and respectively. 
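The metal reactions described above can be written out in full; the equations below follow the standard textbook stoichiometry implied by the text (hydrogen evolution with very dilute acid, the 3:8 ratio for copper with dilute acid, and the 1:4 ratio with concentrated acid), since the equations themselves are elided.

```latex
% Magnesium with very dilute nitric acid, liberating hydrogen:
\[
  \mathrm{Mg} + 2\,\mathrm{HNO_3} \;\longrightarrow\; \mathrm{Mg(NO_3)_2} + \mathrm{H_2}
\]
% Copper with dilute nitric acid (3:8 stoichiometry, giving nitric oxide):
\[
  3\,\mathrm{Cu} + 8\,\mathrm{HNO_3} \;\longrightarrow\; 3\,\mathrm{Cu(NO_3)_2} + 2\,\mathrm{NO} + 4\,\mathrm{H_2O}
\]
% Copper with concentrated nitric acid (1:4 stoichiometry, giving nitrogen dioxide):
\[
  \mathrm{Cu} + 4\,\mathrm{HNO_3} \;\longrightarrow\; \mathrm{Cu(NO_3)_2} + 2\,\mathrm{NO_2} + 2\,\mathrm{H_2O}
\]
```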
Some precious metals, such as pure gold and platinum-group metals do not react with nitric acid, though pure gold does react with aqua regia, a mixture of concentrated nitric acid and hydrochloric acid. However, some less noble metals (Ag, Cu, ...) present in some gold alloys relatively poor in gold such as colored gold can be easily oxidized and dissolved by nitric acid, leading to colour changes of the gold-alloy surface. Nitric acid is used as a cheap means in jewelry shops to quickly spot low-gold alloys (< 14 karats) and to rapidly assess the gold purity. Being a powerful oxidizing agent, nitric acid reacts with many non-metallic compounds, sometimes explosively. Depending on the acid concentration, temperature and the reducing agent involved, the end products can be variable. Reaction takes place with all metals except the noble metals series and certain alloys. As a general rule, oxidizing reactions occur primarily with the concentrated acid, favoring the formation of nitrogen dioxide (). However, the powerful oxidizing properties of nitric acid are thermodynamic in nature, but sometimes its oxidation reactions are rather kinetically non-favored. The presence of small amounts of nitrous acid () greatly increases the rate of reaction. Although chromium (Cr), iron (Fe), and aluminium (Al) readily dissolve in dilute nitric acid, the concentrated acid forms a metal-oxide layer that protects the bulk of the metal from further oxidation. The formation of this protective layer is called passivation. Typical passivation concentrations range from 20% to 50% by volume. Metals that are passivated by concentrated nitric acid are iron, cobalt, chromium, nickel, and aluminium. Reactions with non-metals Being a powerful oxidizing acid, nitric acid reacts with many organic materials, and the reactions may be explosive. The hydroxyl group will typically strip a hydrogen from the organic molecule to form water, and the remaining nitro group takes the hydrogen's place. Nitration of organic compounds with nitric acid is the primary method of synthesis of many common explosives, such as nitroglycerin and trinitrotoluene (TNT). As very many less stable byproducts are possible, these reactions must be carefully thermally controlled, and the byproducts removed to isolate the desired product. Reaction with non-metallic elements, with the exceptions of nitrogen, oxygen, noble gases, silicon, and halogens other than iodine, usually oxidizes them to their highest oxidation states as acids with the formation of nitrogen dioxide for concentrated acid and nitric oxide for dilute acid. Concentrated nitric acid oxidizes , , and into , , and , respectively. Although it reacts with graphite and amorphous carbon, it does not react with diamond; it can separate diamond from the graphite that it oxidizes. Xanthoproteic test Nitric acid reacts with proteins to form yellow nitrated products. This reaction is known as the xanthoproteic reaction. This test is carried out by adding concentrated nitric acid to the substance being tested, and then heating the mixture. If proteins that contain amino acids with aromatic rings are present, the mixture turns yellow. Upon adding a base such as ammonia, the color turns orange. These color changes are caused by nitrated aromatic rings in the protein. Xanthoproteic acid is formed when the acid contacts epithelial cells. Respective local skin color changes are indicative of inadequate safety precautions when handling nitric acid. 
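As an illustration of the oxidation of non-metallic elements to their highest oxidation states described above, with nitrogen dioxide formed by the concentrated acid and nitric oxide by the dilute acid, two standard textbook equations (supplied here as examples, not recovered from the article's stripped formulas) are:

S + 6 HNO3 (concentrated) → H2SO4 + 6 NO2 + 2 H2O
3 P + 5 HNO3 (dilute) + 2 H2O → 3 H3PO4 + 5 NO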
Production Industrial nitric acid production uses the Ostwald process. The combined Ostwald and Haber processes are extremely efficient, requiring only air and natural gas feedstocks. The Ostwald process' technical innovation is the proper conditions under which anhydrous ammonia burns to nitric oxide (NO) instead of dinitrogen (). The nitric oxide is then oxidized, often with atmospheric oxygen, to nitrogen dioxide (): The dioxide then disproportionates in water to nitric acid and the nitric oxide feedstock: The net reaction is maximal oxidation of ammonia: Dissolved nitrogen oxides are either stripped (in the case of white fuming nitric acid) or remain in solution to form red fuming nitric acid. Commercial grade nitric acid solutions are usually between 52% and 68% nitric acid by mass, the maximum distillable concentration. Further dehydration to 98% can be achieved with concentrated . Historically, higher acid concentrations were also produced by dissolving additional nitrogen dioxide in the acid, but the last plant in the United States ceased using that process in 2012. More recently, electrochemical means have been developed to produce anhydrous acid from concentrated nitric acid feedstock. Laboratory synthesis Laboratory-scale nitric acid syntheses abound. Most take inspiration from the industrial techniques. A wide variety of nitrate salts metathesize with sulfuric acid () — for example, sodium nitrate: Distillation at nitric acid's 83 °C boiling point then separates the solid metal-salt residue. The resulting acid solution is the 68.5% azeotrope, and can be further concentrated (as in industry) with either sulfuric acid or magnesium nitrate. Alternatively, thermal decomposition of copper(II) nitrate gives nitrogen dioxide and oxygen gases; these are then passed through water or hydrogen peroxide as in the Ostwald process: or Uses The main industrial use of nitric acid is for the production of fertilizers. Nitric acid is neutralized with ammonia to give ammonium nitrate. This application consumes 75–80% of the 26 million tonnes produced annually (1987). The other main applications are for the production of explosives, nylon precursors, and specialty organic compounds. Precursor to organic nitrogen compounds In organic synthesis, industrial and otherwise, the nitro group is a versatile functional group. A mixture of nitric and sulfuric acids introduces a nitro substituent onto various aromatic compounds by electrophilic aromatic substitution. Many explosives, such as TNT, are prepared this way: Either concentrated sulfuric acid or oleum absorbs the excess water. The nitro group can be reduced to give an amine group, allowing synthesis of aniline compounds from various nitrobenzenes: Use as an oxidant The precursor to nylon, adipic acid, is produced on a large scale by oxidation of "KA oil"—a mixture of cyclohexanone and cyclohexanol—with nitric acid. Rocket propellant Nitric acid has been used in various forms as the oxidizer in liquid-fueled rockets. These forms include red fuming nitric acid, white fuming nitric acid, mixtures with sulfuric acid, and these forms with HF inhibitor. IRFNA (inhibited red fuming nitric acid) was one of three liquid fuel components for the BOMARC missile. Niche uses Metal processing Nitric acid can be used to convert metals to oxidized forms, such as converting copper metal to cupric nitrate. It can also be used in combination with hydrochloric acid as aqua regia to dissolve noble metals such as gold (as chloroauric acid). 
These salts can be used to purify gold and other metals beyond 99.9% purity by processes of recrystallization and selective precipitation. Its ability to dissolve certain metals selectively or be a solvent for many metal salts makes it useful in gold parting processes. Analytical reagent In elemental analysis by ICP-MS, ICP-AES, GFAA, and Flame AA, dilute nitric acid (0.5–5.0%) is used as a matrix compound for determining metal traces in solutions. Ultrapure trace metal grade acid is required for such determination, because small amounts of metal ions could affect the result of the analysis. It is also typically used in the digestion process of turbid water samples, sludge samples, solid samples as well as other types of unique samples which require elemental analysis via ICP-MS, ICP-OES, ICP-AES, GFAA and flame atomic absorption spectroscopy. Typically these digestions use a 50% solution of the purchased mixed with Type 1 DI Water. In electrochemistry, nitric acid is used as a chemical doping agent for organic semiconductors, and in purification processes for raw carbon nanotubes. Woodworking In a low concentration (approximately 10%), nitric acid is often used to artificially age pine and maple. The color produced is a grey-gold very much like very old wax- or oil-finished wood (wood finishing). Etchant and cleaning agent The corrosive effects of nitric acid are exploited for some specialty applications, such as etching in printmaking, pickling stainless steel or cleaning silicon wafers in electronics. A solution of nitric acid, water and alcohol, nital, is used for etching metals to reveal the microstructure. ISO 14104 is one of the standards detailing this well known procedure. Nitric acid is used either in combination with hydrochloric acid or alone to clean glass cover slips and glass slides for high-end microscopy applications. It is also used to clean glass before silvering when making silver mirrors. Commercially available aqueous blends of 5–30% nitric acid and 15–40% phosphoric acid are commonly used for cleaning food and dairy equipment primarily to remove precipitated calcium and magnesium compounds (either deposited from the process stream or resulting from the use of hard water during production and cleaning). The phosphoric acid content helps to passivate ferrous alloys against corrosion by the dilute nitric acid. Nitric acid can be used as a spot test for alkaloids like LSD, giving a variety of colours depending on the alkaloid. Nuclear fuel reprocessing Nitric acid plays a key role in PUREX and other nuclear fuel reprocessing methods, where it can dissolve many different actinides. The resulting nitrates are converted to various complexes that can be reacted and extracted selectively in order to separate the metals from each other. Safety Nitric acid is a corrosive acid and a powerful oxidizing agent. The major hazard posed by it is chemical burns, as it carries out acid hydrolysis with proteins (amide) and fats (ester), which consequently decomposes living tissue (e.g. skin and flesh). Concentrated nitric acid stains human skin yellow due to its reaction with the keratin. These yellow stains turn orange when neutralized. Systemic effects are unlikely, and the substance is not considered a carcinogen or mutagen. The standard first-aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water. Washing is continued for at least 10–15 minutes to cool the tissue surrounding the acid burn and to prevent secondary damage. 
Contaminated clothing is removed immediately and the underlying skin washed thoroughly. Being a strong oxidizing agent, nitric acid can react violently with many compounds. Use in acid attacks Nitric acid is one of the most common types of acid used in acid attacks. Notes References External links NIOSH Pocket Guide to Chemical Hazards National Pollutant Inventory – Nitric Acid Fact Sheet Calculators: surface tensions , and densities, molarities and molalities of aqueous nitric acid Pnictogen oxoacids Nitrogen oxoacids Mineral acids Photographic chemicals Drug testing reagents Oxidizing acids Nitrogen(V) compounds
Nitric acid
Chemistry
4,754
76,122,044
https://en.wikipedia.org/wiki/SmithKline%20Beecham%20Clinical%20Laboratories
SmithKline Beecham Clinical Laboratories (SBCL) was an American-based medical laboratory company that was acquired by Quest Diagnostics in 1999 for $1.3 billion. Controversies In 1989, SBCL had to pay a $1.5 million fine for illegal laboratory referral kickbacks. In 1997, Operation LabScam forced SBCL to agree to pay a $325 million settlement for billing Medicare and Medicaid for tests that physicians were misled into believing were free, violating the 1863 False Claims Act. In 1998, a phlebotomist at an SBCL facility in Palo Alto, California was exposed as reusing needles to save money. As a result, over 3,600 patients had to receive testing and counseling for HIV and hepatitis. The incident led to phlebotomy licensure in California. References Life sciences industry
SmithKline Beecham Clinical Laboratories
Biology
172
75,213,407
https://en.wikipedia.org/wiki/Maria%20Pereira
Maria Pereira (born 1986, Leiria, Portugal) is a Portuguese bioengineering scientist, creator of a glue to close open wounds without damaging tissues. Biography She was born in 1986 in Leiria. She holds a degree in Pharmaceutical Sciences from the University of Coimbra, in Portugal, and a PhD in Bioengineering from the Massachusetts Institute of Technology (MIT), in the United States, thanks to the scholarship she was awarded by the MIT-Portugal Program in 2007. She is known for having created a glue to close open wounds without damaging tissue, which is used, for example, in delicate heart operations and to treat babies with congenital heart defects, which affect about one in 100 newborns and are the leading cause of infant death in the United States. Pereira worked on her project to develop a glue that could be used anywhere in the body, including the heart. The glue needed to meet many conditions at once: withstand humidity and dynamic conditions, be elastic enough to expand and contract with each heartbeat, be hydrophobic (to repel blood from the surface), and be biodegradable and non-toxic. In 2012, she succeeded and met a further criterion: the glue she invented adheres only where intended, when the surgeon shines a light on it, giving the surgeon total control over the process. She has been an in-house researcher at Gecko Biomedical, a biotechnology and medical company in Paris, since October 2013. Political career On December 28, 2015, at the age of 29, she was presented as a national representative for Marcelo Rebelo de Sousa's candidacy in the 2016 presidential elections. Awards In 2012, Novartis considered her one of four world leaders in her field. The MIT Technology Review magazine in 2014 included her in its annual list of "innovators under 35". In early 2015, she was recognized by Forbes magazine as one of the world's 30 promising talents under the age of 30. In September 2015, Time magazine considered her a "next generation leader". References External links Meetup with María Pereira at Labiotech 2017, on YouTube Maria Pereira at Gecko Biomedical. Living people 1986 births 21st-century Portuguese scientists Portuguese women scientists Bioengineers Women bioengineers Women inventors Massachusetts Institute of Technology alumni
Maria Pereira
Engineering,Biology
460
65,869,118
https://en.wikipedia.org/wiki/Economic%20evaluation%20of%20time
In organizational behavior and psychology, Economic evaluation of time refers to perceiving of time in terms of money. (Other forms of evaluation of time are concerned with costs and benefits to the general community of changes in time-dependent activities.) When a person evaluates their time in monetary terms, time is viewed as a scarce resource that should be used as efficiently as possible to maximize the perceived monetary gains. Therefore, people who evaluate their time in terms of money are more likely to trade their time for money (i.e., workers provide their time to organizations in exchange for money)—as illustrated by research examining time and money trade-offs. Trading time for money is revealed through people's time use decisions. Across both mundane and major life decisions, people who evaluate their time in terms of money tend to spend their time in ways that give them more money at the expense of acquiring more time (e.g., driving to a cheaper, yet farther away gas station). Research found that, across these decisions, choosing to get more money at the expense of getting more time is associated with lower subjective well-being. Furthermore, the activation of economic evaluation of time has primarily been studied in organizational behavior research with hourly payment schedules and performance incentives, which are robust predictors of economic evaluation of time. The psychological effects of receiving hourly payment and performance incentives promote the economic evaluations of time, and in turn lead employees to spend their time in ways that maximize personal success and economic gains, such as working more hours, socializing less with loved ones, and volunteering less. Time and Money Time is money The idea that time can be evaluated in monetary terms was first introduced by Benjamin Franklin in his 1748 essay Advice to a Young Tradesman. His famous adage 'time is money', that appeared in this essay, was intended to convey that wasting time in frivolous pursuits results in lost money. He believed that wasting time wasted money in two ways. First, by not earning money. Second, by spending money during non-working time. A great number of researchers argue that this aphorism is true in Western societies. Jean-Claude Usunier noted that "the United States is quite emblematic of the 'time is money' cultures, where time is an economic good. Since time is a scarce resource, or at least perceived as such, people should try to reach its optimal allocation, between competing ways of using it." Consistent with this line of thought, literature on economic evaluation of time views that people can treat time and money in similar ways (and they are tradeable) in certain contexts. Specifically, organizational practices, such as hourly payment schedules and exposure to the concept of 'money', are significant activators of economic evaluation of time. Differences in time and money However, a different line of research provides contrasting arguments by showing that people evaluate time and money very differently. In particular, money has a readily exchangeable market where people can buy, sell, borrow, and save, which is impossible to do with time. A lost dollar has potential to be earned back tomorrow, yet a lost minute cannot be recouped. In a study done by LeClerc, Schmitt, and Dube, people were more risk-averse to uncertainties that involved losses of time compared to money (whereas, according to prospect theory, people are risk-seeking under decisions that involve losses of money). 
For example, people were less likely to choose to wait 90 minutes over 60 minutes for sure than they were to choose the chance of losing $15 over $10 for sure. Okada and Hoch also found systematic differences in how people spent time versus money, and these differences in spending patterns were explained by the ambiguity in the value of time in contrast to money that was perceived as more fungible. For example, people believed that they will have more time in the future than now (which leads to greater slack and procrastination), yet people did not overestimate the amount of money they will have in the future than now. Time and money also differ in their connections to people's self-concepts. People perceive that their temporal expenditures, such as spending leisure time, are more reflective of their self-concept, as compared to their monetary expenditures. For instance, Reed and his colleagues found that people view donations of time (e.g., volunteering) as higher in moral value and more self-expressive than monetary donations. Similarly, Carter and Gilovich found that people’s experiences are more critical to their personal narrative than material goods. Therefore, although people do express aspects of their self-identity through purchasing of material goods as well, expenditures of time may constitute people’s lives more strongly. Together, the fact that people can view their time in terms of money may not be true across all contexts. Factors that Promote Economic Evaluation of Time Money Research looking at the relationship between time and money found that activating the concept of money can heighten people's focus on the goal of maximizing economic gains. 
Thinking about their time in terms of money (economic evaluation of time), subsequently impacts people's decisions about time-use and attitude toward others (see 'Consequences' section). The focus on money can be induced in laboratory settings, as well as in organizational contexts, such as under hourly payment schedules and performance incentives, which are explained in detail below. Laboratory tasks People can be primed to think about money through simple laboratory procedures. Studies found that people who were asked to formulate sentences using money-relevant words (e.g., price) versus time-relevant words (e.g., clock) primed people to think about money, and they became more self-focused in their decisions about time use. For example, participants who were primed to think about money spent more time working and less time socializing with friends. They were also far less likely to help others or seek help. There are various other manipulation techniques used to prime the concept of money. The 'descrambling task' consists of 30 sets of five jumbled words, where participants are asked to formulate sensible phrases using four of the five words. In the control conditions, all 30 of the phrases primed neutral concepts (e.g., “cold it desk outside is” descrambled to “it is cold outside”). In the money-prime condition, 15 of the phrases primed the concept of money (e.g., “high a salary desk paying” descrambled to “a high-paying salary”). Other studies presented participants with money bills versus paper sheets to prime the concept of money. However, some of recent studies in the money-priming research failed to replicate these results. Across several experiments, the same manipulation (e.g., showing an image of a $100 bill) did reliably activate the concept of money; however, it did not have consistent effects on several dependent measures including subjective wealth, self-sufficiency, agency, and communion, which are theorized to be influenced by the thought of money. Furthermore, socioeconomic factors such as gender, socioeconomic status, and political ideology did not moderate these effects of money primes. Since variance in study population and methods are inevitable across experiments, these laboratory studies should therefore be interpreted with caution. Caruso and his colleagues suggest that using large-scale pre-registered experiment and assessing wide-ranging individual factors within the same heterogeneous sample will be helpful in identifying meaningful variations among the dependent variables. Performance incentive Research found that certain organizational practices promote economic evaluation of time. One such factor is performance incentives, a ubiquitous payment system used in various domains including education, health, and management. The main alternative to performance incentives is task-based incentive (also known as fixed incentive)—a fixed amount of payment for completing a task. Performance incentives, as compared to task-based incentives, increase people's attention to reward objects, which in turn heighten their desire for money. This desire then motivates people's focus on earning monetary and material rewards, and decreases prosocial spending like making donations. Hourly payment One of the most salient features in organizations that induce the economic evaluation of time is hourly pay, a type of payment schedule that approximately 58% of employees work under in the United States. 
Time and money connection is particularly salient under hourly payment because people's income is a direct function of the number of hours they worked, multiplied by their rate of pay. Sanford DeVoe and Jeffery Pfeffer found that workers who were paid by the hour showed more similarity in how they evaluated time and money, as compared to workers who were paid by salary. Specifically, people who were paid by the hour (vs. salary) applied mental accounting rules to time that are typically only applied to money. Participants were asked to rate their endorsement on a mental accounting questionnaire (e.g., "If I have wasted money [time] on a particular activity or item, I try to save it on another activity or item."), where DeVoe and Pfeffer found that hourly wage participants showed high similarity in how they applied mental accounting rules to both time and money, whereas salaried participants did not apply mental accounting rules to time. Furthermore, research looking at the economic evaluation of time proved that an 'economic mindset' can be induced in laboratory settings through hourly wage calculations. Although not everyone is paid by the hour, every worker has an implicit hourly wage—their total income divided by the number of hours they work. Therefore, participants who calculated their hourly wage in an experiment versus those who did not calculate their hourly wage were more likely to adopt an economic mindset and were more willing to trade their time for money. DeVoe and Pfeffer also showed that the mechanism for how hourly wage payment activates economic evaluation of time is the people's viewing of themselves as the economic evaluator in their decision-making. This suggests that the mere activation of an economic concept, such as hourly wage in general or of another person, itself cannot activate the economic evaluation of time. Rather, a person's own prior experience with hourly payment or calculating one's own hourly wage (vs. another person's hourly wage) is what activates the economic evaluation. Therefore, the degree to which hourly payment impacts an individual's attitudes and behaviors depends on the extent to which the economic evaluation becomes more central to one's self-concept. Consequences Devaluing non-compensated time Economic evaluation of time impacts people's decisions about time use. A salient outcome for adopting an economic mindset, or thinking about time in terms of money, is devaluing of non-compensated time. Results from a survey of nationally representative sample of Americans from the May 2001 Current Population Survey (CPS) Work Schedule Supplement showed that people who were paid by the hour, compared to those not paid by the hour, weighed the monetary returns more strongly when making decisions about time use. Therefore, they showed greater willingness to give up their free time to earn more money ("Work more hours but earn more money" vs. "Work fewer hours but earn less money"). Another study demonstrated that technical contractors who sold their services by the hour came to evaluate their time in terms of money, which led the contractors to devalue non-compensated time (e.g., volunteering). These non-compensated time use domains are discussed below. Volunteering People who are paid by the hour (vs. salary) volunteer less. In the laboratory, participants who calculated their hourly wage (vs. those who did not calculate their hourly wage), volunteered less and also reported that they are less willing to volunteer their time. 
Pro-environmental behavior People who are paid by the hour are less likely to engage in pro-environmental behaviors, such as recycling. Simply asking participants to calculate their hourly wage lowered their willingness to engage in environmental behaviors as well as their actual behaviors in recycling scrap papers in a laboratory experiment. This is due to the hourly participants' spontaneous recognition of the trade-offs they are making with every minute of their time. People feel as if they are losing money when engaging in environmental activities because these are non-compensated. Social interaction Economic evaluation of time undermines social interactions. Thinking about money increases people's willingness to work and reduces their willingness to spend time with others. In an experiment done by Cassie Mogilner, participants who thought about money-related words (e.g., price), compared to participants who thought about time-related words (e.g., clock), were significantly more likely to spend time working more and socializing less with loved ones. As such, people who are focused on money are less interpersonally attuned—they are less caring and warm and rather in a business mindset. Well-being Economic evaluation of time has multiple negative implications for well-being. Economic evaluation of time activates the human motivation system that is associated with self-focused values. People with an economic mindset therefore tend to prioritize personal achievement more than the wellbeing of others and spend time in ways that maximize personal gains. This tendency negatively contributes to well-being. First, evaluating time in terms of money motivates people to work more because every hour they put into non-compensated activities is lost money. Although this may be useful when trying to meet a short deadline at work, work time does not typically translate into happiness. However, spending time with loved ones, such as family and friends, spending time volunteering, and engaging in pro-environmental behaviors have been found to contribute to greater happiness. Daniel Kahneman also demonstrated that being prosocial and socializing with friends are known to be the happiest part of most people's days. Economic evaluation of time that decreases these happiness-promoting activities may therefore have grave consequences on well-being. References Economics and time Psychological effects Organizational behavior Personal finance
Economic evaluation of time
Physics,Biology
5,251
1,569,338
https://en.wikipedia.org/wiki/HD%20192263
HD 192263 is a star with an orbiting exoplanet in the equatorial constellation of Aquila. The system is located at a distance of 64 light years from the Sun based on parallax measurements, and is drifting further away with a radial velocity of −10.7 km/s. It has an absolute magnitude of 6.36, but at that distance the apparent visual magnitude is 7.79. It is too faint to be viewed with the naked eye, but with good binoculars or a small telescope it should be easy to spot. In the late 1990s, Klaus G. Strassmeier et al. discovered that HD 192263 is a variable star while conducting a search for stars that would be good candidates for Doppler imaging. It was given its variable star designation, V1703 Aquilae, in 2006. The spectrum of HD 192263 matches a K-type main-sequence star, an orange dwarf, with a stellar classification of K1/2 V. This is a BY Draconis variable, with variations in luminosity being caused by star spots on a rotating stellar atmosphere. It has a high level of magnetic activity in its chromosphere. The star is being viewed almost equator-on, with a projected rotational velocity of 2 km/s. It has 65% of the mass of the Sun, 74% of the Sun's radius, and is roughly 6.6 billion years old. The star is radiating 30% of the luminosity of the Sun from its photosphere at an effective temperature of 4,955 K. The star HD 192263 is named Phoenicia. The name was selected in the NameExoWorlds campaign by Lebanon, during the 100th anniversary of the IAU. Phoenicia was an ancient thalassocratic civilisation of the Mediterranean that originated from the area of modern-day Lebanon. Various companions for the star have been reported, but all of them are probably line-of-sight optical components or just spurious observations. Planetary system On 28 September 1999, an exoplanet around HD 192263 was found by the Geneva Extrasolar Planet Search team using the CORALIE spectrograph on the 1.2m Euler Swiss Telescope at La Silla Observatory, discovered independently by Vogt et al. The exoplanet is named Beirut after the capital and largest city of Lebanon. See also List of exoplanets discovered before 2000 - HD 192263 b / Beirut References External links K-type main-sequence stars BY Draconis variables Planetary systems with one confirmed planet Aquila (constellation) Durchmusterung objects 192263 099711 Aquilae, V1703
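The quoted distance, absolute magnitude, and apparent magnitude are mutually consistent under the standard distance-modulus relation m = M + 5 log10(d / 10 pc). A quick check, using the standard conversion 1 pc ≈ 3.2616 light years:

import math

d_pc = 64.0 / 3.2616                      # distance in parsecs, ~19.6 pc
m = 6.36 + 5.0 * math.log10(d_pc / 10.0)  # apparent magnitude implied by the absolute magnitude
print(round(m, 2))                        # ~7.82, close to the quoted 7.79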
HD 192263
Astronomy
562
33,559,182
https://en.wikipedia.org/wiki/Anatoly%20Logunov
Anatoly Alekseyevich Logunov (; December 30, 1926 – March 1, 2015) was a Soviet and Russian theoretical physicist, academician of the USSR Academy of Sciences and Russian Academy of Sciences. He was awarded the Bogolyubov Prize in 1996. Biography Anatoly Logunov was born in Obsharovka village, now in Privolzhsky District, Samara Oblast, Russia. In 1951 he graduated from Moscow University, where he studied theoretical physics. From 1954 to 1956 he worked at Moscow University, and later at the Joint Institute for Nuclear Research (Dubna). He became doktor nauk in 1959 and professor in 1961. In 1968 he was elected a corresponding member of the USSR Academy of Sciences. In 1971 the department of quantum theory and high energy physics was founded at the Faculty of Physics of Moscow University. Anatoly Logunov headed this department from its founding until at least 2006. In 1972 Anatoly Logunov was elected an academician in the field of nuclear physics. From 1977 till 1992 he was the Rector of Moscow University. Anatoly Logunov died on 1 March 2015 in Moscow, Russia. He was buried at Troyekurovskoye Cemetery in Moscow. Research Logunov made a notable contribution to elementary particle physics and quantum field theory. In 1956 he constructed generalized finite multiplicative renormalization groups and the functional and differential renormalization group equations of electrodynamics for an arbitrary gauge. Jointly with Piotr Isayev (Russian: Пётр Степанович Исаев), Lev Soloviov (Russian: Лев Дмитриевич Соловьев), Albert Tavkhelidze (Russian: Альберт Никифорович Тавхелидзе) and Ivan Todorov (Bulgarian: Иван Тодоров) et al. he derived dispersion relations for various processes of elementary particle interactions, among them the photoproduction of π-mesons on nucleons. He also studied Bell's spaceship paradox and the ideas of Henri Poincaré. Relativistic theory of gravitation After studying the works of Poincaré, Lorentz, Hilbert and Einstein in great detail, Logunov and his colleagues developed the relativistic theory of gravitation (RTG), a theory of gravitation alternative to the general theory of relativity. RTG is constructed in the framework of the special theory of relativity. It asserts that the gravitational field, like all other physical fields, develops in Minkowski space, while the source of this field is the conserved energy-momentum tensor of matter, including the gravitational field itself. This approach permits constructing, in a unique and unambiguous manner, the theory of the gravitational field as a gauge theory. Here, there arises an effective Riemannian space, which literally has a field nature. Unlike General Relativity (GR), according to which space is considered to be Riemannian owing to the presence of matter and gravity is considered a consequence of space-time exhibiting curvature, the RTG gravitational field has spins 2 and 0 and represents a physical field in the Faraday–Maxwell spirit. In RTG, unlike GR, the energy-momentum and the angular momentum conservation laws are fulfilled. Moreover, analysis of the development of a homogeneous and isotropic Universe within RTG leads to the conclusion that the Universe is infinite, and that it is "flat". It evolves cyclically from a certain maximum density down to a minimum and so on. Thus, no pointlike Big Bang occurred in the past. There existed a state of high density and high temperature at each point in space. 
The theory also predicts the existence in the Universe of a large hidden mass of "dark matter" and the impossibility of infinite gravitational collapse (no black holes). Positions Director of the Institute for High Energy Physics (1963—1974 and 1993—2003) Scientific director of the Institute for High Energy Physics (1974–2015) Vice-president of the Academy of Sciences of the USSR (26 November 1974 – 19 December 1991) Rector of Moscow State University (1977—1992) Deputy of the USSR Supreme Soviet (1979–1989) Candidate member of the CPSU Central Committee (1981–1986) Member of the CPSU Central Committee (1986) Head of the editorial board of the series "Materials for a Bibliography of Scientists" (Russian: "Материалы к библиографии ученых") Awards and recognition Orders and medals An incomplete list follows. Hero of Socialist Labour (1980) Order "For Merit to the Fatherland" (2nd degree, 2002) Order "For Merit to the Fatherland" (3rd degree, 1995) Order of Lenin (1971, 1975, 1986) Order of the Badge of Honour (1962) Jubilee Medal "In Commemoration of the 100th Anniversary of the Birth of Vladimir Ilyich Lenin" (1970) Prizes Lenin Prize (1970) USSR State Prize in the field of engineering (1973, 1974) Bogolyubov Prize (1996) Russian Federation Presidential Certificate of Honour (2012) Anatoly Logunov was elected an honorary doctor of the Humboldt University of Berlin, Comenius University in Bratislava, the University of Havana, Charles University in Prague, Sofia University, the University of Helsinki and a number of universities in Japan, and a full professor of the Department of Theoretical Physics of the Institute of Fundamental Research (Molise, Italy). He was a foreign member of the Bulgarian Academy of Sciences (1978), the Academy of Sciences of East Germany (1978), and the Georgian Academy of Sciences (1996). Works Bogolubov, N.N., Logunov, A.A., Oksak, A.I., Todorov, I. (1990) General Principles of Quantum Field Theory. References External links Soviet physicists 20th-century Russian physicists Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences 1926 births 2015 deaths Candidates of the Central Committee of the 26th Congress of the Communist Party of the Soviet Union Members of the Central Committee of the 27th Congress of the Communist Party of the Soviet Union Foreign members of the Bulgarian Academy of Sciences Burials in Troyekurovskoye Cemetery Rectors of Moscow State University Relativity critics
Anatoly Logunov
Physics
1,329
28,452,312
https://en.wikipedia.org/wiki/Sea%20ice%20emissivity%20modelling
With increased interest in sea ice and its effects on the global climate, efficient methods are required to monitor both its extent and exchange processes. Satellite-mounted microwave radiometers, such as SSMI, AMSR and AMSU, are an ideal tool for the task because they can see through cloud cover, and they have frequent, global coverage. A passive microwave instrument detects objects through emitted radiation, since different substances have different emission spectra. To detect sea ice more efficiently, there is a need to model these emission processes. The interaction of sea ice with electromagnetic radiation in the microwave range is still not well understood. In general, the information collected is limited because of the large-scale variability of sea ice emissivity. General Satellite microwave data (and visible and infrared data, depending on the conditions) collected from sensors assume that the ocean surface is binary (ice covered or ice free), and observations are used to quantify the radiative flux. During the melt seasons in spring and summer, the sea ice surface temperature rises above freezing. Thus, passive microwave measurements are able to detect rising brightness temperatures, as the emissivity increases to almost that of a blackbody and as liquid starts to form around the ice crystals; but as melting continues, slush and then melt ponds form, and the brightness temperature drops to that of ice-free water. Because the emissivity of sea ice changes over time, and often over short time spans, the data and algorithms used to interpret findings are crucial. Effective permittivity As established in the previous section, the most important quantity in radiative transfer calculations of sea ice is the relative permittivity. Sea ice is a complex composite composed of pure ice and included pockets of air and highly saline brine. The electro-magnetic properties of such a mixture will be different from, and normally somewhere in between (though not always—see, for instance, metamaterial), those of its constituents. Since it is not just the relative composition that is important, but also the geometry, the calculation of effective permittivities introduces a high level of uncertainty. Vant et al. have performed actual measurements of sea ice relative permittivities at frequencies between 0.1 and 4.0 GHz, which they have encapsulated in an empirical formula in which the real or imaginary effective relative permittivity is expressed in terms of Vb, the relative brine volume—see sea ice growth processes—and two constants a and b. This empirical model shows some agreement with dielectric mixture models based on Maxwell's equations in the low frequency limit, such as the formula of Sihvola and Kong, which combines the relative permittivity of the background material (pure ice), the relative permittivity of the inclusion material (brine) and a depolarization factor P based on the geometry of the brine inclusions. Brine inclusions are frequently modelled as vertically oriented needles, for which the depolarization factor is P=0.5 in the vertical direction and P=0 in the horizontal. The two formulas, while they correlate strongly, disagree in both relative and absolute magnitudes. Pure ice is an almost perfect dielectric with a real permittivity of roughly 3.15 in the microwave range, which is fairly independent of frequency, while the imaginary component is negligible, especially in comparison with the brine, which is extremely lossy. 
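A minimal sketch of an effective-permittivity calculation of the kind discussed above. Because the article's own formulas were stripped, this uses a generic Maxwell Garnett-type mixing rule with a depolarization factor P, which stands in for the Vant et al. and Sihvola–Kong expressions and may differ from them in detail; the brine permittivity and brine volume below are made-up placeholder values.

import numpy as np

def mixture_permittivity(eps_ice, eps_brine, v_b, P):
    # Maxwell Garnett-type rule for aligned ellipsoidal inclusions with depolarization factor P
    d = eps_brine - eps_ice
    return eps_ice + v_b * eps_ice * d / (eps_ice + (1.0 - v_b) * P * d)

eps_ice = 3.15 + 0.001j        # pure ice: real part ~3.15, nearly lossless (see text)
eps_brine = 40.0 + 40.0j       # illustrative placeholder for the very lossy brine
print(mixture_permittivity(eps_ice, eps_brine, v_b=0.05, P=0.5))  # field across the needle-like inclusions
print(mixture_permittivity(eps_ice, eps_brine, v_b=0.05, P=0.0))  # field along the needle-like inclusions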
Meanwhile, the permittivity of the brine, which has both a large real part and a large imaginary part, is normally calculated with a complex formula based on Debye relaxation curves. Electromagnetic properties of ice When scattering is neglected, sea ice emissivity can be modelled through radiative transfer. The diagram to the right shows a ray passing through an ice sheet with several layers. These layers represent the air above the ice, the snow layer (if applicable), ice with different electro-magnetic properties and the water below the ice. Interfaces between the layers may be continuous (in the case of ice with varying salt content along the vertical axis, but formed in the same way and in the same time period), in which case the reflection coefficients, Ri will be zero, or discontinuous (in the case of the ice-snow interface), in which case reflection coefficients must be calculated—see below. Each layer is characterized by its physical properties: temperature, Ti, complex permittivity, and thickness, , and will have an upwards component of the radiation, , and a downwards component, , passing through it. Since we assume plane-parallel geometry, all reflected rays will be at the same angle and we need only account for radiation along a single line-of-sight. Summing the contributions from each layer generates the following sparse system of linear equations: where Ri is the ith reflection coefficient, calculated via the Fresnel equations and is the ith transmission coefficient: where is the transmission angle in the ith layer, from Snell's law, is the layer thickness and is the attenuation coefficient: where is the frequency and c is the speed of light—see Beer's law. The most important quantity in this calculation, and also the most difficult to establish with any certainty, is the complex refractive index, ni. Since sea ice is non-magnetic, it can be calculated from relative permittivity alone: Scattering Emissivity calculations based strictly on radiative transfer tend to underestimate the brightness temperatures of sea ice, especially in the higher frequencies, because both included brine and air pockets within the ice will tend to scatter the radiation. Indeed, as ice becomes more opaque with higher frequency, radiative transfer becomes less important while scattering processes begin to dominate. Scattering in sea ice is frequently modelled with a Born approximation such as in strong fluctuation theory. Scattering coefficients calculated at each layer must also be vertically integrated. The Microwave Emission Model of Layered Snowpack (MEMLS) uses a six-flux radiative transfer model to integrate both the scattering coefficients and the effective permittivities with scattering coefficients calculated either empirically or with a distorted Born approximation. Scattering processes in sea ice are relatively poorly understood and scattering models poorly validated empirically. Other factors There are many other factors not accounted for in the models described above. Mills and Heygster, for instance, show that sea ice ridging may have a significant effect on the signal. In such case, the ice can no longer be modelled using plane-parallel geometry. In addition to ridging, surface scattering from smaller-scale roughness must also be considered. Since the microstructural properties of sea ice tend to be anisotropic, permittivity is ideally modelled as a tensor. 
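A sketch of the per-layer quantities described above, written under standard assumptions: n equals the square root of the relative permittivity for a non-magnetic medium, Snell's law gives the transmission angle, the Fresnel power reflectance is shown for perpendicular polarization only, and the Beer-law power attenuation coefficient is taken as 4·pi·frequency·Im(n)/c. The article's own stripped formulas may use different conventions, and the example permittivity, thickness, and frequency are illustrative values.

import numpy as np

C = 299792458.0  # speed of light in vacuum, m/s

def refractive_index(eps_rel):
    # non-magnetic medium: complex n is the square root of the relative permittivity
    return np.sqrt(complex(eps_rel))

def snell_angle(n1, n2, theta1):
    # transmission angle from Snell's law (real-part approximation for lossy media)
    return np.arcsin(np.clip(n1.real * np.sin(theta1) / n2.real, -1.0, 1.0))

def fresnel_reflectance_perp(n1, n2, theta1, theta2):
    # power reflection coefficient at an interface, perpendicular polarization
    r = (n1 * np.cos(theta1) - n2 * np.cos(theta2)) / (n1 * np.cos(theta1) + n2 * np.cos(theta2))
    return abs(r) ** 2

def layer_loss(eps_rel, thickness_m, freq_hz, theta):
    # fraction of intensity surviving one slanted pass through an absorbing layer (Beer's law)
    n = refractive_index(eps_rel)
    alpha = 4.0 * np.pi * freq_hz * n.imag / C
    return np.exp(-alpha * thickness_m / np.cos(theta))

n_air, n_ice = refractive_index(1.0), refractive_index(3.5 + 0.05j)
theta_ice = snell_angle(n_air, n_ice, np.deg2rad(50.0))
print(fresnel_reflectance_perp(n_air, n_ice, np.deg2rad(50.0), theta_ice))
print(layer_loss(3.5 + 0.05j, 0.5, 6.9e9, theta_ice))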
The anisotropy of the ice microstructure will also affect the signal in the higher Stokes components, which are relevant for polarimetric radiometers such as WINDSAT. Both a sloping ice surface (as in the case of ridging—see polarization mixing) and scattering, especially from non-symmetric scatterers, will cause a transfer of intensity between the different Stokes components—see vector radiative transfer. See also Arctic sea ice decline Metamaterial Sea ice growth processes Sea ice concentration Sea ice thickness References Sea ice Remote sensing Radiometry Electromagnetic radiation Climate modeling
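As a final illustration of the polarization mixing mentioned above, a rotation of the polarization reference frame (for example over a sloping ridge facet) transfers intensity between the Q and U components of a Stokes vector. The sketch below applies the standard rotation matrix for an (I, Q, U, V) Stokes vector; the numbers are arbitrary placeholders and the function is not part of any particular emissivity model discussed in the text.

```python
import numpy as np

def rotate_stokes(stokes, psi_rad):
    """Rotate the polarization reference frame of a Stokes vector (I, Q, U, V)
    by angle psi. Only Q and U mix; I and V are unchanged."""
    c, s = np.cos(2.0 * psi_rad), np.sin(2.0 * psi_rad)
    rotation = np.array([[1, 0, 0, 0],
                         [0, c, s, 0],
                         [0, -s, c, 0],
                         [0, 0, 0, 1]], dtype=float)
    return rotation @ np.asarray(stokes, dtype=float)

if __name__ == "__main__":
    # Purely illustrative numbers: partially polarized emission with no initial U or V.
    stokes = [200.0, 30.0, 0.0, 0.0]
    for slope_deg in (0, 10, 20, 30):
        rotated = rotate_stokes(stokes, np.radians(slope_deg))
        print(slope_deg, np.round(rotated, 2))
```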
Sea ice emissivity modelling
Physics,Engineering
1,478
2,924,960
https://en.wikipedia.org/wiki/Odious%20number
In number theory, an odious number is a positive integer that has an odd number of 1s in its binary expansion. Nonnegative integers that are not odious are called evil numbers. In computer science, an odious number is said to have odd parity. Examples The first odious numbers are: 1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28, 31, ... Properties If denotes the th odious number (with ), then for all , . Every positive integer has an odious multiple that is at most . The numbers for which this bound is tight are exactly the Mersenne numbers with even exponents, the numbers of the form 2^(2k) − 1, such as 3, 15, 63, etc. For these numbers, the smallest odious multiple is exactly . Related sequences The odious numbers give the positions of the nonzero values in the Thue–Morse sequence. Every power of two is odious, because its binary expansion has only one nonzero bit. Except for 3, every Mersenne prime is odious, because its binary expansion consists of an odd prime number of consecutive nonzero bits. Non-negative integers that are not odious are called evil numbers. The partition of the non-negative integers into the odious and evil numbers is the unique partition of these numbers into two sets that have equal multisets of pairwise sums. References External links Integer sequences
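A minimal computational sketch of the defining parity test, and of the Thue–Morse connection noted above, is given below; the helper functions are illustrative conveniences, not standard library routines.

```python
def is_odious(n: int) -> bool:
    """A positive integer is odious if its binary expansion has an odd number of 1s."""
    return bin(n).count("1") % 2 == 1

def odious_numbers(limit: int):
    """All odious numbers up to and including limit."""
    return [n for n in range(1, limit + 1) if is_odious(n)]

def thue_morse(n: int) -> int:
    """nth term of the Thue-Morse sequence: the parity of the number of 1-bits of n."""
    return bin(n).count("1") % 2

if __name__ == "__main__":
    print(odious_numbers(20))   # [1, 2, 4, 7, 8, 11, 13, 14, 16, 19]
    # Odious numbers are exactly the positions of the 1s in the Thue-Morse sequence.
    positions = [n for n in range(1, 21) if thue_morse(n) == 1]
    print(positions == odious_numbers(20))   # True
```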
Odious number
Mathematics
272
7,620,308
https://en.wikipedia.org/wiki/Elementary%20reaction
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may in fact be a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s). At constant temperature, the rate of such a reaction is proportional to the concentration of the species A: rate = k[A]. In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s). The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B: rate = k[A][B]. The rate expression for an elementary bimolecular reaction is sometimes referred to as the law of mass action, as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory, the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis–Menten approximations. Notes Chemical kinetics Physical chemistry
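As an illustration of the two rate laws just quoted, the sketch below simply evaluates rate = k[A] and rate = k[A][B] and steps a bimolecular reaction forward with crude Euler integration; the rate constant and concentrations are arbitrary placeholder values, not data from any particular reaction.

```python
def unimolecular_rate(k, conc_a):
    """Rate of an elementary unimolecular reaction A -> products: rate = k[A]."""
    return k * conc_a

def bimolecular_rate(k, conc_a, conc_b):
    """Rate of an elementary bimolecular reaction A + B -> products: rate = k[A][B]."""
    return k * conc_a * conc_b

def integrate_bimolecular(k, conc_a, conc_b, dt, steps):
    """Crude forward-Euler integration of A + B -> products at constant temperature."""
    history = [(0.0, conc_a, conc_b)]
    for i in range(1, steps + 1):
        rate = bimolecular_rate(k, conc_a, conc_b)
        conc_a = max(conc_a - rate * dt, 0.0)
        conc_b = max(conc_b - rate * dt, 0.0)
        history.append((i * dt, conc_a, conc_b))
    return history

if __name__ == "__main__":
    # Placeholder values: k in L/(mol*s), concentrations in mol/L, time step in s.
    for t, a, b in integrate_bimolecular(k=0.5, conc_a=1.0, conc_b=0.8, dt=0.5, steps=4):
        print(f"t={t:4.1f} s  [A]={a:.3f}  [B]={b:.3f}")
```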
Elementary reaction
Physics,Chemistry
392
13,747,216
https://en.wikipedia.org/wiki/Showman%27s%20road%20locomotive
A showman's road locomotive or showman's engine is a steam-powered road-going 'locomotive' designed to provide power and transport for a travelling funfair or circus. Similar to other road-going traction engines, showman's engines were normally distinguished by the addition of a full-length canopy, a dynamo mounted in front of the chimney, and brightly coloured paintwork with ornate decorations. The dynamo was used to generate electricity to illuminate and power various fairground rides. Although originally the ride's motion was powered by an internal steam engine, some later rides were driven direct from the showman's engine via a belt drive. Showman's road locomotives were built in varying sizes, from relatively small 5, 6 and 7 NHP engines, right up to large 8 or 10 NHP engines. Probably the most popular design was the Burrell 8NHP single-crank compound design. The far greater distances involved meant they never caught on in the United States, where a combination of trains and horses was preferred. History A Bray engine featured as part of a fairground parade but this doesn’t appear to have had any wider impact. Prior to the introduction of the showman’s engine some fairs used lightweight steam engines with generators mounted in them to provide power for lighting. These were marketed as Electric Light Engines. One of the earliest engines ordered directly from the manufacturers by a showman was a Burrell No.1451 Monarch, built in 1889. Before the advent of these showman's road locomotives all of the rides were drawn in transit by teams of horses. This was very labour-intensive, and substantially restricted the size of the rides. Production of showman's engines tailed off in the late-1920s, with the last Burrell 'Simplicity' being built by Garrett's of Leiston in 1930. The decline was due to the rise of diesel trucks initially helped by the availability of large numbers of former military vehicles at the end of World War 1. A 1927 act of parliament reduced road tax payments if the engine had rubber tyres which resulted in them becoming nearly universal. The last showman's engine to be built was Fowler's 'Supreme', one of the 'Super Lions'; it was completed for Mrs A. Deakin (who also bought 'Simplicity') in March 1934. Fowler attempted to continue with a similar line of diesel engines, but it was not a success and only a single example was built. Showmen’s engines saw the end of non-preservation use in the 1950s. 107 have survived into preservation. Characteristic features In general, showman's road locomotives share much the same design and technology as other road-going traction engines; however, certain features set the showman's engine apart: Ornate painting Most were painted in bright colours; the Burrell standard was 'Lake Crimson' with 'Deep yellow' wheels. George Tuby's engines were distinctively painted Great Eastern blue with yellow wheels and lining. Other embellishments included elaborate scroll paintings, this was especially popular around the start of the 20th century. Typically the sideboards had the name of either the proprietor or of the ride the engines were working with picked out in gold. Brass decoration Most engines have simple steel rods for roof supports, but showman's engines employ a more flamboyant 'twisted' design usually of polished brass. Brass stars and other decoration were often mounted on the motion covers and water tanks. 
Dynamo This was driven by a belt from the engine's flywheel and powered the lighting on the rides and stalls, and often the rides themselves. The power varied with the NHP of the engine, typically a smaller 'five horse' (5NHP) engine would have a small 110 V Dynamo, with the larger scenic engines having a large 110 V DC (Direct Current) dynamo and smaller 80 V 'exciter'. Full-length canopy Most road locomotives have some kind of roof or canopy fitted, covering the man stand (where the driver operates the controls) and the crankshaft area. The canopy of a showman's engine extends forward of the chimney to protect the dynamo from rain ingress. They are often fitted with a string of lights along the perimeter to enhance the effect at night, this being more common in preservation than before. Extension chimney An extra tube is carried for extending the chimney when stationary. This tube could be between long, depending on the size of the engine. The chimney tube is carried on purpose-made brackets on the roof. The extra length of chimney improves the draft through the fire, and reduces the risk of smoke and smuts being blown around nearby fair-goers. Crane Many of the scenic engines were built with, or at sometime had fitted, a large boom crane fitted to the tender. This was used for erecting the rides and moving items, such as gondola cars, from place to place. Disk flywheel Most road locomotives were fitted with disc flywheels, the idea of this being if they encountered horses en route, the horse would be less startled by the spinning disc. This theory was pretty much ruined when showmen began to decorate the flywheels, worsening the startling effect. Sub-types Showman's tractors Showman's tractors were basically miniaturized versions of their larger counterparts. Many were constructed following government legislation increasing weight limits at seven tons, so at between 5 and 7 tons, these engines were very popular. Again Burrell was a prolific manufacturer, as was William Foster, but the market leader was probably Garrett's of Leiston with a showman's engine based on their popular 4CD tractor design. "Special Scenic" engines Special Scenic engines were perhaps the ultimate development of the showman's road locomotive. Built almost solely by Burrell's of Thetford (Fowler built just one experimental engine) these were developed for the heavier rides that were emerging. In particular Scenic railways. Basically a 'Special Scenic' engine has a second dynamo located behind the chimney, known as an exciter. This extra dynamo helped to start the heavy new scenic rides. In order to drive the belt for the extra dynamo the flywheel was made wider. The first engine to be built new as a 'Special Scenic' was No. 3827 Victory. Supplied to Charles Thurston of Norwich in 1920, this engine is now preserved in the Thursford Collection in Norfolk. Showman's steam wagons Although less common than the tractors or larger locomotives, showmen sometimes converted the conventional steam wagons for showland use. Foden's were probably the most popular choice, Burrell's only ever sold one wagon specifically built for a showman: no. 3883 Electra was built in 1921 for Charles Summers of Norwich, it was later sold to an operator in Plymouth, but was later destroyed in World War II by the Nazi Blitz of the city. Manufacturers One of the most prolific manufacturers of these vehicles was Charles Burrell of Thetford Norfolk. Their later 8nhp engines were held in very high regard by their operators. 
Other major manufacturers included John Fowler & Co. of Leeds and William Foster & Co. of Lincoln. Other manufacturers made lesser ventures into the showman's engine market; these included Wallis and Steevens of Basingstoke, Foden's of Sandbach and Aveling and Porter of Rochester, Kent. Fowler B6 "Super Lion" In the early 1930s, when steam on the roads was in decline, Fowler's, under advice from Sidney Harrison of Burrell's, produced four of the most sophisticated showman's road locomotives ever constructed. Incorporating many features of the popular Burrell design, they were steam's finale. The first was No. 19782 The Lion, built in March 1932 for Anderton and Rowland of Bristol. In April of the same year No. 19783 King Carnival II was supplied to Frank Mcconville of West Hartlepool. The third engine, No. 19989 Onward, was built for Samuel Ingham of Cheshire. The last of the four, and indeed the last showman's engine ever constructed, was No. 20223 Supreme, built in March 1934 for Mrs A. Deakin. Three of these engines survived into preservation, with Supreme and King Carnival II on road haulage duties for their last days in commercial use. Onward was the unlucky engine, being cut up in 1946; however a faithful replica of Onward was completed in 2016. Conversions As well as genuine factory-built engines, a great number of engines were converted from conventional road locomotives to full showman's engines by both the showmen, and by private concerns, like Openshaw's. Most of the converted engines were ex-War Dept Fowlers and McLarens. Others were powerful 'contractor's' type road locomotives, many of these were a cheap and powerful alternative to factory models, and they were plentiful following World War I. This extended beyond British manufactures with one showman using a Kemna engine in the 1920s. As well as full conversions, showmen were also experts in adding extra dynamos, or fitting their own designs of crane and canopies. This led to a world of variation in the engines. Some 'home-made' designs were better than others, but many have survived. Due to the demand and prestige attached to showman's engines, in recent years a number of engines, mainly road rollers, have been converted by preservationists. This practice is causing concern amongst some enthusiasts, as in some cases unique examples of some models have been lost. In a few cases owners have converted engines back during restoration to their original form. Famous showmen owners Although hundreds of showland families operated showman's engines a few are worthy of note. Pat Collins and family operated well over 25 showman's engines, although predominantly Burrell's he also owned various Fowler's and other makes Charles Thurston and family operated a large number of engines from both Burrell's and Foster's. A number of their engines have been preserved. Foster's Admiral Beatty and Burrell's Britannia were owned by William Thurston. A unique set of four of Charles Thurston's engines have been preserved at the Thursford Collection in Norfolk. These are all Burrell's: King Edward VII of 1905, Victory of 1920, Unity and Alexandra. George Thomas Tuby operated a fleet of seven Burrell showman's engines, most of which carried names according to the position of Tuby in the local government. These included Councillor, Alderman, Mayor and surviving Ex-Mayor Preservation The last showman's engine in commercial showland use was in 1958, before this engines were being sold for scrap for next-to-nothing. 
George Cushing, founder of the Thursford Collection, bought Victory, Alexandra and Unity for around £40 each. (For comparison, a similar engine, No. 3865 No. 1, was sold at auction in 2003 for £320,000.) Towards the end of the 1930s engines were simply becoming out of date. With the end of the Second World War came hundreds of cheap and powerful ex-Army lorries, which replaced the showman's engines and made them obsolete. Although many of these engines were scrapped, a good number of them have survived into preservation. Many appear at rallies all over the UK; others are in museums such as the Thursford or Hollycombe collections. See also Carousel Steam fair References "Showmans Road Locomotives" by National Traction Engine Club Ltd. (1981 edition) External links Showman's locomotives (The Fairground Heritage Trust) Steam road vehicles Road transport Tractors
Showman's road locomotive
Engineering
2,389
3,758,628
https://en.wikipedia.org/wiki/Shrub%E2%80%93steppe
Shrub-steppe is a type of low-rainfall natural grassland. While arid, shrub-steppes have sufficient moisture to support a cover of perennial grasses or shrubs, a feature which distinguishes them from deserts. The primary ecological processes historically at work in shrub-steppe ecosystems are drought and fire. Shrub-steppe plant species have developed particular adaptations to low annual precipitation and summer drought conditions. Plant adaptations to different soil moisture regimes influence their distribution. A frequent fire regime in the shrub-steppe similarly adds to the patchwork pattern of shrub and grass that characterizes shrub-steppe ecosystems. North America The shrub-steppes of North America occur in the western United States and western Canada, in the rain shadow between the Cascades and Sierra Nevada on the west and the Rocky Mountains on the east. They extend from south-central British Columbia down into south central and south-eastern Washington, eastern Oregon, and eastern California, and across through Idaho, Nevada, and Utah into western Wyoming and Colorado, and down into northern and central New Mexico and northern Arizona. Growth is dominated primarily by low-lying shrubs, such as big sagebrush (Artemisia tridentata) and bitterbrush (Purshia tridentata), with too little rainfall to support the growth of forests, though some trees do occur. Other important plants are bunchgrasses such as Pseudoroegneria spicata, which have historically provided forage for livestock as well as wildlife, but are quickly being replaced by nonnative annual species like cheatgrass (Bromus tectorum), tumble mustard (Sisymbrium altissimum), and Russian thistle (Salsola kali). There is also a suite of animals that call the shrub-steppe home, including sage grouse, pygmy rabbit, Western rattlesnake, and pronghorn. Historically, much of the shrub-steppe in Washington state was referred to as scabland because of the deep channels cut into pure basalt rock by cataclysmic floods more than 10,000 years ago. Major threats to the ecosystem include overgrazing, fires, invasion by nonnative species, development (since much of it is at lower elevations), conversion to cropland, and energy development. Less than 50% of the state of Washington's historic shrub-steppe remains; according to some estimates, only 12 to 15% remains. Shrub-steppe ecoregions of North America include: Great Basin shrub steppe in eastern California, central Nevada, western Utah, and southeastern Idaho. Snake–Columbia shrub steppe in south-central Washington state, eastern Oregon, northeastern California, northern Nevada, and southern Idaho. Wyoming Basin shrub steppe in central Wyoming, reaching into south-central Montana, northeastern Utah, southeastern Idaho, and northwestern Colorado. Okanagan shrub steppe in the Okanagan Valley in south-central British Columbia, and the southern Similkameen Valley in south-central British Columbia and north-central Washington state. The South Central Rockies forests, Montana valley and foothill grasslands, and Blue Mountains forests also contain shrub-steppes. See also Arid Lands Ecology Reserve (in Washington state in the US) Artemisia tridentata Deserts and xeric shrublands Rangeland Steppe Temperate grasslands, savannas, and shrublands References External links U.S. Government article: "Shrub-steppe" Bioimages.vanderbilt.edu: Index to Deserts & Xeric Shrublands Washington Dept. 
of Fish and Game- Species & Ecosystem Science, Shrubsteppe Ecology Temperate grasslands, savannas, and shrublands Ecoregions Grasslands
Shrub–steppe
Biology
724
51,910,723
https://en.wikipedia.org/wiki/Conservation%20banking
Conservation banking is an environmental market-based method designed to offset adverse effects, generally to species that are of concern, threatened, or endangered and protected under the United States Endangered Species Act (ESA), through the creation of conservation banks. Conservation banking can be viewed as a method of mitigation that allows permitting agencies to target various natural resources typically of value or concern, and it is generally contemplated as a protection technique to be implemented before the valued resource or species needs to be mitigated. The ESA prohibits the "taking" of fish and wildlife species which are officially listed as endangered or threatened in their populations. However, under section 7(a)(2) for Federal agencies, and under section 10(a) for private parties, a take may be permissible for unavoidable impacts if there are conservation mitigation measures for the affected species or habitat. Purchasing "credits" through a conservation bank is one such mitigation measure to remedy the loss. Conservation banks are permanently protected parcels of land with inherent abilities to harbor, preserve, and manage the survival of endangered and threatened species, along with their critical habitat. This allows the acquisition and protection of the parcels of land prior to future loss or disturbance to valued resources. Banks are often considered the more ecologically efficient option for mitigation because they generally incorporate larger tracts of land, which enables higher quality habitat and range connectivity, thereby creating a stronger chance of survival and sustainability for the species. Rather than have developments offset their effects by conserving small areas of habitat, conservation banking allows pooling multiple mitigation resources into a larger reserve. The intention of conservation banking is to create no net loss of the intended resources. Conservation banking may be used by various entities as a method of species and habitat protection, as long as it is approved by the permitting agency. Background Mitigation Mitigation is the preservation of natural resources in order to offset unavoidable impacts to similar resources. Conservation banking mitigation is specific to species and their habitat which are protected under the Endangered Species Act. There are two other forms of mitigation besides conservation banking: in-lieu fee and permittee-responsible programs. In-lieu fee programs allow a permittee to contribute money into a United States Fish and Wildlife Service (USFWS) approved fund in lieu of implementing their own mitigation. To date, in-lieu fee programs have only applied to wetlands. The sponsor of the fund then implements an appropriate mitigation project when enough money has been collected through the fund. In these situations, the fund sponsor is fully liable for the success of the mitigation. The second alternative form of mitigation is the permittee-responsible program, in which the permittee takes on implementation and assumes liability for their own mitigation project to offset effects. Benefits There is generally greater security associated with a conservation bank. This is due to the stringent performance standards imposed on bank owners by the USFWS, which also requires them to have adequate funding into perpetuity and to have long-term management plans. Purchasing of credits by the easement holder from the landowner creates a legal contract, known as a conservation easement.
The conservation easement binds the landowner to uphold the requirements of the conservation bank. Another advantage is that purchasing credits from a conservation bank ensures that species and/or habitat protection is already in place before the impact occurs. In addition, the shift of liability for the success of habitat and species mitigation to the conservation bank owner is a benefit to the developer or permittee. History Conservation banking is derived from wetland mitigation banks that were created in the early 1990s. Through Federal agency efforts, mitigation banks were created to focus on preserving wetlands, streams, and other aquatic habitats or resources, and they offered compensatory mitigation credits to offset unavoidable effects on those habitats or resources under Section 404 of the Clean Water Act. After the "Federal Guidance for the Establishment, Use, and Operation of Mitigation Banks" (60 FR 58605–58614) was published in 1995, California contemporaneously led efforts to create conservation banks so as to further increase regional conservation, as growing development threatened species and their habitat. Approval of conservation banks for various federally listed species by the USFWS, in conjunction with other Federal agencies, began throughout the early 1990s. Collectively, the nation's 130 conservation banks amount to over 160,000 acres of permanently protected land. Endangered Species Act Connection Under the Endangered Species Act of 1973, endangered or threatened species and their respective critical habitat and geographical range are protected for conservation, with efforts made to restore the species and habitat back to well-being. Under ESA Section 10(b), takings are permitted only if the taking is incidental to an otherwise lawful activity, and effects must be minimized and mitigated to the maximum extent practicable. For projects and development that will damage an at-risk species' habitat, such as reducing, modifying, or degrading its habitat, the permittees are required to mitigate the impact. Conservation banks act as a mechanism for compensation when a species or habitat is affected during development by providing credits that can be purchased by permittees to offset their negative impact. Function Conservation banking is a market program that increases the bank owner or landowner's stewardship and incentive for permanently protecting their land by providing them a set number of habitat or species credits that the respective owners are able to sell. In order to satisfy the requirements of species or habitat conservation measures, these conservation credits can be sold to projects or developments that result in unavoidable and adverse impacts to species. Essentially, conservation banks offset the cost of mitigating the loss or damage to a species and/or their habitat. Traditionally, preservation of some habitat area of an at-risk species was required of a project permittee during development. This could result in habitat that became isolated, small, with reduced connectivity or functionality, and more costly to maintain. Comparatively, conservation banks are more cost effective as they are able to maintain larger blocks of land with greater functionality for a species, such as allowing habitat connectivity. For purchasers, this is also time-effective, as it allows them to forgo their responsibility of handling on- or off-site mitigation measures that can run into administrative delays due to the USFWS review and approval process.
After the public or private party purchases credits, a bank transfer occurs between the project party and the banker. The banker is then perpetually bound to conserve and manage the conservation bank. Creation Process In California, a multi-agency process oversees the review and approval of conservation banks by the Inter-agency Review Team, which can be composed of all or some of the following agencies: typically the U.S. Fish and Wildlife Service, the National Oceanic and Atmospheric Administration's National Marine Fisheries Service, and the California Department of Fish and Wildlife. After review and approval, the Inter-agency Review Team and the conservation bank sponsor sign a legally binding conservation bank enabling instrument, which details the responsibilities of each party and includes a management plan, an endowment funding agreement, and other documents detailing the operations of the conservation bank. Unless the lands have been previously listed or designated for other conservation purposes, private, Tribal, State, and local government lands are all considered eligible to be conservation banks. Agricultural lands, such as those used for farming, ranching, timber operations, croplands or related uses, may be suitable for the establishment of a conservation bank if the special-status species habitat on-site is intact or restored; however, agricultural and forestry activities may need to be modified, reduced, or stopped entirely if necessary to protect the conservation values of the land. Management Plan Establishment of a bank requires a management plan that outlines necessary management activities and endowment funds. The intention of the plan is to describe the long-term management activities of the conservation bank. The plan describes restricted and allowed activities and provides guidance on all monitoring and reporting requirements. The minimal requirements of a management plan include: a full geophysical description of the site, including the area, geographical setting, neighboring land uses, and any relevant cultural or historic features located on the site; identification of the biological resources within the bank, such as a vegetation map; a description of the restricted and permitted activities that can occur on-site; the objectives and biological goals of the conservation bank; a full description of all management activities needed to meet those objectives and goals, such as necessary ecological restoration of its habitats, incorporation of public use and access, and budgeting requirements; the necessary monitoring schedules, including special management plan activities; and, if necessary, an outline of how future management will occur, such as decision trees or similar tools. Creation of the bank must also include plans for remedial action in case bank owners are unable to fulfill their agreements. Remedial actions can include forfeiting the property to a third party to uphold the requirements of the bank or posting a bond valued equivalent to the property. Typically, acts of nature, including earthquakes, floods, or fires, are excluded from the bank owner's liability. Credits Credits are essentially the currency of the conservation value associated with the habitat and/or species which may be affected by development; they represent the ecological value of a species or habitat. The permitting agency is responsible for determining the credits available at any given bank, based on the number of species, and the habitat characteristics for those species, on the land owned by the bank.
They then allocate the appropriate number of credits to the bank owner, who can then establish the price through negotiation with agencies. Pricing of conservation credits is variable, based on the type of species impacted through a developmental take. Additionally, the market forces of supply and demand largely dictate the price of any given credit, and the value may fluctuate based on many other economic factors such as land value, competition, and speculation about development in a certain habitat area. Current data suggest that conservation credits range in price from a low of $1,500 for mitigation of a gopher tortoise to as much as $325,000 for vernal pool preservation. Locations currently used There are currently fourteen states and Saipan, the largest of the Northern Mariana Islands, with approved USFWS conservation banks. These states include Arizona, California, Colorado, Florida, Kansas, Maryland, Mississippi, Oklahoma, Oregon, South Carolina, Texas, Utah, Washington, and Wyoming. Nationally, some species with the largest respective habitat coverage include: American burying beetle, California tiger salamander, California red-legged frog, callippe silverspot butterfly, Florida panther, golden-cheeked warbler, lesser prairie chicken, Utah prairie dog, valley elderberry longhorn beetle, vernal pool fairy shrimp, and vernal pool tadpole shrimp. In 1995, California was the first state to create a conservation bank, and it continues to be the national leader in number of conservation banks, with over 30 established banks. Species benefited in these banks include the burrowing owl, coastal sage scrub, delta smelt, California giant garter snake, longfin smelt, California salmonids, San Bernardino kangaroo rat, San Joaquin kit fox, Santa Ana River woollystar, Swainson's hawk, and valley elderberry longhorn beetle. Examples of Californian habitats include ephemeral drainages, riparian zones, vernal pools, and wetlands. Future outlook Two recent policy and legislative developments will likely affect the future of conservation banking. A draft Endangered Species Act Compensatory Mitigation Policy was proposed by the U.S. Fish and Wildlife Service in September 2016 with the intention of creating a mechanism for the US Department of the Interior to comply with Executive order (80 FR 68743), which directs Federal agencies that manage natural resources "to avoid and then minimize harmful effects to land, water, wildlife, and other ecological resources (natural resources) caused by land- or water-disturbing activities…" This policy would provide guidance to the USFWS about planning and implementation of compensatory mitigation strategies. If adopted, the policy would require a shift from project-by-project compensatory mitigation approaches to broader, landscape-oriented approaches such as conservation banking. In addition, the California legislature passed Assembly Bill 2087, which will enable large conservation goals to be achieved through the creation of advance mitigation credits associated with FWS Regional Conservation Investment Strategies (RCIS). This is important for the future of conservation banking because the bill allows for consideration of mitigation for impacts to wildlife and habitat in conservation strategy planning and decision making. See also Endangered Species Act US Fish and Wildlife Service Mitigation banking Environmental mitigation Biodiversity banking Biodiversity offsetting References Banking Environmental law Environmental mitigation
Conservation banking
Chemistry,Engineering
2,594
22,734,942
https://en.wikipedia.org/wiki/History%20of%20science%20and%20technology%20in%20Africa
Africa has the world's oldest record of human technological achievement: the oldest surviving stone tools in the world have been found in eastern Africa, and later evidence for tool production by humans' hominin ancestors has been found across West, Central, Eastern and Southern Africa. The history of science and technology in Africa since then has, however, received relatively little attention compared to other regions of the world, despite notable African developments in mathematics, metallurgy, architecture, and other fields. Early humans The Great Rift Valley of Africa provides critical evidence for the evolution of early hominins. The earliest tools in the world can be found there as well: An unidentified hominin, possibly Australopithecus afarensis or Kenyanthropus platyops, created stone tools dating to 3.3 million years ago at Lomekwi in the Turkana Basin, eastern Africa. Homo habilis, residing in eastern Africa, developed another early toolmaking industry, the Oldowan, around 2.3 million years ago. Homo erectus developed the Acheulean stone tool industry, specifically hand-axes, at 1.5 million years ago. This tool industry spread to the Middle East and Europe around 800,000 to 600,000 years ago. Homo erectus also begins using fire. Homo sapiens, or modern humans, created bone tools and backed blades around 90,000 to 60,000 years ago, in southern and eastern Africa. The use of bone tools and backed blades eventually became characteristic of Later Stone Age tool industries. The first appearance of abstract art is during the Middle Stone Age, however. The oldest abstract art in the world is a shell necklace dated to 82,000 years ago from the Cave of Pigeons in Taforalt, eastern Morocco. The second oldest abstract art and the oldest rock art is found at Blombos Cave in South Africa, dated to 77,000 years ago. There are evidences that stone age humans around 100,000 years ago had an elementary knowledge of chemistry in Southern Africa, and that they used a specific recipe to create a liquefied ochre-rich mixture. According to Henshilwood, "This isn't just a chance mixture, it is early chemistry. It suggests conceptual and probably cognitive abilities which are the equivalent of modern humans". Education Northern Africa and the Nile Valley In 295 BCE, the Library of Alexandria was founded by Greeks in Egypt. It was considered the largest library in the classical world. Al-Azhar University, founded in 970~972 as a madrasa, is the chief centre of Arabic literature and Sunni Islamic learning in the world. The oldest degree-granting university in Egypt after the Cairo University, its establishment date may be considered 1961 when non-religious subjects were added to its curriculum. West Africa and the Sahel Three madrasas or Islamic schools existed in Mali during the country's "golden age" from the 14th to the 16th centuries: Sankore Madrasah, Sidi Yahya Mosque, and Djinguereber Mosque, all in Timbuktu. The schools consisted of independent scholars who gave instruction to individuals or small groups of students, with special lectures sometimes given in the mosques. There was no overall school administration or prescribed course of study, and libraries consisted of individual private collections of manuscripts. Scholars were drawn from the city's wealthiest families, and instruction was explicitly religious. The main subjects studied by advanced scholars and students were Qur'anic studies, Arabic language, Muhammad, theology, mysticism, and law. 
In the 16th century, Timbuktu also housed as many as 150–180 maktabs (Qur'anic schools), where basic reading and recitation of the Qur'an were taught. These schools had an estimated peak enrollment of 4,000–5,000 pupils, including pupils from the surrounding areas. Within West Africa Timbuktu was a major center of book copying, religious groups, the Islamic sciences, and arts. Books were imported from North Africa and paper was imported from Europe. Books/manuscripts were written primarily in Arabic. The most famous scholar from Timbuktu was Ahmad Baba (1556–1627), who wrote primarily about Islamic law. Astronomy Three types of calendars can be found in Africa: lunar, solar, and stellar. Most African calendars are a combination of the three. African calendars include the Akan calendar, Egyptian calendar, Berber calendar, Ethiopian calendar, Igbo calendar, Yoruba calendar, Shona calendar, Somali calendar, Swahili calendar, Xhosa calendar, Borana calendar, and Luba calendar and Ankole calendar. Northern Africa and the Nile Valley A stone circle located in the Nabta Playa basin may be one of the world's oldest known archeoastronomical devices. Built by the ancient Nubians about 4800 BCE, the device may have approximately marked the summer solstice. Since the first modern measurements of the precise cardinal orientations of the Egyptian pyramids were taken by Flinders Petrie, various astronomical methods have been proposed as to how these orientations were originally established. Ancient Egyptians may have observed, for example, the positions of two stars in the Plough / Big Dipper which was known to Egyptians as the thigh. It is thought that a vertical alignment between these two stars checked with a plumb bob was used to ascertain where North lay. The deviations from true North using this model reflect the accepted dates of construction of the pyramids. Egyptians were the first to develop a 365-day, 12 month calendar. It was a stellar calendar, created by observing the stars. During the 12th century, the astrolabic quadrant was invented in Egypt. West Africa and the Sahel Based on the translation of 14 Timbuktu manuscripts, the following points can be made about astronomical knowledge in Timbuktu during the 14th–16th centuries: They made use of the Julian Calendar. Generally speaking, they had a geocentric view of the Solar System. Some manuscripts included diagrams of planets and orbits along with mathematical calculations. They were able to accurately orient prayer towards Mecca. They recorded astronomical events, including a meteor shower in August 1583. At this time, Mali also had a number of astronomers including the emperor and scientist Askia Mohammad I. Eastern Africa Megalithic "pillar sites", known as "namoratunga", date to as early as 5,000 years ago and can be found surrounding Lake Turkana in Kenya. Although somewhat controversial today, initial interpretations suggested that they were used by Cushitic speaking people as an alignment with star systems tuned to a lunar calendar of 354 days. Southern Africa Today, South Africa has cultivated a burgeoning astronomy community. It hosts the Southern African Large Telescope, the largest optical telescope in the southern hemisphere. South Africa is currently building the Karoo Array Telescope as a pathfinder for the $20 billion Square Kilometer Array project. South Africa is a finalist, with Australia, to be the host of the SKA. 
Due to archeological findings it has been speculated that the kingdoms of Zimbabwe such as Great Zimbabwe and mapungubwe used astronomy. Monolith stones with special engravings thought to be used to track Venus were found. They were compared to Mayan calendars and were found to be more accurate than them Mathematics According to Paulus Gerdes, the development of geometrical thinking started early in African history, as early humans learned to "geometricize" in the context of their labor activities. For example, the hunter-gatherers of the Kalahari Desert in southern Africa learned to track animals, learned to recognize and interpret spoors. They got to know that the shape of the spoor provided information on what animal passed by, how long ago, if it was hungry or not, etc. Such developments propelled Louis Liebenberg to posit that the critical attitude of contemporary Kalahari Desert trackers and the role of critical discussion in tracking suggest that the rationalist tradition of science may well have been practiced by hunter-gatherers long before the advent of the Greek philosophic schools. Rock paintings and engravings from all over Africa have been reported. Some of these artifacts date back to several hundreds of years, and others several thousands. They often have geometric structures. Other archaeological finds that indicate geometrical explorations by African hunters, farmers and artisans are stone and metal tools and ceramics. Particularly exceptional are archaeological finds of perishable materials such as baskets, textiles, and wooden objects. The finds from the Tellem are extremely important, as they provide ideas of earlier geometrical explorations. Clear evidence of the exploration of forms, shapes and symmetries exists in the archaeological finds from caves in the Cliff of Bandiagara in the center of Mali. The earliest buildings in the caves are cylindrical granaries made of mud coils that date from the 3rd to the 2nd century BCE. Central and Southern Africa The Lebombo bone from the mountains between Swaziland and South Africa may be the oldest known mathematical artifact. It dates from 35,000 BCE and consists of 29 distinct notches that were deliberately cut into a baboon's fibula. The Ishango bone is a bone tool from the Democratic Republic of Congo dated to the Upper Paleolithic era, about 18,000 to 20,000 BCE. It is also a baboon's fibula, with a sharp piece of quartz affixed to one end, perhaps for engraving or writing. It was first thought to be a tally stick, as it has a series of tally marks carved in three columns running the length of the tool, but some scientists have suggested that the groupings of notches indicate a mathematical understanding that goes beyond counting. Various functions for the bone have been proposed: it may have been a tool for multiplication, division, and simple mathematical calculation, a six-month lunar calendar, or it may have been made by a woman keeping track of her menstrual cycle. The Bushong people can distinguish graphs that have Eulerian paths and those that do not. They use such graphs for purposes including embroidery or political prestige. According to a European ethnologist in 1905, Bushong children were not only aware of the conditions which determine whether a given graph is traceable, but they also knew the procedure that permitted it to be drawn most expeditiously. There are various textbooks made by mathematicians using such culturally based graphs and designs to teach mathematics, such as those made by Paulus Gerdes. 
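The Eulerian-path criterion underlying such traceable designs (a connected graph can be drawn in one unbroken stroke exactly when it has zero or two vertices of odd degree) can be checked mechanically. The sketch below is a generic illustration; the example figures are simple test graphs, not specific Bushong or sona designs.

```python
from collections import defaultdict

def has_eulerian_path(edges):
    """Return True if the multigraph given as a list of (u, v) edges can be traced
    in one unbroken stroke (an Eulerian path): it must be connected (ignoring
    isolated vertices) and have zero or two vertices of odd degree."""
    degree = defaultdict(int)
    adjacency = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adjacency[u].add(v)
        adjacency[v].add(u)
    if not edges:
        return True
    # Connectivity check over the vertices that carry at least one edge.
    start = next(iter(adjacency))
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adjacency[node] - seen)
    if seen != set(adjacency):
        return False
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

if __name__ == "__main__":
    square_with_diagonals = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
    print(has_eulerian_path(square_with_diagonals))      # False: four odd-degree vertices
    envelope = square_with_diagonals + [(0, 4), (1, 4)]  # the "open envelope" figure
    print(has_eulerian_path(envelope))                   # True: exactly two odd-degree vertices
```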
According to ethnomathematician Claudia Zaslavsky, the "sona" drawing tradition of Angola also exhibits certain mathematical ideas. In 1982, Rebecca Walo Omana became the first female mathematics professor in the Democratic Republic of the Congo. Northern Africa and the Nile Valley By the predynastic Naqada period in Egypt, people had fully developed a numeral system. The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor and grain. Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, apply the formula for the volume of a frustum, and calculate the surface areas of triangles, circles and even hemispheres. They understood basic concepts of algebra and geometry, and could solve simple sets of simultaneous equations. Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, ancient Egyptian fractions had to be written as the sum of several unit fractions. For example, the fraction two-fifths was resolved into the sum of one-third + one-fifteenth; this was facilitated by standard tables of values. Some common fractions, however, were written with a special glyph; the equivalent of the modern two-thirds is shown on the right. Ancient Egyptian mathematicians had a grasp of the principles underlying the Pythagorean theorem, knowing, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by subtracting one-ninth from its diameter and squaring the result: Area ≈ [(8/9)D]² = (256/81)r² ≈ 3.16r², a reasonable approximation of the formula πr² (a short computational sketch of this rule is given below). The golden ratio seems to be reflected in many Egyptian constructions, including the pyramids, but its use may have been an unintended consequence of the ancient Egyptian practice of combining the use of knotted ropes with an intuitive sense of proportion and harmony. Based on engraved plans of Meroitic King Amanikhabali's pyramids, Nubians had a sophisticated understanding of mathematics and an appreciation of the harmonic ratio. The engraved plans are indicative of much to be revealed about Nubian mathematics. Metallurgy Most of Africa moved from the Stone Age to the Iron Age. The Iron Age and Bronze Age occurred simultaneously. North Africa and the Nile Valley imported iron technology from the Near East and followed the Near Eastern pattern of development from the Bronze Age to the Iron Age. Many Africanists accept an independent development of the use of iron south of the Sahara. Among archaeologists, it is a debatable issue. The earliest dating of iron outside of North Africa is 2500 BCE at Egaro, west of Termit, making it contemporary with iron smelting in the Middle East. The Egaro date is debated among archaeologists due to the method used to obtain it. The Termit date of 1500 BCE is widely accepted.
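Returning to the Egyptian arithmetic described earlier in this section, two of the techniques lend themselves to a short computational illustration: the circle-area rule (subtract one-ninth of the diameter and square the result) and the decomposition of a fraction into a sum of unit fractions. The greedy decomposition used below is a much later algorithm, included only as a convenient stand-in; the Egyptians themselves relied on tables of values.

```python
from fractions import Fraction
import math

def egyptian_circle_area(diameter):
    """Rhind-papyrus rule: area ~ ((8/9) * d)^2, i.e. (256/81) * r^2, about 3.16 * r^2."""
    return (8.0 * diameter / 9.0) ** 2

def greedy_unit_fractions(numerator, denominator):
    """Write numerator/denominator (< 1) as a sum of distinct unit fractions using the
    greedy method (a later algorithm, used here only for illustration)."""
    frac = Fraction(numerator, denominator)
    terms = []
    while frac > 0:
        unit = Fraction(1, math.ceil(Fraction(frac.denominator, frac.numerator)))
        terms.append(unit)
        frac -= unit
    return terms

if __name__ == "__main__":
    d = 9.0
    print(egyptian_circle_area(d), math.pi * (d / 2) ** 2)   # 64.0 versus about 63.6
    print(greedy_unit_fractions(2, 5))                       # [1/3, 1/15], as in the text
```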
Iron at the site of Lejja, Nigeria, has been radiocarbon dated to approximately 2000 BCE. Iron use, in smelting and forging for tools, appears in West Africa by 1200 BCE, making it one of the first places where the Iron Age began. Before the 19th century, African methods of extracting iron were employed in Brazil, until more advanced European methods were instituted. John K. Thornton concludes that African metalworkers were producing their goods at levels of productivity equal to or higher than those of their European counterparts. Archaeometallurgical scientific knowledge and technological development originated in numerous centers of Africa; the centers of origin were located in West Africa, Central Africa, and East Africa; consequently, as these origin centers are located within inner Africa, these archaeometallurgical developments are native African technologies. Iron metallurgical development occurred 2631 BCE – 2458 BCE at Lejja, in Nigeria, 2136 BCE – 1921 BCE at Obui, in the Central African Republic, 1895 BCE – 1370 BCE at Tchire Ouma 147, in Niger, and 1297 BCE – 1051 BCE at Dekpassanware, in Togo. West Africa Besides being masters in iron, Africans were masters in brass, copper, and bronze. Ife showed artistic mastery in its striking naturalistic statues of brass and copper, a lost-wax tradition beginning in the 11th–12th centuries. Ife was also a manufacturer of glass and glass beads. Benin later mastered a mix of brass and bronze during the 16th century, producing portraiture and reliefs in the metals. In West Africa, several centres of iron production using natural draft furnaces emerged from the early second millennium CE. Iron production in Banjeli and Bassar in Togo, for example, reached up to 80,000 cubic meters (which is more than the production at places such as Meroe); analyses indicate that fifteenth- and sixteenth-century CE slags from this area were just bloomery waste products, while preliminary metallographic analyses of objects indicate them to be made of low-carbon steels. In Burkina Faso, the Korsimoro district reached up to 169,900 cubic meters. In the Dogon region, the sub-region of Fiko produced about 300,000 cubic meters of slag. Brass barrel blunderbusses are said to have been produced in some states of the Gold Coast in the eighteenth and nineteenth centuries. Various accounts indicate that Asante blacksmiths were not only able to repair firearms, but that barrels, locks and stocks were on occasion remade. In the Aïr Mountains region of Niger, copper smelting was independently developed between 3000 and 2500 BCE. The undeveloped nature of the process indicates that it was not of foreign origin. Smelting in the region became mature around 1500 BCE. The Sahel Africa was a major supplier of gold in world trade during the Middle Ages. The Sahelian empires became powerful by controlling the Trans-Saharan trade routes. They provided two-thirds of the gold in Europe and North Africa. The Almoravid dinar and the Fatimid dinar were minted from gold from the Sahelian empires, as were the ducat of Genoa and Venice and the florin of Florence. When gold sources were depleted in the Sahel, the empires turned to trade with the Ashanti Empire. The Swahili traders in East Africa were major suppliers of gold to Asia in the Red Sea and Indian Ocean trade routes.
The trading port cities and city-states of the Swahili East African coast were among the first African cities to come into contact with European explorers and sailors during the European Age of Discovery. Many were documented and praised in the writings of the North African explorer Abu Muhammad ibn Battuta. Northern Africa and the Nile Valley Nubia was a major source of gold in the ancient world. Gold was a major source of Kushitic wealth and power. Gold was mined east of the Nile in Wadi Allaqi and Wadi Cabgaba. Around 500 BCE, Nubia, during the Meroitic phase, became a major manufacturer and exporter of iron. This was after the Kushites had been expelled from Egypt by the Assyrians, who used iron weapons. East Africa The Aksumites produced coins around 270 CE, under the rule of King Endubis. Aksumite coins were issued in gold, silver, and bronze. Since 500 BCE, people in Uganda had been producing high-grade carbon steels using preheated forced-draft furnaces, a technique achieved in Europe only with the Siemens process in the mid 19th century. Anthropologist Peter Schmidt discovered through oral tradition that the Haya in Tanzania have been forging steel for around 2,000 years. This discovery was made accidentally while Schmidt was learning about the history of the Haya via their oral tradition. He was led to a tree which was said to rest on the spot of an ancestral furnace used to forge steel. He later asked a group of elders, who were at that time the only ones to remember the practice (it had fallen into disuse, due in part to the abundance of steel flowing into the country from foreign sources), to take on the challenge of recreating the forges. In spite of their lack of practice, the elders were able to create a furnace using mud and grass which, when burnt, provided the carbon needed to transform the iron into steel. Later investigation of the area yielded 13 other furnaces similar in design to the recreation set up by the elders. These furnaces were carbon dated and found to be as old as 2,000 years, whereas steel of this caliber did not appear in Europe until several centuries later. Two types of iron furnaces were used in most of Africa: the trench dug below ground and the circular clay structure built above ground. Iron ores were crushed and placed in furnaces layered with the right proportion of hardwood. A flux such as lime, sometimes obtained from seashells, was added to aid in smelting. Bellows on the side were used to add oxygen, and clay pipes on the sides called tuyères were used to control the oxygen flow. Central Africa Two examples of European efforts to compete with African iron production highlight the degree of skill possessed by Kongo smiths. The first was a Portuguese effort to establish an iron foundry in Angola in the 1750s. The foundry was unsuccessful in transferring technology to Kongo blacksmiths; rather, "it concentrated smiths from across the colony in one area under one wage-labor system". Such methods were a tacit recognition of Kongo ironworking skill. The Portuguese foundry at Novas Oerias, which utilized European techniques, was unsuccessful, never becoming competitive with Angolan smiths. The iron produced by Kongo smiths was superior to European imports produced under European processes. There was no incentive to replace Kongo iron with European iron unless Kongo iron was unavailable. European iron of the period contained a high amount of sulfur and, when compared to the high-carbon steel produced by Kongo iron processes, was less durable, a "rotten" metal.
European iron was the second choice, whether the purchaser was from Asante, Yoruba or Kongo. The key to the gradual acceptance of European iron was ecological disaster. Gaucher (1981) believes that deforestation led to increased reliance on pre-forged European iron bars that could be carbonized in furnaces using less charcoal than smelting iron from ore. In a similar development elsewhere in the world, English iron production was crippled by the depletion of English forests for charcoal for English forges. In 1750 the Iron Act would force their American colonies to export their iron exclusively to England. This was amongst other well known reasons one of the grievances the colonists had against the English crown and a contributory factor the American Revolution". Another series of wars in Kongo however would ensure that the technical expertise to support English demand was in existence in America, albeit as slave labor. When African techniques could no longer create high quality carbon steel the lower quality European iron became a necessity. Lower quality iron also became more acceptable as the need to supply large numbers of warriors (numbering in the hundreds of thousands) with weapons quickly pushed out considerations of artisan-quality steel versus "rotten iron" imports. War broke out in the Kingdom of Kongo and after 1665; much of the stability and access to iron ore and charcoal necessary for smiths to ply their craft was disrupted. Many Kongo people were sold as slaves and their skills became invaluable in New World settings as blacksmiths, charcoal makers and ironworkers for their colonial masters. Slaves were relied upon to produce vital components for the forges and as their skills in iron working became evident, their importance to colonial economies grew. At Oboui they excavated an undated iron forge yielding eight consistent radiocarbon dates of 2000 BCE. This would make Oboui the oldest iron-working site in the world, and more than a thousand years older than any other dated evidence of iron in Central Africa. Medicine Traditional African plants such as Ouabain, capsicum, yohimbine, ginger, white squill, african kino, African copaiba, African myrrh, Buchu, physostigmine, and Kola nut have been adopted and continue to be used by Western doctors. West Africa and the Sahel The knowledge of inoculating oneself against smallpox seems to have been known to West Africans, more specifically the Akan. A slave named Onesimus explained the inoculation procedure to Cotton Mather during the 18th century; he reported to have gotten the knowledge from Africa. Bonesetting is practiced by many groups of West Africa (the Akan, Mano, and Yoruba, to name a few). In Djenné the mosquito was isolated to be the cause of malaria, and the removal of cataracts was a common surgical procedure (as in many other parts of Africa). The dangers of tobacco smoking were known to African Muslim scholars, based on Timbuktu manuscripts. Palm oil was important in health and hygiene. A German visiting in 1603-1604 reported that people washed themselves three times a day, "after which they anoint themselves with tallow or with palm oil, which is an excellent medicine". Palm oil protected the skin and hair, and it had cosmetic value in many cultures. Women (and sometimes men) spread palm oil on their skin to "shine the whole day". Palm oil was also a useful way of applying decorative color and perfumes, like powdered camwood. 
Many Africans considered palm oil to be a medicine in its own right, and it served as a medium for delivering other curative substances. Historical sources recount healers mixing herbs with palm oil to treat skin conditions or ease headaches. A seventeenth-century Portuguese source describes palm oil as a "popular cure" in Angola, while the "leaves, roots, bark and fruit" of the oil palm were used to treat conditions ranging from arthritis to snake and insect bites. Foreign visitors praised the quality of soap made from palm and palm kernel oils, mixed with ashes from palm fronds. One writer attested that "the Negroes Cloathes are very clean" as a result. The roasting method often used to extract kernel oil produced the characteristic color of the famous "black soap" made by West African artisans. Palm and palm kernel soaps were traded extensively in regional markets. Admiring West African medicinal prowess, Johannes Rask concluded that "Africans are much better suited than we are, as regards their health care". During the Atlantic slave trade, European sailors reported how African slaves would be able to recover from outbreaks of diseases like smallpox within the ships by using their traditional medicine, which included palm oil. Europeans would use these remedies themselves to help against dysentery. The bark of yams was used to treat worm infestations. Northern Africa and the Nile Valley Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, like Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the Per Ankh or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads and swabs soaked with honey to prevent infection, while opium was used to relieve pain. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until he died. Around 800 CE, the first psychiatric hospital and insane asylum in Egypt was built by Muslim physicians in Cairo. In 1285, the largest hospital of the Middle Ages and pre-modern era was built in Cairo, Egypt, by Sultan Qalaun al-Mansur. Treatment was given for free to patients of all backgrounds, regardless of gender, ethnicity or income. Bone remains dating between 350 CE and 550 CE indicate that tetracycline was being consumed by Nubians; the antibiotic only came into wide commercial use in the mid-20th century. The theory is that earthen jars containing grain used for making beer harbored the bacterium Streptomyces, which produced tetracycline. Although Nubians were not aware of tetracycline, they could have noticed that people fared better by drinking beer. Charlie Bamforth, a professor of biochemistry and brewing science at the University of California, Davis, said, "They must have consumed it because it was rather tastier than the grain from which it was derived.
They would have noticed people fared better by consuming this product than they were just consuming the grain itself." East Africa European travelers in the Great Lakes region of Africa during the 19th century reported cases of surgery in the kingdom of Bunyoro-Kitara. Medical historians such as Jack Davies, writing in 1959, argued that Bunyoro's traditional healers were perhaps the most highly skilled in precolonial sub-Saharan Africa, possessing a remarkable level of medical knowledge. One observer noted a "surgical skill which had reached a high standard". Caesarean sections and other abdominal and thoracic operations were performed on a regular basis with the avoidance of haemorrhage and sepsis, using antiseptics, anaesthetics and the cautery iron. The expectant mother was normally anesthetized with banana wine, and herbal mixtures were used to encourage healing. From the well-developed nature of the procedures employed, European observers concluded that they had been employed for some time. Bunyoro surgeons treated lung inflammations, pneumonia and pleurisy by punching holes in the chest until the air passed freely. Trephining was carried out and the bones of depressed fractures were elevated. Horrible war wounds, even penetrating abdominal and chest wounds, were treated with success, even when this involved quite heroic surgery. Amputations were done by tying a tight ligature just above the line of amputation and neatly cutting off the limb, stretched out on a smooth log, with one stroke of a sharp sword. Banyoro surgeons had a good knowledge of anatomy, in part obtained by carrying out autopsies. Inoculation against smallpox was carried out in Bunyoro and its neighbouring kingdoms. Over 200 plants are used medicinally in eastern Bunyoro alone, and recent tests have shown that traditional cures for eczema and post-measles bloody diarrhoea were more effective than Western medications. Bunyoro's medical elite, the "Bafumu", had a system of apprenticeship and even "met at periods for conferences". In Bunyoro, there was a close relationship between the state and traditional healers. Kings gave healers "land spread in the different areas so that their services would reach more people". Moreover, "in the case of a disease hitting a given area", the king would order healers into the affected district. Kabaleega is said to have provided his soldiers with anti-malarial herbs, and even to have organized medical research. A Munyoro healer reported in 1902 that when an outbreak of what he termed sleeping sickness occurred in Bunyoro around 1886–87, causing many deaths, Kabaleega ordered him "to make experiments in the interest of science", which were "eventually successful in procuring a cure". Barkcloth, which was used to bandage wounds, has been proven to be antimicrobial. Brain surgery was also practiced in the Great Lakes region of Africa. In the Kingdom of Rwanda, people afflicted with yaws were put into quarantine, and if necessary, kings closed the kingdom's borders to combat the spread of smallpox. Central Africa General and local anesthesia were widely used by traditional doctors in many parts of central Africa. Beer containing an extract of kaffir was orally given to those who sustained deep wounds from animal attacks or from warfare in order to alleviate pain, and alkaloid-containing leaves were also applied topically to injuries.
Many tribes in central Africa performed cataract surgery under local anesthesia, squeezing juices from alkaloid plants directly into the eyes to desensitize them and then pushing the cataract aside with a sharp stick, with many cases turning out successful. "The surgical skill itself was also astonishing and suggested a long experience of this practice". Southern Africa A South African, Max Theiler, developed a vaccine against yellow fever in 1937. Allan McLeod Cormack developed the theoretical underpinnings of CT scanning and co-invented the CT scanner. The first human-to-human heart transplant was performed by South African cardiac surgeon Christiaan Barnard at Groote Schuur Hospital in December 1967. See also Hamilton Naki. During the 1960s, South African Aaron Klug developed crystallographic electron microscopy techniques, in which a sequence of two-dimensional images of crystals taken from different angles are combined to produce three-dimensional images of the target. The Zulu king represented the ultimate public health official. As Ndukwana, one of Stuart's respondents, explains, "All people, like the land they lived on, belonged to the king. If any man got seriously ill, his illness would be notified to the mnumzana [head-man], who would instantly report the fact to the izinduna (chiefs) and they to the king. The king would then most likely give the order to consult diviners so as to discover the nature and cause of his illness. A sick man in Zululand was always an object of great importance." In theory the Zulu king and his local chiefs took responsibility for the well-being of their people and surrounded themselves with a variety of different doctors to assist them in this function. While not all illness was brought to the attention of the king, kraal heads had to report illness to their local chiefs. Depending on the social status of the ill person or number of persons afflicted, a report would be sent to the king. The Zulu proverb inkosi yinkosi ngabantu (a king is a king by the people) emphasized the reciprocal relationship between a king and his people. In exchange for the labor and loyalty of his subjects, the king provided for the welfare of his people, and his failure to do so could lead people to konza to another ruler. Zulu-speakers who konza'ed white rulers in neighboring Natal thus could not understand why such responsibilities were not also assumed by their new rulers. Another reason sickness and death sometimes gained attention at the highest levels of the state was the link between illness and witchcraft. Illness raised the possibility of persons who sought to destabilize the chiefdom or nation, and consequently chiefs could get in trouble for not reporting illness. Upon learning of an illness, a chief or the king would sometimes provide his own doctors, presumably the best in the area, or send for doctors or medicines from the surrounding regions. In some cases the king provided his own personal medicines. The state of public health thus also represented the metaphorical health of the nation state. During periods of crisis, such as droughts, epidemics, locust infestations, or epizootics, the king would summon his best doctors and mobilize a national response. In one notable case, state healers connected a number of unexplained deaths to the wearing of a whitish metal (perhaps tin or silver). By order of either Tshaka or Dingane (the sources seem unclear on this point), this metal was banned, collected from around the nation, and buried.
This shows the reach and power of the Zulu state in carrying out public health initiatives. Another example, perhaps more typical, was the bands of soldiers who were marshaled to kill locusts during times of infestation. Likewise, periods of drought led the king not only to hire reputed rain-doctors for the nation but also to mobilize people to look for inkhonkwanes, herbs and medicine pegs put on mountaintops by umthakathis seeking to prevent rain and thus cause social disruption (over 240 medicinal plants were used by the Zulu). Whereas these examples point to a reactive form of public health, a number of preventative measures and rituals occurred during public festivals such as the yearly Inyatela (First Fruits) and umkhosi (royal) celebrations. At these celebrations, large groups of people from around the nation came to witness and participate in ceremonies that took place within a short span of each other in December and January. At these festivals, the king, as the preeminent healer of the land, accompanied by his doctors and regiments, performed preventative measures aimed at ensuring the well-being of the nation and all who lived in it. Bone-setting was commonly practiced in Southern Africa by the native communities. Even broken fingers were treated. Abdominal wounds with protruding intestines were manipulated successfully by inserting a small calabash to hold the intestines in place and suturing the skin over it. Agriculture Tropical soils are typically low in organic matter and so present special problems to agriculturalists. Indeed, African soils (outside alluvial and volcanic areas) are in large part deficient in the characteristics of structure, texture, and chemistry which mainly determine soil fertility. Tropical areas do not have a winter season, so micro-organisms continue to break down organic matter throughout the year. Tropical soils typically have very small percentages of organic matter or humus (sometimes as little as 1%) as a result. Soils in temperate climates may, in contrast, consist of 12 to 14% or (in virgin soils in the U.S.) up to 16% organic materials, because the cold winters slow the processes of decomposition and allow organic material to build up over time. In many tropical regions, farmers practice a semi-sedentary form of agriculture, using fields for two or three years and then abandoning them for a decade or more (up to 25 years after two years of cultivation in the case of savanna woodlands in Africa), until the humus content has been restored by natural processes. Through careful observation, experimentation and selection of desirable traits over the course of 2,000 years, Africans managed to create a rich diversity of banana and plantain types (120 different types of plantains and 60 different types of bananas). Due to this, there emerged a second area of banana diversification outside of Asia, one with the highland cooking banana in the African Great Lakes and the plantain in West and Central Africa. This shows the agricultural skills and innovative practices Africans mastered and continuously developed in the millennia before Europeans arrived on the continent. Like the natives of the Amazon rainforest, Africans also utilized dark earths similar to Terra preta. Northern Africa and the Nile Valley Archaeologists have long debated whether or not the independent domestication of cattle occurred in Africa as well as the Near East and Indus Valley.
Possible remains of domesticated cattle were identified in the Western Desert of Egypt at the sites of Nabta Playa and Bir Kiseiba and were dated to c. 9500–8000 BP, but those identifications have been questioned. Genetic evidence suggests that cattle were most likely introduced from Southwest Asia, and that there may have been some later breeding with wild aurochs in northern Africa. Genetic evidence also indicates that donkeys were domesticated from the African wild ass. Archaeologists have found donkey burials in early dynastic contexts dating to ~5000 BP at Abydos, Middle Egypt, and examination of the bones shows that they were used as beasts of burden. Cotton (Gossypium herbaceum Linnaeus) may have been domesticated around 5000 BCE in eastern Sudan near the Middle Nile Basin region, where cotton cloth was being produced. East Africa Finger millet is originally native to the highlands of East Africa and was domesticated before the third millennium BCE in Uganda and Ethiopia. Its cultivation had spread to South India by 1800 BCE. Engaruka is an Iron Age archaeological site in northern Tanzania known for the ruins of a complex irrigation system. Stone channels were used to dike, dam, and level surrounding river waters. Some of these channels were several kilometers long, channelling and feeding individual plots of land. Seven stone-terraced villages along the mountainside also comprise the settlement. The Shilluk Kingdom gained control of the west bank of the White Nile as far north as Kosti in Sudan. There they established an economy based on cereal farming and fishing, with permanent settlements located along the length of the river. The Shilluk developed an extremely intensive system of agriculture based on sorghum, millet and other crops. By the 1600s, Shillukland had a population density similar to or exceeding that of the Egyptian Nile lands. Ethiopians, particularly the Oromo people, were the first to have discovered and recognized the energizing effect of the coffee bean plant. Ox-drawn plows seem to have been used in Ethiopia for two millennia, and possibly much longer. Linguistic evidence suggests that the Ethiopian plow might be the oldest plow in Africa. Teff is believed to have originated in Ethiopia between 4000 and 1000 BCE. Genetic evidence points to E. pilosa as the most likely wild ancestor. Noog (Guizotia abyssinica) and ensete (E. ventricosum) are two other plants domesticated in Ethiopia. Ethiopians used terraced hillside cultivation for erosion prevention and irrigation. A 19th-century European left a description of Yeha. Within the African Great Lakes, advanced agricultural practices were employed, such as "hydraulic practices in the mountains, man-made watering places, river diversions, hollowed-out tree-trunk pipes, irrigation on cultivated slopes, mounding in drained marshes, and irrigation of banana and palm tree gardens", as well as extensive use of terraces and the practice of double and triple cropping. The agrarian success of the Great Lakes civilization accounts for its exceptionally high levels of human density. Many foreign experts were impressed by the sophistication of the area's traditional methods of intensive farming. The agriculture of the Great Lakes was described as follows: The earliest Europeans to visit Rwanda observed intense pride in cultivating skills.
A mother would give a crying baby a toy hoe to play with. They also noted a range of techniques often superior to those of eastern European peasants, notably the use of manure, terracing, and artificial irrigation. The Chaga people have long practiced an advanced form of agriculture, involving the control and distribution of water, which allowed them to maintain a high population density. Europeans wrote of their admirably constructed irrigation works, the care they witnessed in the maintenance of them, and their powerfully centralized social organization. Sir Harry Johnston, writing in 1894, echoed this praise of Chagga industry and skill: West Africa and the Sahel The earliest evidence for the domestication of plants for agricultural purposes in Africa occurred in the Sahel region c. 5000 BCE, when sorghum and African rice (Oryza glaberrima) began to be cultivated. Around this time, and in the same region, the small guineafowl was domesticated. Other African domesticated plants were oil palm, raffia palm, African yam, black-eyed peas, Bambara groundnut, cowpea, fonio, pearl millet, and kola nuts. Investigations in the Upper Guinea forest region found connections between palm oil processing, "sacred agroforests", and anthropogenic soil, or "dark earths". They identified "palm oil production pits" as central loci for the formation of dark earths, where charred palm kernels and other organic materials enriched soils for use in fields of vegetables and trees. Once left fallow, those fields gradually morphed into biodiverse groves of palms and other forest species. These anthropogenic landscapes, patches of AfDES (African dark earths) and anthropogenic vegetation, are permeated with symbolic significance because they are the ongoing outcome of inhabitation trajectories begun by ancestors, continuing to the present day. They are not simply areas of improved soils and anthropogenic agroforests, but the relics of old towns, villages, kitchens, graveyards, and initiation society areas, many of which were inhabited by direct ancestors of current inhabitants. African oil palms were most abundant as part of the oil palm-yam complex beginning just south and east of the rice belt running from Lower Guinea across the derived savannas of the Dahomey Gap and through the Niger Delta. From there oil palm cultivation extended deep into the Central African rainforests, where swidden farmers spared and managed palms within their plots of yams, cocoyams, plantains, legumes, and other crops, and where dense rainforest alternated with emergent oil palm groves. Long disparaged by some Western scientists and environmentalists as "slash-and-burn", such ancestral systems have been shown to be effective by ecological research since the mid-twentieth century, which links traditional swidden-fallow landscapes with enhanced floral and faunal biodiversity, higher returns on labor investment, food security, nutritional balance, and overall resilience and reliability, especially when compared to monocultures. Throughout western Africa, oil palm agroforests helped to nourish human communities by contributing to food security and balanced diets, complementing carbohydrate-rich tubers and grains with fats, provitamin A carotenoids (mainly α- and β-carotenes), and vitamin E. The source of fats is particularly important within the broad swath of sub-Saharan Africa where the voracious tsetse fly and the trypanosomiasis pathogens it carries make livestock husbandry virtually impossible.
African methods of cultivating rice, introduced by enslaved Africans, may have been used in North Carolina. This may have been a factor in the prosperity of the North Carolina colony. Portuguese observers from the second half of the 15th century through the 16th century witnessed the cultivation of rice on the Upper Guinea Coast, and admired the local rice-growing technology, as it involved intensive agricultural practices such as diking and transplanting. Yams were domesticated around 8000 BCE in West Africa. Between 7000 and 5000 BCE, pearl millet, gourds, watermelons, and beans also spread westward across the southern Sahara. Between 6500 and 3500 BCE, knowledge of domesticated sorghum, castor beans, and two species of gourd spread from Africa to Asia. Pearl millet, black-eyed peas, watermelon, and okra later spread to the rest of the world. In the absence of more detailed historical and archaeological studies on the chronology of terracing, intensive terrace farming is believed to have been practiced before the early 15th century CE in West Africa. Terraces were used by many groups, notably the Mafa, Ngas, Gwoza, and the Dogon. Southern Africa In order to prevent erosion, southern Africans built dry-stone terraces on steep hillsides. Randall MacIver described the irrigation technology used in Nyanga, Zimbabwe. Cattle feature as a primary source of sustenance and political and economic power in many parts of southern Africa. Sotho, Tswana and Nguni kingdoms rose to prominence on the back of successful cattle keeping, supplemented by cultivation. Cattle (and possibly goats) played a central role in Nguni culture. Nguni-speaking South Africans in KwaZulu-Natal revered the Nguni cattle. By 1824, Shaka Zulu's royal cattle pen contained 7,000 pure white Nguni cattle. Similarly, when the original pioneers arrived in Zimbabwe (then Rhodesia), they reported that the country was 'teeming with cattle that were, apparently, in good health and were immune to local diseases'. South Africans were known for being experts in finding lost cattle. A single Zulu was able to locate, over a large area, 10 cattle that had been lost during a conflict two years earlier. Like many traditional societies, the Himba have astonishingly sharp vision and focus, believed to come from their cattle rearing and the need to identify each cow's markings. Central Africa For many years, scientists argued that Africa's first agriculturalists hacked and burned their way through a primeval "Guineo-Congolian rainforest" stretching from Sierra Leone to Congo and beyond. In this telling, oil palms were the survivors of forests destroyed by African farmers, leaving "derived savannah" behind. New research has overturned that interpretation, however. An "aridification event" about 4,000–5,000 years ago wiped out forests and encouraged the spread of grassland across western Africa. Oil palms probably expanded into these gaps ahead of human settlers, the seeds spread by animals. Humans helped the palm along, though, protecting it from grassland fires and voracious elephants. Linguistic evidence shows a close link between oil palm dispersion and the arrival of Bantu-speaking agriculturalists in the Congo basin beginning around 1,000 BCE. Few central and southern African languages use non-Bantu terms for the oil palm, suggesting that the tree came with migrants, either carried by them or sharing the same ecological openings in the forest. As a tradition among Mfumte-speakers of northern Cameroon tells us, oil palms "follow men", growing in the wake of human activity.
The interplay of climate and agriculture pushed the oil palm's frontier to the south and east, but progress was slow. Nineteenth-century travelers reported only scattered groves around Lakes Kivu and Tanganyika, despite amenable environmental conditions. Tanzanians interviewed in the twentieth century clearly indicated that oil palms were recent arrivals, brought by people rather than by animals. Rather than serving as agents of deforestation (with oil palms the evidence of ecological vandalism), African farmers may in fact be responsible for afforestation in many places. Ethnographic research, coupled with historic aerial photography, showed that forests grew out of the moist, nutrient-rich soils left behind in the shade of abandoned village palm groves. Rejecting earlier classifications like "semi-wild" or "sub-spontaneous", geographer Case Watkins describes these palm groves as "emergent" phenomena. They are not purely human creations, but rather develop out of human interactions with a complex set of natural forces. These emergent groves often give way to other tree species, creating true forest where none had existed. As early as the 1920s, elders in Congo told a missionary that they and their ancestors were not "shifting cultivators" cutting out clearings in a forest: they had built the forest with their farming practices. At the time, few Europeans cared to listen. One colonial forester recalled how blinded he had been by stereotypes: "What I had in my inexperience looked upon as glorious virgin [forest] growth, dating from the Flood, quickly revealed itself to my better experienced and disappointed eye as nothing more than secondary growth of moderately good quality." With the help of local guides, seeing a landscape was "like reading a book", revealing human history in the environment. Across much of western and central Africa, forests have probably been advancing rather than retreating for the past 1,000 years or so, and this despite bouts of low rainfall. Far from marking humanity's destructive impact on forests, oil palms stand across Africa as a testament to the versatility, ingenuity, and sustainability of local farming practices. Textiles Northern Africa Egyptians wore linen from the flax plant, and used looms as early as 4000 BCE. Nubians mainly wore cotton, beaded leather, and linen. The djellaba, typically made of wool, was worn in the Maghreb. West Africa and the Sahel Valentim Fernandes, writing in the early sixteenth century on the basis of news received in Lisbon from early travellers, praised the high quality of Mandinka cotton cloth that was found all along the west coast of West Africa. Such comments were repeated with regard to the cotton cloth of the "Slave Coast" and Benin as well, produced especially in centers in the Yoruba country. John Phillips, an English captain who sailed to the Slave Coast at the end of the seventeenth century, was particularly impressed with local cloth, some of which was purchased by the European traders and fetched high prices in the New World. Some of the oldest surviving African textiles were discovered at the archaeological site of Kissi in northern Burkina Faso. They are made of wool or fine animal hair in a weft-faced plain weave pattern. Fragments of textile have also survived from thirteenth-century Benin City in Nigeria. In the Sahel, cotton is widely used in making the boubou (for men) and kaftan (for women).
Bògòlanfini (mudcloth) is a cotton textile dyed with fermented mud, tree sap and teas, handmade by the Bambara people of the Beledougou region of central Mali. By the 12th century, so-called Moroccan leather, which actually came from the Hausa area of northern Nigeria, was supplied to Mediterranean markets and found its way to the fairs and markets of Europe. Kente was produced by the Akan people (Asante, Fante, Nzema) and Ewe people in the countries of Togo, Ghana and Côte d'Ivoire. During the 11th century, the now vanished people of the Tellem (as they are called by the Dogon, who have inhabited the region from the 16th century onwards) entered the area from the south, probably from the rain forest. From the 11th up to the 15th century, the Tellem buried their dead in the remaining old granaries and in new buildings they built in the caves. The dead were buried with wooden headrests, bows, quivers, hoes, musical instruments, baskets, gourds, leather sandals, boots, bags, amulets, woolen and cotton blankets, coifs, tunics, and fiber aprons. These perishable objects, found in a reasonably good state of preservation in the caves, belong to the oldest objects that have been preserved from Sub-Saharan Africa. Archaeologists and textile experts who have analyzed the Tellem textiles found that they were of high quality and that no other region in the world has such a great variety of linear and geometrical patterns in cotton fabrics by means of a single color (the only one available, i.e. indigo). According to Rita Bolland, the Tellem designs have been the object of a search for infinite combinations which has persisted to this day. To illustrate this search by Tellem weavers, Gerdes examines some patterns found on preserved fragments of tunics, sleeves, coifs and caps, woven in plain weave, i.e. the weave in which the horizontal and vertical threads cross each other one over, one under. According to Gerdes, the average width of the threads is 1 mm. The weavers alternated groups of natural white cotton threads with groups of blue, indigo-dyed threads. From left to right, six vertical white threads are followed by four blue threads; from top to bottom, three horizontal white threads are followed by three blue threads. These yield a plane pattern. The basic rectangle has dimensions ten (=6+4) by six (=3+3), or (6+4) x (3+3). Gerdes adds that generally, the dimensions are (m+n) x (p+q), where m, n, p, and q are natural numbers. The Tellem weavers experimented with dimensions and found relationships between the dimensions and the (symmetry) properties of the patterns that resulted. In particular, the variation among the discovered plain weave fragments suggests that the weavers knew the effect on the patterns of the selection of even and odd dimensions, in addition to how these dimensions (m+n) and (p+q) are produced. The Tellem patterns from the 11th and 12th centuries feature woven rectangles followed by fragments of the respective plane patterns, which are two-color patterns in the sense that for each there is a rigid motion of the plane (translation, rotation, or reflection) that reverses the blue and white colors. Furthermore, according to Gerdes, the Tellem weavers employed a variant of the plain weave, whereby in one direction double threads are used instead of single threads. In this way, the weavers were able to weave cloths with decorative and strip patterns.
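The interplay of the thread groupings with the one-over, one-under structure of plain weave can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical model rather than a reconstruction of any specific Tellem fragment: it assumes that the visible color at each crossing is that of the vertical thread when the row and column indices have the same parity and that of the horizontal thread otherwise, and it prints the resulting two-color plane pattern for the (6+4) x (3+3) basic rectangle described by Gerdes.

```python
# Minimal sketch of a plain-weave two-color pattern, under the stated assumptions.
# Vertical (warp) threads alternate groups of m white and n blue threads;
# horizontal (weft) threads alternate groups of p white and q blue threads.
# At each crossing, the warp shows on top when (row + column) is even,
# otherwise the weft shows ("one over, one under").

def thread_color(index: int, white_run: int, blue_run: int) -> str:
    """Return 'W' or 'B' for the thread at `index`, given the run lengths."""
    return "W" if index % (white_run + blue_run) < white_run else "B"

def weave(rows: int, cols: int, m: int, n: int, p: int, q: int) -> list[str]:
    """Visible colors of a rows-by-cols patch of plain weave."""
    pattern = []
    for r in range(rows):
        line = []
        for c in range(cols):
            warp = thread_color(c, m, n)   # vertical thread at column c
            weft = thread_color(r, p, q)   # horizontal thread at row r
            line.append(warp if (r + c) % 2 == 0 else weft)
        pattern.append("".join(line))
    return pattern

if __name__ == "__main__":
    # The (6+4) x (3+3) basic rectangle: 6 white / 4 blue warps, 3 white / 3 blue wefts.
    for row in weave(rows=12, cols=20, m=6, n=4, p=3, q=3):
        print(row)
```

Changing m, n, p, and q to other even or odd values in this toy model shows how the parity of the dimensions alters the symmetry of the resulting pattern, which is the kind of relationship Gerdes attributes to the Tellem weavers' experimentation.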
With woven cloth, the tailor could begin his/her work: drawing and cutting pieces; knotting, stitching and sewing them together; and decorating, for example, a tunic with a plaited band along the neck opening. Geometric knowledge is imperative in each of these activities. Decorative bands were plaited both with even and odd numbers of strings. Among the plaited bands discovered in the caves, there are on one hand bands made out of 4, 6, 8, and 14 strings, and, on the other hand, out of 5, 7, and 9 strings. The selection of an even or an odd number of strings and the weave, either plain or not, has implications for the visible decorative patterns. In addition, the Tellem weavers also produced blankets made of wool. Central Africa Among the Kuba people, in present-day Democratic Republic of Congo, raffia cloth was woven. They used the fibers of the leaves of the raffia palm tree. Weaving with palm leaves was a highly developed art in Central Africa, and European travelers and missionaries compared woven palm-leaf cloth to the finest European-made silks. Filippo Pigafetta praised the "marvelous arte" of "making cloaths of sundry sortes, as Velvets shorne and unshorne, Sattens, Taffata, Damaskes, Sarcenettes and such like" in the eastern provinces and areas adjoining Kongo. Sarcenet was a fine silk, but, unlike that made in Europe, the type made in "this countrey and other places thereabouts" was "not of any silken stuffe" but "of the leaves of Palme trees." Indeed, the finest specimens were too "precious" for any but "the king, and such as it pleaseth him". Cavazzi wrote that the beaten leaves of one type of palm resulted in such fine, soft fibers that the weave of the cloth thus produced brought him to "astonishment". To produce such finely woven luxury cloth, the leaves had to be worked to a greater degree than the tying and laying required for thatching. Pigafetta noted that the process started with keeping the palms "under and lowe to the grounde, euery yeare cutting them, and watering them, to the ende they may grow smal and tender against the new spring". Once those "tender" leaves were "cleansed & purged after their manner," techniques that he did not further specify, "they drawe forth their threedes, which are all very fine and dainty, and all of one evennesses, saving that those which are longest, are best esteemed. For of those they weave their greatest peeces." Italian travellers of the late seventeenth and early eighteenth century did African cloth the considerable honor of comparing it with the best cloth produced in their own land, itself regarded as being among the best in Europe. Thus, Antonio Zucchelli da Gradisca (an Italian Capuchin priest) thought that the libongos (monetary cloth) he saw produced in Nsoyo, a coastal province of Kongo, around 1705, "even though made of vile material like palm leaves", were "well worked and woven ... that it resembled velvet ... and is just as strong and durable." John K. Thornton argues, using the reports of contemporary European travellers in Africa and the findings of archaeologists, that African textile manufacturing was far more advanced than has been recognized. Large quantities of textiles were produced. By the standards of the seventeenth or eighteenth century world, he concludes, African textile manufacturers were producing their goods at the same or higher levels of productivity as their European counterparts.
For example, Leiden, one of the leading European centres of textile production, which had almost the same population as Momboares in the eastern Congo, produced about 100,000 metres of cloth per year in the early seventeenth century, as compared to 400,000 metres in Momboares. Not only were these products traded widely within the continent by African merchants; through European merchants along the West African coast, African textiles were also exported to the Caribbean and South America. East Africa Barkcloth, made from the mutuba tree (Ficus natalensis), was used by the Baganda in Uganda. Kanga are Swahili pieces of fabric that come in rectangular shapes, made of pure cotton, and put together to make clothing. A kanga is as long as one's outstretched hand and wide enough to cover the length of one's neck. Kitenge are similar to kangas and kikoy, but are of a thicker cloth, and have an edging only on a long side. Kenya, Uganda, Tanzania, and Sudan are some of the African countries where kitenge are worn. In Malawi, Namibia and Zambia, kitenge are known as chitenge. Lamba Mpanjaka was cloth made of multicolored silk, worn like a toga on the island of Madagascar. Shemma, shama, and kuta are all cotton-based cloths used for making Ethiopian clothing. Three types of looms are used in Africa: the double heddle loom for narrow strips of cloth, the single heddle loom for wider spans of cloth, and the ground or pit loom. The double heddle loom and single heddle loom might be of African origin. The ground or pit loom is used in the Horn of Africa, Madagascar, and North Africa and is of Middle Eastern origin. Southern Africa In southern Africa one finds numerous uses of animal hides and skins for clothing. The Ndau in central Mozambique and the Shona mixed hide with barkcloth and cotton cloth. Cotton weaving was practiced by the Ndau and Shona. Cotton cloth was referred to as machira. The Venda, Swazi, Basotho, Zulu, Ndebele, and Xhosa also made extensive use of hides. Hides came from cattle, sheep, goats, elephants, and from the jangwa (part of the mongoose family). Leopard skins were coveted and were a symbol of kingship in Zulu society. Skins were tanned to form leather, dyed, and embedded with beads. Maritime technology In 1987, the third oldest canoe in the world and the oldest in Africa, the Dufuna canoe, was discovered in Nigeria by Fulani herdsmen near the Yobe river and the village of Dufuna. It dates to approximately 8000 years ago, and was made from African mahogany. Northern Africa and the Nile Valley Carthage's fleet included large numbers of quadriremes and quinqueremes, warships with four and five ranks of rowers. Its ships dominated the Mediterranean. The Romans, however, were masters at copying and adapting the technology of other peoples. According to Polybius, the Romans seized a shipwrecked Carthaginian warship and used it as a blueprint for a massive naval build-up, adding their own refinement, the corvus, which allowed an enemy vessel to be "gripped" and boarded for hand-to-hand fighting. This negated the initially superior Carthaginian seamanship and ships. Early Egyptians knew how to assemble planks of wood into a ship hull as early as 3000 BCE (about 5,000 years ago). The oldest ships yet unearthed, a group of 14 discovered in Abydos, were constructed from wooden planks which were "sewn" together. Woven straps were used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams.
Because the ships are all buried together and near a mortuary complex belonging to Pharaoh Khasekhemwy, originally the boats were all thought to have belonged to him. One of the 14 ships dates to 3000 BCE, however, and is now thought to perhaps have belonged to an earlier pharaoh, possibly Pharaoh Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a 43.6-meter vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500 BCE, is a full-size surviving example which may have fulfilled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints. West Africa and the Sahel A Nok sculpture portrays two individuals, along with their goods, in a dugout canoe. Both of the anthropomorphic figures in the watercraft are paddling. The Nok terracotta depiction of a dugout canoe may indicate that Nok people utilized dugout canoes to transport cargo along tributaries (e.g., Gurara River) of the Niger River and exchanged goods in a regional trade network. The Nok terracotta depiction of a figure with a seashell on its head may indicate that the span of these riverine trade routes may have extended to the Atlantic Coast. In the maritime history of Africa, there is the earlier Dufuna canoe, which was constructed approximately 8000 years ago in the northern region of Nigeria; as the second earliest form of water vessel known in Sub-Saharan Africa, the Nok terracotta depiction of a dugout canoe was created in the central region of Nigeria during the first millennium BCE. In the 14th century CE, King Abubakari II, the brother of King Mansa Musa of the Mali Empire, is thought to have had a great number of boats sitting on the coast of West Africa. The boats would communicate with each other by drums. Malian boats at this time were canoes of different sizes. Numerous sources attest that the inland waterways of West Africa saw extensive use of war-canoes and vessels used for war transport where permitted by the environment. Most West African canoes were of single log construction, carved and dug out from one massive tree trunk. The primary method of propulsion was by paddle and, in shallow water, poles. Sails were also used to a lesser extent, particularly on trading vessels. The silk cotton tree provided many of the most suitable logs for massive canoe building, and launching was via wooden rollers to the water. Boat building specialists were to emerge among certain peoples, particularly in the Niger Delta. Some canoes were of great length, carrying 100 men or more. Documents from 1506, for example, refer to war-canoes on the Sierra Leone river carrying 120 men. Others refer to Guinea coast peoples using canoes of varying sizes, some long and broad, with sharp pointed ends, rowing benches on the side, quarter decks or forecastles built of reeds, and miscellaneous facilities such as cooking hearths and storage spaces for crew sleeping mats.
The engineering and methodology (e.g., cultural valuations, use of iron tools) used in the construction of West African dugout canoes (e.g., rounded point sterns and pointed bows with a 15°–50° angle above the water surface, increased stability via a partly rounded or flat base, a v-shaped hull, a shallow draft for sailing water depths of less than one foot, occasionally spanning more than one hundred feet in length) contributed to the capability of the canoes to persist and navigate throughout the interconnected river system that connected the Benue River, Gambia River, Niger River, and Senegal River as well as Lake Chad; this river system connected diverse sources of water (e.g., lakes, rivers, seas, streams) and ecological zones (e.g., Sahara, Sahel, Savanna), and allowed for the transport of people, information, and economic goods along riverine trade networks connecting various locations (e.g., Bamako, Djenne, Gao, Mopti, Segou, Timbuktu) throughout West Africa and North Africa. The knowledge and understanding (e.g., hydrography, marine geography, how canoe navigation is affected by the depth of the water, tides in the ocean, currents, and winds) of West African canoers facilitated the skillful navigation of various channels of the regional river system while engaging in activities such as trade and fishing. The construction schema for West African dugout canoes was also used for canoes constructed in the Americas by the African diaspora. The sacredness of canoe-making is expressed in a proverb from Senegambia: "The blood of kings and the tears of the canoe-maker are sacred things which must not touch the ground." In addition to possessing economic value, West African dugout canoes also possessed a sociocultural and psychospiritual value. In 1735 CE, John Atkins observed: "Canoos are what used through the whole Coast for transporting Men and Goods." European rowboats, which frequently capsized, were outmaneuvered and outperformed in terms of speed by West African dugout canoes. Barbot stated, regarding West African canoers and West African dugout canoes, that the "speed with which these people generally make these boats travel is beyond belief". Alvise da Cadamosto also observed how "effortlessly" Portuguese caravels were outperformed by Gambian dugout canoes. The skill of Kru canoers in navigating the challenging conditions of the sea was also observed by Charles Thomas. During the 1590s CE, Komenda and Takoradi in Ghana served as production areas for dugout canoes made by the Ahanta people. By 1679 CE, Barbot observed Takoradi to be "a major canoe-producing center, crafting dugouts capable of carrying up to eight tons". Between the 17th century CE and the 18th century CE, a production area and/or marketplace for dugout canoes was in Shama, which later became only a marketplace on Supome Island. During the 1660s CE, in addition to other local canoers manufacturing dugout canoes, the Fetu people were observed by Muller as having bought dugout canoes that were made by the Ahanta people. West Africans (e.g., in Ghana, Ivory Coast, Liberia, Senegal) and western Central Africans (e.g., in Cameroon) independently developed the skill of surfing. During the 1640s CE, Michael Hemmersam provided an account of surfing on the Gold Coast: the parents "tie their children to boards and throw them into the water".
In 1679 CE, Barbot provided an account of surfing among Elmina children in Ghana: children at Elmina learned "to swim, on bits of boards, or small bundles of rushes, fasten'd under their stomachs, which is a good diversion to the spectators." James Alexander provided an account of surfing in Accra, Ghana in 1834 CE: "From the beach, meanwhile, might be seen boys swimming into the sea, with light boards under their stomachs. They waited for a surf; and came rolling like a cloud on top of it. But I was told that sharks occasionally dart in behind the rocks and 'yam' them." Thomas Hutchinson provided an account of surfing in southern Cameroon in 1861: fishermen rode small dugouts "no more than six feet in length, fourteen to sixteen inches in width, and from four to six inches in depth." East Africa It is known that ancient Axum traded with India, and there is evidence that ships from Northeast Africa may have sailed back and forth between India/Sri Lanka and Nubia trading goods, and even to Persia, Himyar and Rome. Aksum was known by the Greeks for having seaports for ships from Greece and Yemen. Elsewhere in Northeast Africa, the 1st century CE Greek travelogue Periplus of the Red Sea reports that Somalis, through their northern ports such as Zeila and Berbera, were trading frankincense and other items with the inhabitants of the Arabian Peninsula as well as with then Roman-controlled Egypt. Medieval Swahili kingdoms are known to have had trade port islands and trade routes with the Islamic world and Asia, and were described by Greek historians as "metropolises". Famous African trade ports such as Mombasa, Zanzibar, Mogadishu and Kilwa were known to Chinese sailors such as Zheng He and medieval Islamic historians such as the Berber Islamic voyager Abu Abdullah ibn Battuta. The dhow was the ship of trade used by the Swahili. They could be massive. It was a dhow that transported a giraffe to the court of the Chinese emperor Yong Le in 1414. Few kingdoms south of the Sahara possessed a more developed naval organization than that of Buganda, which dominated Lake Victoria with its navy of up to 20,000 men and war canoes as long as seventy-two feet. Architecture West Africa The Walls of Benin City are collectively the world's largest man-made structure and were semi-destroyed by the British in 1897. Fred Pearce wrote in New Scientist: "They extend for some 16,000 kilometres in all, in a mosaic of more than 500 interconnected settlement boundaries. They cover 6500 square kilometres and were all dug by the Edo people. In all, they are four times longer than the Great Wall of China, and consumed a hundred times more material than the Great Pyramid of Cheops. They took an estimated 150 million hours of digging to construct, and are perhaps the largest single archaeological phenomenon on the planet." Sungbo's Eredo is the second largest pre-colonial monument in Africa, larger than the Great Pyramids or Great Zimbabwe. Built by the Yoruba people in honour of one of their titled personages, an aristocratic widow known as the Oloye Bilikisu Sungbo, it is made up of sprawling rammed earth walls and the valleys that surrounded the town of Ijebu-Ode in Ogun state, Nigeria. Tichit is the oldest surviving archaeological settlement in the Sahel and the oldest all-stone settlement south of the Sahara. It is thought to have been built by the Soninke people and is thought to be the precursor of the Ghana Empire.
The Great Mosque of Djenné is the largest mud brick or adobe building in the world and is considered by many architects to be the greatest achievement of the Sudano-Sahelian architectural style, albeit with definite Islamic influences. Northern Africa and the Nile Valley Around 1000 CE, cob (tabya) first appeared in the Maghreb and al-Andalus. The Egyptian step pyramid built at Saqqara is the oldest major stone building in the world. The Great Pyramid was the tallest man-made structure in the world for over 3,800 years. The earliest style of Nubian architecture included the speos, structures carved out of solid rock, an A-Group (3700–3250 BCE) achievement. Egyptians made extensive use of the process at Speos Artemidos and Abu Simbel. Sudan, site of ancient Nubia, has more pyramids than anywhere else in the world, even more than Egypt, with 223 pyramids. Around 1100, the ventilator was invented in Egypt. East Africa Aksumites built in stone, erecting monolithic stelae, such as King Ezana's Stele, on top of the graves of kings. Later, during the Zagwe dynasty, churches were carved out of solid rock, such as the Church of Saint George at Lalibela. Thimlich Ohinga, a World Heritage Site, is a complex of stone-built ruins located in Kenya. Southern Africa In southern Africa one finds ancient and widespread traditions of building in stone. Two broad categories of these traditions have been noted: the Zimbabwean style and the Transvaal Free State style. North of the Zambezi one finds very few stone ruins. Great Zimbabwe, Khami, and Thulamela use the Zimbabwean style. Sotho/Tswana architecture represents the Transvaal Free State style. The ||Khauxa!nas stone settlement in Namibia represents both traditions. The Kingdom of Mapungubwe (1075–1220) was a pre-colonial Southern African state located at the confluence of the Shashe and Limpopo rivers; it marked the center of a pre-Shona kingdom which preceded the culmination of southeast African urban civilization in Great Zimbabwe. The Tswana lived in city-states with stone walls and complex sociopolitical structures, which they built in the 1300s or earlier. These cities had populations of up to 20,000 people, which at the time rivalled Cape Town in size. Communication systems Griots are repositories of African history, especially in African societies with no written language. Griots can recite genealogies going back centuries. They recite epics that reveal historical occurrences and events. Griots can go for hours and even days reciting the histories and genealogies of societies. They have been described as living history books. Northern Africa and the Nile Valley Africa's first writing system, and the beginning of the alphabet, was Egyptian hieroglyphs. Two scripts have been the direct offspring of Egyptian hieroglyphs: the Proto-Sinaitic script and the Meroitic alphabet. Out of Proto-Sinaitic came the South Arabian alphabet and the Phoenician alphabet, out of which the Aramaic alphabet, the Greek alphabet, the Brāhmī script, and the Arabic alphabet were directly or indirectly derived. Out of the South Arabian alphabet came the Ge'ez alphabet, which is used to write Blin (Cushitic), Amharic, Tigre, and Tigrinya in Ethiopia and Eritrea. Out of the Phoenician alphabet came Tifinagh, the Berber alphabet mainly used by the Tuaregs. The other direct offspring of Egyptian hieroglyphs was the Meroitic alphabet. It began in the Napatan phase of Nubian history, Kush (700–300 BCE). It came into full fruition in the 2nd century, under the successor Nubian kingdom of Meroë.
The script can be read but not understood; the discovery at el-Hassa, Sudan, of ram statues bearing Meroitic inscriptions might assist in its translation. The Sahel With the arrival of Islam came the Arabic alphabet in the Sahel. Arabic writing is widespread in the Sahel. The Arabic script was also used to write native African languages. The script used in this capacity is often called Ajami. The languages that have been or are written in Ajami include Hausa, Mandinka, Fulani, Wolofal, Tamazight, Nubian, Yoruba, Songhai, and Kanuri. West Africa The N'Ko script was developed by Solomana Kante in 1949 as a writing system for the Mande languages of West Africa. It is used in Guinea, Côte d'Ivoire, Mali, and neighboring countries by a number of speakers of Manding languages. Nsibidi is an ideographic set of symbols developed by the Ekoi people of southeastern coastal Nigeria for communication. A complex implementation of Nsibidi is known only to initiates of the Ekpe secret society. Adinkra is a set of symbols developed by the Akan (Ghana and Côte d'Ivoire), used to represent concepts and aphorisms. The Vai syllabary is a syllabic writing system devised for the Vai language by Mɔmɔlu Duwalu Bukɛlɛ in Liberia during the 1830s. Adamorobe Sign Language is an indigenous sign language developed in the Adamorobe Akan village in Eastern Ghana. The village has a high incidence of genetic deafness. Usman dan Fodio accomplished a great feat in raising the literacy rate of the people of the Sokoto Caliphate in only a few decades. Multiple independent historical surveys have estimated that the male literacy rate stood at about 96-97% and the female literacy rate remained between 93% and 95% by the death of the Shehu. The female literacy rate of Sokoto in 1812 was higher than that of women in the United Kingdom and the United States. The British traveler Col. Runciman reported in awe that the people of Sokoto "were literate not to a man, but to a woman". Central Africa Across eastern Angola and northwestern Zambia, sona ideographs were used as mnemonic devices to record knowledge and culture. Gerhard Kubik explains the various aspects of sona that indicate space and time concepts as circular, multidirectional, and multidimensional. For instance, in terms of directionality of drawing, the sona is performed from left to right, from bottom to top (on a wall), or from close to the body to far. This mirrors the process of the line, which in the theory of the Eulerian path returns to the beginning. Furthermore, Kubik describes sona as being synaesthetic, with visuality and aurality paired in the dot and line structure of the drawings. He concludes, remarkably, that "[the evidence of inherent patterns] shows that the African discovery, unparalleled in any other culture in the world, of how to make use of the reactions of the human perceptual apparatus by deliberately creating configurations which must 'decompose' and reconstitute as 'inherent patterns,' encompasses both the aural and the visual realm." Thus sona is a well-established mediating system, or apparatus, that coded "deterritorialized flows" through writing, speech, voice, sound instruments, and (masquerade) costumes. Bárbaro Martínez Ruiz writes of a broad practice of this type of writing in Central Africa and the Cuban diaspora, especially through the Bakongo people. He argues that writing includes performance, objects, rhythms, gestures, and even food identifiers.
Sona demonstrates that even in so-called unmediated practices, language operates as a protocol that negotiates power relationships and intimate acts of colonization. That is, sona is a code, based on a binary code much like computerized information processing, that does something in addition to saying something. Simon Battestini details the various ways that the term writing can be analyzed in Africa, what he distills as all "encoded traces of a text". In other definitions, writing is seized thought, which yet preserves its noetic-poetic and heterogeneous modes of communication. "Sona has been compared to computing because of its recursive logic of both visual patterning and its framing of social dynamics. It resists any medium that has been designed to decouple information from communication, whether the book or the computer". Lukasa memory boards were also used among the BaLuba. Talking drums exploit the tonal aspect of many African languages to convey very complicated messages. Talking drums can send messages over long distances. Bulu, a Bantu language, can be drummed as well as spoken. In a Bulu village, each individual had a unique drum signature. A message could be sent to an individual by drumming his drum signature. It has been noted that a message can be sent from village to village within two hours or less using a talking drum. East Africa On the Swahili coast, the Swahili language was written in Arabic script, as was the Malagasy language in Madagascar. The people of Uganda developed a form of writing based on a floral code, and the use of talking drums was widespread as well. The ancient court music composers of Buganda discovered how human auditory perception processes a complex sequence of rapid, irregular sound impulses by splitting the total image into perceptible units at different pitch levels. They made use of their discovery in composition, creating indirectly polyphonies of interweaving melodic lines that would suggest words to a Luganda speaker, as if some spirit were talking to the performers of a xylophone or to the lone player of a harp (ennanga). The combination of the first two xylophone parts creates 'illusory' melodic patterns that exist only in the observer's mind, not actually played by either of the first two musicians directly. That these 'resultant' or 'inherent' patterns are materialised only in the minds of listeners is a remarkable feature of Bugandan music. It is probably the oldest example of an audio-psychological effect known as auditory streaming (first recognized in Western literature as the melodic fission effect) deliberately occurring in music. The music would be produced by regular movement, with the fingers or sticks combining two interlocking tone-rows, but the patterns heard would be irregular, often asymmetric and complex. All 102 xylophone compositions that were transcribed by Gerhard Kubik in Buganda during the early 1960s reveal an extremely complex structure, and they 'fall apart' into perception-generated inherent melodic-rhythmic patterns. No one, so far, has succeeded in composing a new piece that would match in quality and complexity those compositions handed down for generations. Some of them can even be dated by correlating the accompanying song texts with the reigns of past kings. The Agikuyu of Kenya used a mnemonic-pictographic device they called Gicandi to record and spread knowledge. This kind of memory device uses a pictorial symbolism which proceeds by simplified pictures, tracing only part of an object or a conventional image.
A small number of pictures is sufficient to record a happening, to suggest to a medicine-man the formula for magical practices, and to suggest to a singer the object and verses of his song. A Kikuyu was also able to follow the history of his herd by notches on a stick. A certain notch on a stick that identified a specific cow would signify insemination; another notch would record the birth of the calf, and by such records the cattle breeder was able to estimate the amount of milk from his herd. It is noteworthy that the word for letters or numerals in Kikuyu is ndemwa, which translates to "those that have been cut". Father Cagnolo of the Consolata Fathers, who lived among the Kikuyu in the 1930s, recorded that: Transportation technologies North Africa Awareness of the wheel may have existed in ancient Egypt since the 5th Dynasty. During the 13th Dynasty, the earliest wheeled transport emerged in ancient Egypt. The potter's wheel was introduced into ancient Nubia by ancient Egypt. The wheelhead of a potter's wheel, which was made of clay and dated to 1850 BCE, was found at Askut. Since the Meroe period, ox-powered water wheels, specifically the saqiya, and the shaduf were used in Nubia. Between 3200 BP and 1000 BP, various Central Saharan rock art sites from the Horse Period were created depicting charioteers, mostly upon horse-driven chariots and rarely upon cattle-driven chariots; these painted and engraved depictions were distributed in 81 painted and 120 engraved depictions in Algeria, 18 painted and 44 engraved depictions in Libya, 6 engraved depictions in Mali, 125 engraved depictions in Mauritania, 96 engraved depictions in Morocco, 29 engraved depictions in Niger, and 21 engraved depictions in Western Sahara, and were likely created by the Garamantes, whose ancestors were ancient Berbers and Saharan pastoralists. Rock art engravings of ox-drawn wagons and horse-driven chariots can be found in Algeria, Libya, southern Morocco, Mauritania, and Niger. In the 5th century BCE, Herodotus reported the use of chariots by the Garamantes in the Saharan region of North Africa. By the 4th century BCE, the water wheel, particularly the noria and sakia, was created in ancient Egypt. In the 1st century CE, Strabo reported the use of chariots by the Nigretes and Pharusii in the Saharan region of North Africa. West Africa Between 3200 BP and 1000 BP, various Central Saharan rock art sites from the Horse Period were created depicting charioteers, mostly upon horse-driven chariots and rarely upon cattle-driven chariots; these painted and engraved depictions were distributed in 81 painted and 120 engraved depictions in Algeria, 18 painted and 44 engraved depictions in Libya, 6 engraved depictions in Mali, 125 engraved depictions in Mauritania, 96 engraved depictions in Morocco, 29 engraved depictions in Niger, and 21 engraved depictions in Western Sahara, and were likely created by the Garamantes, whose ancestors were ancient Berbers and Saharan pastoralists. Rock art engravings of ox-drawn wagons and horse-driven chariots can be found in Algeria, Libya, southern Morocco, Mauritania, and Niger. At Dhar Tichitt, there is Neolithic rock art that depicts a human figure with a link in hand, connecting the figure to yoked oxen that are pulling a cart. At Dhar Walata, there is Neolithic rock art that depicts a human figure in relation to an ox cart.
At Bled Initi, which is a hamlet near Akreijit, there are two depictions of ox carts that have been estimated to date between 650 BCE and 380 BCE, and are consistent with the artistic style of other aspects of the Dhar Tichitt Early Iconographic Tradition. At Tondia, in Niger, rock art portrays an ox cart; the use of the ox cart in Saharan West Africa may have begun to decline as transport by camel increased between the 4th century CE and the medieval period. In 1670 CE, the king of Allada was gifted a gilded carriage, along with a horse bit and horse harness, by the French West India Company. In 1772 CE, a European account reported the observed use of two coaches in a procession, which were carried by twelve men each as part of a ceremony in the kingdom of Dahomey, at Abomey. Between 1789 CE and 1797 CE, king Agonglo of Dahomey owned a carriage, which was still intact during the 1870s CE. Throughout the 19th century CE, numerous European accounts reported the observed use of many wheeled transports, including carriages, which were part of ceremonial processions in the kingdom of Dahomey. In 1824 CE, the king of Lagos gifted a large-sized carriage to the emperor of Brazil. During the 1840s CE, king Eyamba V of Old Calabar acquired two horse-drawn carriages. In 1841 CE, Asantehene Kwaku Dua I was gifted a carriage by the Methodist Missionary Society. In 1845 CE, the kingdom of Dahomey used a cart against Badagry, which was later seized. In 1850 CE, a European account in the kingdom of Dahomey detailed: "'a glass-coach, the handiwork of Hoo-ton-gee, a native artist-a square with four large windows, on wheels', and also ' ... [a] wheeled-chair with a huge bird before it, on wheels of Dahomey make ... [a] warrior on wheels, Dahomey make, ... [and a] Dahoman-made chair on wheels, covered with handsome country cloth'." In 1864 CE, a European account detailed Dahomey carriages "'of home, or native manufacture', including 'a blue-green shandridan, with two short flagstaffs attached to the front'." In 1866 CE, a European account reported the observed use of a carriage in a procession, which was part of a ceremony in the kingdom of Borno, at Kukawa. In 1870 CE, a European account reported the observed use of a mule-drawn carriage in a ceremonial procession, which had been gifted to the Shehu of Borno by British explorers in 1851 CE, at Kukawa. In 1871 CE, a European account in the kingdom of Dahomey detailed: "'a dark green coach, evidently of native manufacture'." Southern Africa At Tsodilo Hills, in Botswana, white painted rock art may depict a wagon and wagon wheel, which may date after, or even considerably after, the 1st millennium CE. Warfare Most of tropical Africa did not have cavalry. Horses would be wiped out by the tsetse fly, and it was not possible to domesticate the zebra. The armies of tropical Africa consisted mainly of infantry. Weapons included bows and arrows; the bows had low draw strength, compensated for with poison-tipped arrows. Throwing knives were used in Central Africa, spears could double as thrusting and cutting weapons, and swords were also in use. Heavy clubs, which could break bones when thrown, battle axes, and shields of various sizes were in widespread use. Later came guns, including muskets such as the flintlock, wheellock, and matchlock. Contrary to popular perception, guns were in widespread use in Africa. They were typically of poor quality, as it was a policy of European nations to provide poor-quality merchandise.
One reason the slave trade was so successful was the widespread use of guns in Africa. West Africa Fortification was a major part of defense, integral to warfare. Massive earthworks were built around cities and settlements in West Africa, typically defended by soldiers with bows and poison-tipped arrows. The earthworks are some of the largest man-made structures in Africa and the world, such as the walls of Benin and Sungbo's Eredo. In Central Africa, in the Angola region, one finds a preference for ditches, which were more successful for defense in wars against Europeans. African infantry did not include only men. The state of Dahomey included all-female units, the so-called Dahomey Amazons, who were personal bodyguards of the king. The Queen Mother of Benin had her own personal army, 'The Queen's Own.' Biologicals were extensively used in many parts of Africa, most of the time in the form of poisoned arrows, but also as powder spread on the war front or in the form of the poisoning of the horses and water supply of the opponents. In Borgu, there were specific mixtures to kill, for hypnosis, to make the enemy bold, and to act as an antidote against the enemies' poison. A specific class of medicine-men was responsible for the making of the biologicals. In South Sudan, the people of the Koalit Hills kept their country free of Arab invasions by using tsetse flies as a weapon of war. Several accounts can give us an idea of the efficiency of the biologicals. For example, Mockler-Ferryman in 1892 commented on the Dahomean invasion of Borgu that "their (Borgawa) poisoned arrows enabled them to hold their own with the forces of Dahomey notwithstanding the latter's muskets." The same scenario happened to Portuguese raiders in Senegambia when they were defeated by Mali's Gambian forces, and to John Hawkins in Sierra Leone, where he lost a number of his men to poisoned arrows. Northern Africa, Nile Valley, and the Sahel Ancient Egyptian weaponry included bows and arrows, maces, clubs, scimitars, swords, shields, and knives. Body armor was made of bands of leather, sometimes overlaid with scales of copper. Horse-drawn chariots were used to deliver archers onto the battlefield. Weapons were initially made with stone, wood, and copper, later bronze, and later iron. In 1260, the first portable hand cannons (midfa) loaded with explosive gunpowder, the first example of a handgun and portable firearm, were used by the Egyptians to repel the Mongols at the Battle of Ain Jalut. The cannons had an explosive gunpowder composition almost identical to the ideal compositions for modern explosive gunpowder. They were also the first to use dissolved talc for fire protection, and they wore fireproof clothing, to which gunpowder cartridges were attached. Aksumite weapons were mainly made of iron: iron spears, iron swords, and iron knives called poniards. Shields were made of buffalo hide. In the latter part of the 19th century, Ethiopia made a concerted effort to modernize its army. It acquired repeating rifles, artillery, and machine guns. This modernization facilitated the Ethiopian victory over the Italians at the Tigray town of Adwa in the 1896 Battle of Adwa. Ethiopia was one of the few African countries to use artillery in colonial wars. There is also a breastplate armor made of the horny back plates of a crocodile from Egypt, which was given to the Pitt Rivers Museum as part of the archaeological Founding Collection in 1884.
The first use of cannons as siege machines was at the siege of Sijilmasa in 1274, according to the 14th-century historian Ibn Khaldun. The Sahelian military consisted of cavalry and infantry. The cavalry consisted of shielded, mounted soldiers. Body armor was chain mail or heavy quilted cotton. Helmets were made of leather, elephant hide, or hippo hide. Imported horses were shielded. Horse armor consisted of quilted cotton packed with kapok fiber and a copper face plate. The stirrups could be used as weapons to disembowel enemy infantry or mounted soldiers at close range. Weapons included the sword, lance, battle-axe, and broad-bladed spear. The infantry were armed with bows and iron-tipped arrows. Iron tips were usually laced with poison from the West African plant Strophanthus hispidus. Quivers of 40–50 arrows would be carried into battle. Later, muskets were introduced. Southern Africa The numerous irregular conflicts in the region during the 1800s saw the emergence of the Afrikaner "kommando" system of mounted mobile light infantry called up from the male population. These would see extensive action during the Xhosa Wars and the First and Second Boer Wars and became the origin of the modern commando elite light infantry type. From the 1960s to the 1980s, South Africa pursued research into weapons of mass destruction, including nuclear, biological, and chemical weapons. Six nuclear weapons were assembled. With the anticipated changeover to a majority-elected government in the 1990s, the South African government dismantled all of its nuclear weapons, becoming the first nation in the world to voluntarily give up nuclear arms it had developed itself. Commerce Numerous metal objects and other items were used as currency in Africa, including cowrie shells, salt, gold (dust or solid), copper, ingots, iron chains, tips of iron spears, iron knives, and cloth in various shapes (square, rolled, etc.). Copper was as valuable as gold in Africa. Copper was less widespread and more difficult to acquire than gold, except in Central Africa. Other valuable metals included lead and tin. Salt was also as valuable as gold. Because of its scarcity, it was used as currency. Northern Africa and the Nile Valley Carthage imported gold, copper, ivory, and slaves from tropical Africa. Carthage exported salt, cloth, and metal goods. Before camels were used in the trans-Saharan trade, pack animals such as oxen, donkeys, mules, and horses were utilized. Extensive use of camels began in the 1st century CE. Carthage minted gold, silver, bronze, and electrum (a mix of gold and silver) coins, mainly for fighting wars with the Greeks and Romans. Most of their fighting force were mercenaries, who had to be paid. Islamic North Africa made use of the Almoravid dinar and Fatimid dinar, gold coins. The Almoravid dinar and the Fatimid dinar were minted from gold from the Sahelian empires. The ducat of Genoa and Venice and the florin of Florence were also minted from gold from the Sahelian empires. Ancient Egypt imported ivory, gold, incense, hardwood, and ostrich feathers. Nubia exported gold, cotton/cotton cloth, ostrich feathers, leopard skins, ivory, ebony, and iron/iron weapons. West Africa and the Sahel Cowries have been used as currency in West Africa since the 11th century, when their use was first recorded near Old Ghana. Their use may have been much older. Sijilmasa, in present-day Morocco, seems to have been a major source of cowries in the trans-Saharan trade. In western Africa, shell money was the usual tender up until the middle of the 19th century.
Before the abolition of the slave trade, there were large shipments of cowry shells to some of the English ports for reshipment to the slave coast. It was also common in West Central Africa as the currency of the Kingdom of Kongo, called locally nzimbu. As the value of the cowry was much greater in West Africa than in the regions from which the supply was obtained, the trade was extremely lucrative. In some cases the gains are said to have been 500%. The use of the cowry currency gradually spread inland in Africa. By about 1850, Heinrich Barth found it fairly widespread in Kano, Kuka, Gando, and even Timbuktu. Barth relates that in Muniyoma, one of the ancient divisions of Bornu, the king's revenue was estimated at 30,000,000 shells, with every adult male being required to pay annually 1000 shells for himself, 1000 for every pack-ox, and 2000 for every slave in his possession. In the countries on the coast, the shells were fastened together in strings of 40 or 100 each, so that fifty or twenty strings represented a dollar; but in the interior, they were laboriously counted one by one, or, if the trader were expert, five by five. The districts mentioned above received their supply of kurdi, as they were called, from the west coast; but the regions to the north of Unyamwezi, where they were in use under the name of simbi, were dependent on Muslim traders from Zanzibar. The shells were used in the remoter parts of Africa until the early 20th century but gave way to modern currencies. The shell of the land snail Achatina monetaria, cut into circles with an open center, was also used as coin in Benguella, Portuguese West Africa. The Ghana Empire, Mali Empire, and Songhay Empire were major exporters of gold, iron, tin, slaves, spears, javelins, arrows, bows, and whips of hippo hide. They imported salt, horses, wheat, raisins, cowries, dates, copper, henna, olives, tanned hides, silk, cloth, brocade, Venetian pearls, mirrors, and tobacco. All these empires massively influenced world economics, since they controlled 80% of the world's gold, on which Europe and the Islamic world depended (gold from the Mali Empire was the main source for the manufacture of coins in the Muslim world and Europe). European states even took loans from African states, as the gold from West Africa funded the trade imbalance with the East for spices. Some of the currencies used in the Sahel included paper debt or IOUs for long-distance trade, gold coins, and the mitkal (gold dust) currency. Gold dust that weighed 4.6 grams was equivalent to 500 or 3,000 cowries. Square cloth, four spans on each side, called chigguiya, was used around the Senegal River. In Kanem, cloth was the major currency. A cloth currency called dandi was also in widespread use. The Akan used gold weights that they called "Sika-yôbwê" (stone of gold) as their currency. They used a system of computing weight consisting of 11 units. The values of the weights were also numerically represented using two signs. East Africa Aksum exported ivory, glass crystal, brass, copper, myrrh, and frankincense. The Aksumites imported silver, gold, olive oil, and wine. The Aksumites produced coins around 270 CE, under the rule of king Endubis. Aksumite coins were issued in gold, silver, and bronze. The Swahili served as middlemen. They connected African goods to Asian markets and Asian goods to African markets. Their most in-demand export was ivory. They exported ambergris, gold, leopard skins, slaves, and tortoise shell. They imported pottery and glassware from Asia.
They also manufactured items such as cotton, glass and shell beads. Imports and locally manufactured goods were used as trade to acquire African goods. Trade links included the Arabian Peninsula, Persia, India, and China. The Swahili also minted silver and copper coins. Glass manufacturing West Africa Igbo Olokun, also known as Olokun Grove, may be one of the earliest workshops for producing glass in West Africa. Glass production may have begun during, if not before, the 11th century. The 11th–15th centuries were the peak of glass production. High lime, high alumina (HLHA) and low lime, high alumina (LLHA) glass are distinct compositions that were developed using locally sourced recipes, raw materials, and pyrotechnology. The presence of HLHA glass beads discovered throughout West Africa (e.g., Igbo-Ukwu in southern Nigeria, Gao and Essouk in Mali, and Kissi in Burkina Faso), after the ninth century CE, reveals the broader importance of this glass industry in the region and shows its participation in regional trade networks (e.g., trans-Saharan trade, trans-Atlantic trade). Glass beads served as "the currency for negotiating political power, economic relations, and cultural/spiritual values" for "Yoruba, West Africans, and the African diaspora." Science and traditional worldviews Bandama and Babalola (2023) state: The view of science as "embedded practice", intimately connected with ritual, for example, is considered "ascientific", "pseudo-science", or "magic" in Western perspective. In Africa, there is a strong connection between the physical and the terrestrial worlds. The deities and gods are the emissaries of the supreme God and the patrons in charge of the workability of the processes involved. In the Ile-Ife pantheon, for example, Olokun—the goddess of wealth—is considered the patron of the glass industry and is therefore consulted. Sacrifices are offered to appease her for a successful run. The same is true for ironworking. Current scholarship has reinforced the contributions of ancient Africa to the global history of science and technology. Recent scientific research Ahmed Zewail won the 1999 Nobel Prize in chemistry for his work in femtochemistry, methods that allow the description of changing states on femtosecond timescales (extremely short fractions of a second). The Democratic Republic of the Congo has a rocketry program called Troposphere. Currently, forty percent of African-born scientists live in OECD countries, predominantly NATO and EU countries. This has been described as an African brain drain. Sub-Saharan African countries spent on average 0.3% of their GDP on S&T (Science and Technology) in 2007. This represents a combined increase from US$1.8bn in 2002 to US$2.8bn in 2007. North African countries spend a comparative 0.4% of GDP on research, an increase from US$2.6bn in 2002 to US$3.3bn in 2007. Excluding South Africa, the continent has increased its collective science funding by about 50% in the last decade. Notably outstripping its neighbor states, South Africa spends 0.87% of GDP on science and technology research. Although technology parks have a long history in the US and Europe, their presence across Africa is still limited, as the continent currently lags behind other regions of the world in terms of funding technological development and innovation. Only seven countries (Morocco, Botswana, Egypt, Senegal, Madagascar, Tunisia and South Africa) have made technology park construction an integral piece of their development goals.
Africa in Science (AiS) Africa in Science (AiS) is an online data aggregator site and think tank founded in January 2021 by Aymen Idris, who currently serves as chairman. The focus of the AiS think tank is on scientometric analysis of science in Africa, and the main aim of the website is to monitor and display metrics such as the AiS Index (AiSi) and AiS Badge, which estimate and visualize the research output of research institutes and universities in a specific country in Africa, and their websites. Science and technology by region North Africa Science and technology in Morocco West Africa Science and technology in Cabo Verde East Africa Science and technology in Malawi Science and technology in Tanzania Science and technology in Uganda Science and technology in Zimbabwe Southern Africa Science and technology in Botswana Science and technology in South Africa See also History of Space in Africa Maritime history of Somalia Timeline of Islamic science and technology Science in medieval Islam Science in Asia Science in Europe References External links Timbuktu: Recapturing the Wisdom and History of a Region at Youtube, created and posted by the Ford Foundation Ancient Manuscripts from the Desert Libraries of Timbuktu at the Library of Congress, US African Fractals: Modern computing and indigenous design by Ron Eglash, at ted.com Brief description of the Yoruba number system at the Prentice Hall website Cambridge Museum: African Textile Collection Profile of William Kamkwamba, TED Fellow, at Wired.com African Influences in Modern Art, Metropolitan Museum of Art History of science and technology in Africa History of science History of technology History of Africa Culture of Africa
History of science and technology in Africa
Technology
22,271
17,960,231
https://en.wikipedia.org/wiki/Algebraic%20signal%20processing
Algebraic signal processing (ASP) is an emerging area of theoretical signal processing (SP). In the algebraic theory of signal processing, a set of filters is treated as an (abstract) algebra, a set of signals is treated as a module or vector space, and convolution is treated as an algebra representation. The advantage of algebraic signal processing is its generality and portability. History In the original formulation of algebraic signal processing by Puschel and Moura, the signals are collected in an A-module for some algebra A of filters, and filtering is given by the action of A on the A-module. Definitions Let K be a field, for instance the complex numbers, and let A be a K-algebra (i.e. a vector space over K with a binary operation that is linear in both arguments) treated as a set of filters. Suppose M is a vector space representing a set of signals. A representation of A consists of an algebra homomorphism ρ: A → End(M), where End(M) is the algebra of linear transformations of M with composition (equivalent, in the finite-dimensional case, to matrix multiplication). For convenience, we write ρ_a for the endomorphism ρ(a). To be an algebra homomorphism, ρ must not only be a linear transformation, but also satisfy the property ρ_(ab) = ρ_a ∘ ρ_b for all filters a and b. Given a signal x in M, convolution of the signal by a filter a yields a new signal ρ_a(x). Some additional terminology is needed from the representation theory of algebras. A subset G of A is said to generate the algebra if every element of A can be represented as a polynomial in the elements of G. The image ρ_g of a generator g is called a shift operator. In practically all examples, convolutions are formed as polynomials in the shift operators. However, this is not necessarily the case for a representation of an arbitrary algebra. Examples Discrete Signal Processing In discrete signal processing (DSP), the signal space is the set of complex-valued functions x on the integers with bounded energy (i.e. square-integrable functions), meaning that the infinite series Σ_n |x[n]|^2 converges, where |·| is the modulus of a complex number. The shift operator is given by the linear endomorphism (Sx)[n] = x[n−1]. The filter space is the algebra of polynomials in S with complex coefficients, and convolution by a filter h = Σ_k h_k S^k, an element of this algebra, is given by (hx)[n] = Σ_k h_k x[n−k]. Filtering a signal x first by h and then by a second filter h′ is the same as filtering x by the product filter h′h, because ρ_(h′h) = ρ_(h′) ∘ ρ_h. Graph Signal Processing A weighted graph is an undirected graph with a pseudometric on the node set. A graph signal is simply a real-valued function on the set of nodes of the graph. In graph neural networks, graph signals are sometimes called features. The signal space is the set of all graph signals on the node set of the graph. The filter algebra is the algebra of polynomials in one indeterminate. There are a few possible choices for a graph shift operator (GSO). The (un)normalized weighted adjacency matrix of the graph is a popular choice, as is the (un)normalized graph Laplacian. The choice is dependent on performance and design considerations. If S is the GSO, then a graph convolution is the linear transformation h(S) = Σ_k h_k S^k for some coefficients h_k, and convolution of a graph signal x by the filter h yields a new graph signal h(S)x. Other Examples Other mathematical objects with their own proposed signal-processing frameworks are algebraic signal models. These objects include quivers, graphons, semilattices, finite groups, Lie groups, and others.
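To make the graph signal processing model above concrete, the following is a minimal sketch (assuming NumPy is available; the adjacency matrix, signal values, and filter taps are invented for illustration, not taken from the literature) of a graph convolution realised as a polynomial in the chosen shift operator:

```python
import numpy as np

# Illustrative 4-node undirected weighted graph; its adjacency matrix is used
# as the graph shift operator (GSO). Values are arbitrary.
A = np.array([[0.0, 1.0, 0.0, 2.0],
              [1.0, 0.0, 3.0, 0.0],
              [0.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 1.0, 0.0]])

x = np.array([1.0, -2.0, 0.5, 3.0])   # a graph signal: one real value per node
h = [0.5, 1.0, -0.25]                  # filter taps h_0, h_1, h_2 (arbitrary)

def graph_convolution(S, taps, signal):
    """Apply the polynomial filter h(S) = sum_k taps[k] * S^k to the signal."""
    out = np.zeros_like(signal)
    Sx = signal.copy()                 # S^0 applied to the signal
    for hk in taps:
        out = out + hk * Sx
        Sx = S @ Sx                    # next power of the shift applied to the signal
    return out

y = graph_convolution(A, h, x)         # filtered graph signal
print(y)
```

The same routine works unchanged if the graph Laplacian, rather than the adjacency matrix, is passed in as the shift operator.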
Intertwining Maps In the framework of representation theory, relationships between two representations of the same algebra are described with intertwining maps, which in the context of signal processing translate to transformations of signals that respect the algebra structure. Suppose (M1, ρ1) and (M2, ρ2) are two different representations of A. An intertwining map is a linear transformation Φ: M1 → M2 such that Φ(ρ1(a)x) = ρ2(a)Φ(x) for every filter a in A and every signal x in M1. Intuitively, this means that filtering a signal by a and then transforming it with Φ is equivalent to first transforming the signal with Φ, then filtering by a. The z-transform is a prototypical example of an intertwining map. Algebraic Neural Networks Inspired by a recent perspective that popular graph neural network (GNN) architectures are in fact convolutional neural networks (CNNs), recent work has focused on developing novel neural network architectures from the algebraic point of view. An algebraic neural network is a composition of algebraic convolutions, possibly with multiple features and feature aggregations, and nonlinearities. References External links Smart Project: Algebraic Theory of Signal Processing at the Department of Electrical and Computer Engineering at Carnegie Mellon University. Lecture 12: "Algebraic Neural Networks," University of Pennsylvania (ESE 514). Algebra Signal processing
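As an illustration of the intertwining condition above, the following sketch (assuming NumPy; the transform size is arbitrary) checks numerically that the DFT matrix intertwines the cyclic shift acting in the time domain with a diagonal, pointwise-multiplication action in the frequency domain:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix
S = np.roll(np.eye(N), 1, axis=0)              # cyclic shift: (Sx)[i] = x[i-1]
D = np.diag(np.exp(-2j * np.pi * n / N))       # action of the shift in the frequency domain

# Intertwining condition F S = D F: shifting then transforming equals
# transforming then applying the corresponding frequency-domain operator.
print(np.allclose(F @ S, D @ F))               # True
```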
Algebraic signal processing
Mathematics,Technology,Engineering
911
4,219,501
https://en.wikipedia.org/wiki/Andon%20%28manufacturing%29
In manufacturing, andon () is a system which notifies managerial, maintenance, and other workers of a quality or process problem. The alert can be activated manually by a worker using a pullcord or button or may be activated automatically by the production equipment itself. The system may include a means to pause production so the issue can be corrected. Some modern alert systems incorporate audio alarms, text, or other displays; stack lights are among the most commonly used. “Andon” is a loanword from Japanese, originally meaning paper lantern; Japanese manufacturers began its quality-control usage. Details An andon system is one of the principal elements of the Jidoka quality control method pioneered by Toyota as part of the Toyota Production System and is therefore now part of the lean production approach. The principle of andon works as follows: if a production issue occurs on the production line, the affected workstation operator triggers an alert by pulling down the andon cord. Since 2014, however, Toyota has been gradually replacing the andon cord with an "andon button", which can be operated wirelessly and reduces the clutter of tangled cables on the production floor, helping to avoid tripping incidents. It gives workers the ability, and moreover the empowerment, to stop production when a defect is found and immediately call for assistance. Common reasons for manual activation of the andon are: Part shortage Defects created or found Tools/machines malfunction Existence of a safety problem. All work on the production line is stopped until a solution has been found. The alerts may be logged to a database so that they can be studied as part of a continual improvement process. Once the problem has been diagnosed and fixed, a second pull of the andon cord authorizes production to resume. The system typically indicates where the alert was generated, and may also provide a description of the issue. Modern andon systems can include text, graphics, or audio elements. Audio alerts may be done with coded tones, music with different tunes corresponding to the various alerts, or prerecorded verbal messages. History The concept/process of giving a non-management (production line) worker the authority to stop the production line because of a suspected quality issue is often attributed to W. Edwards Deming and others who developed what became Kaizen after World War II. Many attribute Japan's rise from wartime ashes to the world's second largest economy (the Japanese economic miracle) to their post-war industrial innovations: Better design of products to improve service Higher level of uniform product quality Improvement of product testing in the workplace and in research centers Greater sales through side [global] markets See also Stack light (commonly used in andon and lean manufacturing initiatives) References Japanese business terms Lean manufacturing Manufacturing in Japan Toyota Production System
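As a toy illustration of the trigger, stop, log, and resume flow described above (the class, field, and station names are invented and not modelled on any particular vendor's andon product), an event log might be sketched as follows:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AndonBoard:
    """Minimal sketch: triggering stops the line and logs the event;
    resolving clears the alert and resumes production."""
    line_running: bool = True
    log: list = field(default_factory=list)

    def trigger(self, station: str, reason: str) -> None:
        self.line_running = False
        self.log.append((datetime.now(), station, reason, "STOPPED"))

    def resolve(self, station: str) -> None:
        self.line_running = True
        self.log.append((datetime.now(), station, "", "RESUMED"))

board = AndonBoard()
board.trigger("Station 12", "part shortage")   # first pull: line stops, event logged
board.resolve("Station 12")                    # second pull: production resumes
for entry in board.log:
    print(entry)
```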
Andon (manufacturing)
Engineering
568
14,271,033
https://en.wikipedia.org/wiki/Bulk%20temperature
In thermofluid dynamics, the bulk temperature, or the average bulk temperature of the thermal fluid, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts. The concept of the bulk temperature is that adiabatic mixing of the fluid from a given cross section of the duct will result in some equilibrium temperature that accurately reflects the average temperature of the moving fluid, more so than a simple average like the film temperature. References Continuum mechanics Heat transfer Temperature
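For reference, a standard formulation found in heat-transfer texts (not stated explicitly in the article above, so it should be read as a supplement rather than the article's own definition) expresses the bulk, or mixing-cup, temperature as an enthalpy-flow-weighted average over the duct cross-section A:

```latex
T_b \;=\; \frac{\int_{A} \rho\, u\, c_p\, T \, \mathrm{d}A}{\int_{A} \rho\, u\, c_p \, \mathrm{d}A}
\qquad\text{which, for constant } \rho \text{ and } c_p, \text{ reduces to }\qquad
T_b \;=\; \frac{\int_{A} u\, T \, \mathrm{d}A}{\int_{A} u \, \mathrm{d}A}.
```

Here u is the local axial velocity and T the local temperature at each point of the cross-section.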
Bulk temperature
Physics,Chemistry
109
12,918,103
https://en.wikipedia.org/wiki/Cenarchaeum
Cenarchaeum is a monotypic genus of archaeans in the family Cenarchaeaceae. The marine archaean Cenarchaeum symbiosum is psychrophilic and is found inhabiting marine sponges. Cenarchaeum symbiosum was initially detected as a major symbiotic microorganism living within (it is an endosymbiont of) the sponge Axinella mexicana. It has been ubiquitously detected in the world's oceans at lower abundances, while in some genera of marine sponges it is one of the most abundant microbiome members. Its genome sequence and diversity have been investigated in detail, revealing unique metabolic products and its role in ammonia-oxidizing activities. Genome The genome of C. symbiosum is estimated to be 2.02 million bp in length, with a predicted 2,011 genes. Ecology Cenarchaeum symbiosum is a psychrophilic organism capable of surviving and proliferating at low temperatures, usually ranging from 7 to 19 °C. C. symbiosum has a symbiotic relationship with certain varieties of sponge species, usually living at depths of 10–20 meters, typically near California. References Further reading Archaea genera Thermoproteota Enigmatic archaea taxa
Cenarchaeum
Biology
272
36,197,584
https://en.wikipedia.org/wiki/Misleading%20graph
In statistics, a misleading graph, also known as a distorted graph, is a graph that misrepresents data, constituting a misuse of statistics and with the result that an incorrect conclusion may be derived from it. Graphs may be misleading by being excessively complex or poorly constructed. Even when constructed to display the characteristics of their data accurately, graphs can be subject to different interpretations, or unintended kinds of data can seemingly, and ultimately erroneously, be derived. Misleading graphs may be created intentionally to hinder the proper interpretation of data or accidentally due to unfamiliarity with graphing software, misinterpretation of data, or because data cannot be accurately conveyed. Misleading graphs are often used in false advertising. One of the first authors to write about misleading graphs was Darrell Huff, publisher of the 1954 book How to Lie with Statistics. The field of data visualization describes ways to present information that avoids creating misleading graphs. Misleading graph methods There are numerous ways in which a misleading graph may be constructed. Excessive usage The use of graphs where they are not needed can lead to unnecessary confusion/interpretation. Generally, the more explanation a graph needs, the less the graph itself is needed. Graphs do not always convey information better than tables. Biased labeling The use of biased or loaded words in the graph's title, axis labels, or caption may inappropriately prime the reader. Fabricated trends Similarly, attempting to draw trend lines through uncorrelated data may mislead the reader into believing a trend exists where there is none. This can be both the result of intentionally attempting to mislead the reader or due to the phenomenon of illusory correlation. Pie chart Comparing pie charts of different sizes can be misleading, as people cannot accurately read the comparative areas of circles. The usage of thin slices, which are hard to discern, may be difficult to interpret. The usage of percentages as labels on a pie chart can be misleading when the sample size is small. Making a pie chart 3D or adding a slant will make interpretation difficult due to the distorted effect of perspective. Bar-charted pie graphs, in which the height of the slices is varied, may confuse the reader. Comparing pie charts Comparing data on bar charts is generally much easier. In the image below, it is very hard to tell whether the blue sector is bigger than the green sector on the pie charts. 3D Pie chart slice perspective A perspective (3D) pie chart is used to give the chart a 3D look. Often used for aesthetic reasons, the third dimension does not improve the reading of the data; on the contrary, these plots are difficult to interpret because of the distorted effect of perspective associated with the third dimension. The use of superfluous dimensions not used to display the data of interest is discouraged for charts in general, not only for pie charts. In a 3D pie chart, the slices that are closer to the reader appear to be larger than those in the back due to the angle at which they're presented. This effect makes readers less accurate in judging the relative magnitude of each slice when using 3D than when using 2D. [Image comparison: a misleading (3D) pie chart versus a regular pie chart.] Item C appears to be at least as large as Item A in the misleading pie chart, whereas in actuality, it is less than half as large. Item D looks a lot larger than item B, but they are the same size.
Edward Tufte, a prominent American statistician, noted why tables may be preferred to pie charts in The Visual Display of Quantitative Information: Tables are preferable to graphics for many small data sets. A table is nearly always better than a dumb pie chart; the only thing worse than a pie chart is several of them, for then the viewer is asked to compare quantities located in spatial disarray both within and between pies – Given their low data-density and failure to order numbers along a visual dimension, pie charts should never be used. Improper scaling Pictograms used in bar graphs should not be scaled uniformly, as this creates a perceptually misleading comparison. The area of the pictogram is interpreted instead of only its height or width. This causes the scaling to make the difference appear to be squared. [Image comparison: improper scaling of a 2D pictogram in a bar graph, shown alongside a regular bar graph for comparison.] In the improperly scaled pictogram bar graph, the image for B is actually 9 times as large as A. [Image: 2D shape scaling comparison for a square, a circle, and a triangle.] The perceived size increases when scaling. The effect of improper scaling of pictograms is further exemplified when the pictogram has 3 dimensions, in which case the effect is cubed. [Image: improperly scaled 3D pictogram graph of house sales.] The graph of house sales (left) is misleading. It appears that home sales have grown eightfold in 2001 over the previous year, whereas they have actually grown twofold. Besides, the number of sales is not specified. An improperly scaled pictogram may also suggest that the item itself has changed in size. [Image comparison: a misleading pictogram versus a regular pictogram.] Assuming the pictures represent equivalent quantities, the misleading graph shows that there are more bananas because the bananas occupy the most area and are furthest to the right. Logarithmic scaling Logarithmic (or log) scales are a valid means of representing data. But when used without being clearly labeled as log scales or displayed to a reader unfamiliar with them, they can be misleading. Log scales put the data values in terms of a chosen number (the base of the log) to a particular power. The base is often e (2.71828...) or 10. For example, log scales may give a height of 1 for a value of 10 in the data and a height of 6 for a value of 1,000,000 (10^6) in the data. Log scales and variants are commonly used, for instance, for the volcanic explosivity index, the Richter scale for earthquakes, the magnitude of stars, and the pH of acidic and alkaline solutions. Even in these cases, the log scale can make the data less apparent to the eye. Often the reason for the use of log scales is that the graph's author wishes to display vastly different scales on the same axis. Without log scales, comparing quantities such as 1,000 (10^3) versus 1,000,000,000 (10^9) becomes visually impractical. A graph with a log scale that was not clearly labeled as such, or a graph with a log scale presented to a viewer who did not know logarithmic scales, would generally result in a representation that made data values look of similar size while in fact being of widely differing magnitudes. Misuse of a log scale can make vastly different values (such as 10 and 10,000) appear close together (on a base-10 log scale, they would be only 1 and 4). Or it can make small values appear to be negative due to how logarithmic scales represent numbers smaller than the base.
Misuse of log scales may also cause relationships between quantities to appear linear whilst those relationships are exponentials or power laws that rise very rapidly towards higher values. It has been stated, although mainly in a humorous way, that "anything looks linear on a log-log plot with a thick marker pen". [Image comparison of linear and logarithmic scales for identical data: a linear scale versus a logarithmic scale.] Both graphs show an identical exponential function of f(x) = 2^x. The graph on the left uses a linear scale, showing clearly an exponential trend. The graph on the right, however, uses a logarithmic scale, which generates a straight line. If the graph viewer were not aware of this, the graph would appear to show a linear trend. Truncated graph A truncated graph (also known as a torn graph) has a y axis that does not start at 0. These graphs can create the impression of important change where there is relatively little change. While truncated graphs can be used to overdraw differences or to save space, their use is often discouraged. Commercial software such as MS Excel will tend to truncate graphs by default if the values are all within a narrow range, as in this example. To show relative differences in values over time, an index chart can be used. Truncated diagrams will always distort the underlying numbers visually. Several studies found that even if people were correctly informed that the y-axis was truncated, they still overestimated the actual differences, often substantially. [Image comparison: a truncated bar graph versus a regular bar graph.] These graphs display identical data; however, in the truncated bar graph on the left, the data appear to show significant differences, whereas, in the regular bar graph on the right, these differences are hardly visible. There are several ways to indicate y-axis breaks. [Image: examples of indicating a y-axis break.] Axis changes [Image comparison: changing the y-axis maximum (original graph, smaller maximum, larger maximum).] Changing the y-axis maximum affects how the graph appears. A higher maximum will cause the graph to appear to have less volatility, less growth, and a less steep line than a lower maximum. [Image comparison: changing the ratio of graph dimensions (original graph; half the width and twice the height; twice the width and half the height).] Changing the ratio of a graph's dimensions will affect how the graph appears. No scale The scales of a graph are often used to exaggerate or minimize differences. [Image: misleading bar graphs with no scale, one showing less difference and one showing more difference.] The lack of a starting value for the y axis makes it unclear whether the graph is truncated. Additionally, the lack of tick marks prevents the reader from determining whether the graph bars are properly scaled. Without a scale, the visual difference between the bars can be easily manipulated. [Image: misleading line graphs with no scale, suggesting volatility, steady fast growth, and slow growth respectively.] Though all three graphs share the same data, and hence the actual slope of the (x, y) data is the same, the way that the data is plotted can change the visual appearance of the angle made by the line on the graph. This is because each plot has a different scale on its vertical axis. Because the scale is not shown, these graphs can be misleading.
Improper intervals or units The intervals and units used in a graph may be manipulated to exaggerate or play down the expression of change. Omitting data Graphs created with omitted data remove information from which to base a conclusion. [Image comparison: a scatter plot with missing categories versus a regular scatter plot.] In the scatter plot with missing categories on the left, the growth appears to be more linear with less variation. In financial reports, negative returns or data that do not correlate with a positive outlook may be excluded to create a more favorable visual impression. 3D The use of a superfluous third dimension, which does not contain information, is strongly discouraged, as it may confuse the reader. Complexity Graphs are designed to allow easier interpretation of statistical data. However, graphs with excessive complexity can obfuscate the data and make interpretation difficult. Poor construction Poorly constructed graphs can make data difficult to discern and thus interpret. Extrapolation Misleading graphs may be used in turn to extrapolate misleading trends. Measuring distortion Several methods have been developed to determine whether graphs are distorted and to quantify this distortion. Lie factor Lie factor = (size of effect shown in graphic) / (size of effect in data), where the size of an effect is measured as the relative change it depicts, |second value − first value| / first value. A graph with a high lie factor (>1) would exaggerate change in the data it represents, while one with a small lie factor (>0, <1) would obscure change in the data. A perfectly accurate graph would exhibit a lie factor of 1. Graph discrepancy index Graph discrepancy index (GDI) = (a/b − 1) × 100%, where a is the percentage change depicted in the graph and b is the percentage change in the data. The graph discrepancy index, also known as the graph distortion index (GDI), was originally proposed by Paul John Steinbart in 1998. GDI is calculated as a percentage ranging from −100% to positive infinity, with zero percent indicating that the graph has been properly constructed, and anything outside the ±5% margin is considered to be distorted. Research into the usage of GDI as a measure of graphics distortion has found it to be inconsistent and discontinuous, making the usage of GDI as a measurement for comparisons difficult. Data-ink ratio The data-ink ratio should be relatively high. Otherwise, the chart may have unnecessary graphics. Data density The data density should be relatively high, otherwise a table may be better suited for displaying the data. Usage in finance and corporate reports Graphs are useful in the summary and interpretation of financial data. Graphs allow trends in large data sets to be seen while also allowing the data to be interpreted by non-specialists. Graphs are often used in corporate annual reports as a form of impression management. In the United States, graphs do not have to be audited, as they fall under AU Section 550 Other Information in Documents Containing Audited Financial Statements. Several published studies have looked at the usage of graphs in corporate reports for different corporations in different countries and have found frequent usage of improper design, selectivity, and measurement distortion within these reports. The presence of misleading graphs in annual reports has led to requests for standards to be set. Research has found that while readers with poor levels of financial understanding have a greater chance of being misinformed by misleading graphs, even those with financial understanding, such as loan officers, may be misled. Academia The perception of graphs is studied in psychophysics, cognitive psychology, and computational vision.
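The following sketch (plain Python; the bar-chart numbers are invented for illustration) computes the two distortion measures defined under Measuring distortion above for a hypothetical truncated bar chart:

```python
def lie_factor(effect_shown, effect_in_data):
    """Tufte's lie factor: size of effect shown in the graphic divided by
    the size of the effect in the data. A value of 1 means no distortion."""
    return effect_shown / effect_in_data

def graph_discrepancy_index(effect_shown, effect_in_data):
    """GDI expressed as a percentage; 0% indicates a properly constructed graph."""
    return (effect_shown / effect_in_data - 1.0) * 100.0

# Hypothetical example: the data rise from 100 to 104 (a 4% change), but the
# bars are drawn against a y-axis truncated at 98, so the bar heights go from
# 2 to 6 (a 200% change).
effect_in_data = (104 - 100) / 100      # 0.04
effect_shown = (6 - 2) / 2              # 2.0

print(lie_factor(effect_shown, effect_in_data))               # 50.0
print(graph_discrepancy_index(effect_shown, effect_in_data))  # 4900.0
```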
See also Chartjunk Impression management Misuse of statistics Simpson's paradox How to Lie with Statistics References Books Further reading A discussion of misleading graphs, Mark Harbison, Sacramento City College External links Gallery of Data Visualization The Best and Worst of Statistical Graphics, York University Misuse of statistics Ethics and statistics
Misleading graph
Technology
3,002
184,495
https://en.wikipedia.org/wiki/Biefeld%E2%80%93Brown%20effect
The Biefeld–Brown effect is an electrical phenomenon, first noticed by inventor Thomas Townsend Brown in the 1920s, where high voltage applied to the electrodes of an asymmetric capacitor causes a net propulsive force toward the smaller electrode. Brown believed this effect was an anti-gravity force and referred to it as "electrogravitics", based on its being an electricity/gravity phenomenon. It has since been determined that the force is due to ionic wind that transfers its momentum to surrounding neutral particles. Overview It is generally assumed that the Biefeld–Brown effect produces an ionic wind that transfers its momentum to surrounding neutral particles. It describes a force observed on an asymmetric capacitor when high voltage is applied to the capacitor's electrodes. Once suitably charged up to high DC potentials, a thrust is generated at the negative terminal, pushing it away from the positive terminal. The use of an asymmetric capacitor, with the negative electrode being larger than the positive electrode, allowed for more thrust to be produced in the direction from the low-flux to the high-flux region compared to a conventional capacitor. These asymmetric capacitors became known as Asymmetrical Capacitor Thrusters (ACT). The Biefeld–Brown effect can be observed in ionocrafts and lifters, which utilize the effect to produce thrust in the air without requiring any combustion or moving parts. History The "Biefeld–Brown effect" was the name given to a phenomenon observed by Thomas Townsend Brown while he was experimenting with X-ray tubes during the 1920s, when he was still in high school. When he applied a high voltage electrical charge to a Coolidge tube that he placed on a scale, Brown noticed a difference in the tube's mass depending on orientation, implying some kind of net force. This discovery caused him to assume that he had somehow influenced gravity electronically and led him to design a propulsion system based on this phenomenon. On 15 April 1927, he applied for a patent, entitled "Method of Producing Force or Motion," that described his invention as an electrical-based method that could control gravity to produce linear force or motion. In 1929, Brown published an article for the popular American magazine Science and Invention, which detailed his work. The article also mentioned the "gravitator," an invention by Brown which produced motion without the use of electromagnetism, gears, propellers, or wheels, but instead using the principles of what he called "electro-gravitation." He also claimed that the asymmetric capacitors were capable of generating mysterious fields that interacted with the Earth's gravitational pull and envisioned a future where gravitators would propel ocean liners and even space cars. At some point this effect also gained the moniker "Biefeld–Brown effect", probably coined by Brown to claim Denison University professor of physics and astronomy Paul Alfred Biefeld as his mentor and co-experimenter. Brown attended Denison in Ohio for a year before he dropped out, and records of him even having an association with Biefeld are sketchy at best. Brown claimed that he carried out a series of experiments with Biefeld, a professor of astronomy and a former teacher of his, who he claimed was his mentor and co-experimenter at Denison University. As of 2004, Denison University claims it has no record of any such experiments, or of any association between Brown and Biefeld.
In his 1960 patent titled "Electrokinetic Apparatus," Brown refers to electrokinesis to describe the Biefeld–Brown effect, linking the phenomenon to the field of electrohydrodynamics (EHD). Brown also believed the Biefeld–Brown effect could produce an anti-gravity force, referred to as "electrogravitics" based on its being an electricity/gravity phenomenon. However, there is little evidence that supports Brown's claim on the effect's anti-gravity properties. Brown's patent made the following claims: There is a negative correlation between the distance between the plates of the capacitor and the strength of the effect, where the shorter the distance, the greater the effect. There is a positive correlation between the dielectric strength of the material between the electrodes and the strength of the effect, where the higher the strength, the greater the effect. There is a positive correlation between the area of the conductors and the strength of the effect, where the greater the area, the greater the effect. There is a positive correlation between the voltage difference between the capacitor plates and the strength of the effect, where the greater the voltage, the greater the effect. There is a positive correlation between the mass of the dielectric material and the strength of the effect, where the greater the mass, the greater the effect. In 1965, Brown filed a patent that claimed that a net force on the asymmetric capacitor can exist even in a vacuum. However, there is little experimental evidence that serves to validate his claims. Effect analysis The effect is generally believed to rely on corona discharge, which allows air molecules to become ionized near sharp points and edges. Usually, two electrodes are used with a high voltage between them, ranging from a few kilovolts up to megavolt levels, where one electrode is small or sharp, and the other larger and smoother. The most effective distance between electrodes occurs at an electric potential gradient of about 10 kV/cm, which is just below the nominal breakdown voltage of air between two sharp points, at a current density level usually referred to as the saturated corona current condition. This creates a high field gradient around the smaller, positively charged electrode. Around this electrode, ionization occurs; that is, electrons are stripped from the atoms in the surrounding medium; they are literally pulled right off by the electrode's charge. This leaves a cloud of positively charged ions in the medium, which are attracted to the negative smooth electrode by Coulomb's law, where they are neutralized again. This produces an equally scaled opposing force in the lower electrode. This effect can be used for propulsion (see EHD thruster), fluid pumps, and recently also in EHD cooling systems. The velocity achievable by such setups is limited by the momentum achievable by the ionized air, which is reduced by ion impact with neutral air. A theoretical derivation of this force has been proposed (see the external links below). However, this effect works using either polarity for the electrodes: the small or thin electrode can be either positive or negative, and the larger electrode must have the opposite polarity. On many experimental sites it is reported that the thrust effect of a lifter is actually a bit stronger when the small electrode is the positive one.
This is possibly an effect of the differences between the ionization energy and the electron affinity of the constituent parts of air, and thus of the ease with which ions are created at the 'sharp' electrode. As air pressure is removed from the system, several effects combine to reduce the force and momentum available to the system. The number of air molecules around the ionizing electrode is reduced, decreasing the quantity of ionized particles. At the same time, the number of impacts between ionized and neutral particles is reduced. Whether this increases or decreases the maximum momentum of the ionized air is not typically measured, although the force acting upon the electrodes is reduced until the glow discharge region is entered. The reduction in force is also a product of the reducing breakdown voltage of air, as a lower potential must be applied between the electrodes, thereby reducing the force dictated by Coulomb's law. During the glow discharge regime, the air becomes a conductor. Though the applied voltage and current will propagate at nearly the speed of light, the movement of the conductors themselves is almost negligible. This leads to a Coulomb force and change of momentum so small as to be zero. Below the glow discharge region, the breakdown voltage increases again, whilst the number of potential ions decreases, and the chance of impact lowers. Experiments have been conducted and found both to prove and to disprove a force at very low pressure. It is likely that the reason for this is that at very low pressures, only experiments which used very large voltages produced positive results, as a product of a greater chance of ionization of the extremely limited number of available air molecules, and a greater force from each ion from Coulomb's law; experiments which used lower voltages have a lower chance of ionization and a lower force per ion. Common to positive results is that the force observed is small in comparison to experiments conducted at standard pressure. Disputes surrounding electrogravity and ion wind Brown believed that his large, high voltage, high capacity capacitors produced an electric field strong enough to marginally interact with the Earth's gravitational pull, a phenomenon he labeled electrogravitics. Several researchers claim that conventional physics cannot adequately explain the phenomenon. The effect has become something of a cause célèbre in the UFO community, where it is seen as an example of something much more exotic than electrokinetics. William L. Moore and Charles Berlitz devoted an entire chapter of their book on the "Philadelphia Experiment" to a retelling of Brown's early work with the effect, implying he had discovered a new electrogravity effect and that it was being used by UFOs. There have been follow-ups on the claims that this force can be produced in a full vacuum, meaning it is an unknown anti-gravity force, and not just the better-known ion wind. As part of a study in 1990, U.S. Air Force researcher R. L. Talley conducted a test on a Biefeld–Brown-style capacitor to replicate the effect in a vacuum. Despite attempts that increased the driving DC voltage to about 19 kV in vacuum chambers up to 10⁻⁶ torr, Talley observed no thrust in terms of static DC potential applied to the electrodes. In 2003, NASA scientist Jonathan Campbell tested a lifter in a vacuum at 10⁻⁷ torr with a voltage of up to 50 kV, only to observe no movement from the lifter.
Campbell pointed out to a Wired magazine reporter that creating a true vacuum similar to space for the test requires tens of thousands of dollars in equipment. Around the same time in 2003, researchers from the Army Research Laboratory (ARL) tested the Biefeld–Brown effect by building four different-sized asymmetric capacitors based on simple designs found on the Internet and then applying a high voltage of around 30 kV to them. According to their report, the researchers wrote that the effects of ion wind was at least three orders of magnitude too small to account for the observed force on the asymmetric capacitor in the air. Having proposed that the Biefeld–Brown effect could theoretically be explained using ion drift instead of ion wind due to how the former involves collisions instead of ballistic trajectories, they noted these were only "scaling estimates" and more experimental and theoretical work was needed. Around ten years later, researchers from the Technical University of Liberec conducted experiments on the Biefeld–Brown effect that supported one of ARL's hypotheses that assigned ion drift as the most likely source of the generated force. In 2004, Martin Tajmar published a paper that also failed to replicate Brown's work and suggested that Brown may have instead observed the effects of a corona wind triggered by insufficient outgassing of the electrode assembly in the vacuum chamber and therefore misinterpreted the corona wind effects as a possible connection between gravitation and electromagnetism. Patents T. T. Brown was granted a number of patents on his discovery: GB300311 — A method of and an apparatus or machine for producing force or motion (accepted 1928-11-15) — Electrostatic motor (1934-09-25) — Electrokinetic apparatus (1960-08-16) — Electrokinetic transducer (1962-01-23) — Electrokinetic generator (1962-02-20) — Electrokinetic apparatus (1965-06-01) — Electric generator (1965-07-20) Historically, numerous patents have been granted for various applications of the effect, including electrostatic dust precipitation, air ionizers, and flight. was granted to G.E. Hagen in 1964 for apparatus more or less identical to the later so-called 'lifter' devices. References External links Propulsion Physical phenomena Anti-gravity Electrostatics
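To make the corona-discharge analysis earlier in this article more concrete, the following is a rough numerical sketch (not from the article itself) using one commonly quoted one-dimensional ion-drift relation for such devices, F = I·d/μ, where I is the corona current, d the electrode gap, and μ the ion mobility of air. The function name and all numerical inputs are hypothetical illustration values.

```python
def ehd_thrust(current_a, gap_m, mobility_m2_per_vs=2e-4):
    """One-dimensional ion-drift estimate of EHD thrust: F = I * d / mu.

    current_a: corona current between the electrodes (A)
    gap_m: spacing between the sharp emitter and the smooth collector (m)
    mobility_m2_per_vs: ion mobility in air, roughly 2e-4 m^2/(V*s) at 1 atm
    Returns the estimated thrust in newtons.
    """
    return current_a * gap_m / mobility_m2_per_vs

# Hypothetical lifter-scale values: 1 mA of corona current across a 3 cm gap.
print(f"thrust ≈ {ehd_thrust(1e-3, 0.03) * 1000:.0f} mN")
```

With these placeholder numbers the estimate comes out to roughly 150 mN, which is the right order of magnitude for a small balsa-and-foil lifter and illustrates why the effect disappears as the supply of ionizable air molecules is removed.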
Biefeld–Brown effect
Physics,Astronomy
2,584
3,893,531
https://en.wikipedia.org/wiki/Difference%20hierarchy
In set theory, a branch of mathematics, the difference hierarchy over a pointclass is a hierarchy of larger pointclasses generated by taking differences of sets. If Γ is a pointclass, then the set of differences in Γ is {A : A = C \ D for some C, D ∈ Γ}. In usual notation, this set is denoted by 2-Γ. The next level of the hierarchy is denoted by 3-Γ and consists of differences of three sets: {A : A = C \ (D \ E) for some C, D, E ∈ Γ}. This definition can be extended recursively into the transfinite to α-Γ for some ordinal α. In the Borel hierarchy, Felix Hausdorff and Kazimierz Kuratowski proved that the countable levels of the difference hierarchy over Π0γ give Δ0γ+1. References Descriptive set theory Mathematical logic hierarchies
Difference hierarchy
Mathematics
156
19,788,573
https://en.wikipedia.org/wiki/Collapsing%20sequence
A collapsing sequence occurs in human speech when utterance pairs between speakers have some unspoken thought occurring between them that may make the latter phrase, out of context, seem to have no logical connection to the former; there is, however, an implication that logical thought has occurred between the two phrases, so that the latter phrase will make sense based upon an assumption of its relation to the former. Examples Customer: What's chocolate filbert? Clerk: We don't have any. The clerk's reply is in response to what he knows will come next in the discourse. If the clerk had proceeded to explain what chocolate filbert is, then it is possible that the customer would have asked for some. By explaining what the product is, the clerk would have tacitly implied that he had some to sell. Describing a product in a selling situation is often an implication that it is available. A waiter, for example, would not launch into a detailed description of a particular dish that a customer is inquiring about, only to end the discourse by then informing the customer that the dish is not available. Another common collapsing sequence is illustrated below: A. Do you smoke? B. I left them in my other jacket. This type of collapsing sequence speeds up social interaction by averting unnecessary explanations. Collapsing sequences can be used in other situations as well, such as when someone joins a discussion already in progress: Hi, John. We were just talking about nursery schools. In the phrasing of this response, the speaker is either warning John not join the group, or is giving him orientation so that he can understand the context of the discussion and participate. See also Cooperative principle References Chaika, Elaine. Language: The Social Mirror. Rowley, Massachusetts: Newbury House, 1982 (pp. 85–86). Human communication Discourse analysis Pragmatics
Collapsing sequence
Biology
376
19,134,706
https://en.wikipedia.org/wiki/Anti-scratch%20coating
Anti-scratch coating is a type of protective coating or film applied to an object's surface to mitigate scratches. Scratches are small surface-level cuts left on a surface following interaction with a sharper object. Anti-scratch coatings provide scratch resistance by containing microscopic materials with scratch-resistant properties. Scratch-resistant materials come in the form of additives, fillers, and binders. Besides materials, scratch resistance is affected by coating formation techniques. Scratch resistance is measured using the scratch-hardness test. Commercially, anti-scratch coatings are used in the automotive, optical, photographic, and electronics industries, where resale value and/or functionality is impaired by scratches. Anti-scratch coatings are of growing importance as traditional scratch-resistant materials such as metals and glass are replaced with plastics of low scratch resistance. Applications Automotive, optical, and electronics applications are major sectors for anti-scratch coatings. Automotive Anti-scratch coatings in the automotive industry maintain a car's appearance and prevent damage to the car's anti-corrosion layer, which protects the car's metals from environmental harm. Automotive anti-scratch coatings are becoming stronger (from 10 newtons to 15 newtons of protection) to counter the scratch resistance lost in the industry's shift from steel to lightweight but less scratch-resistant plastics and aluminium. Currently, scratch formation is reduced with a primer and a clear coat: the primer is made of polyolefin resin, while the clear coat contains the additives siloxane and erucamide. Optical Scratch-resistant coatings are added to eyeglasses because scratches can severely impair a wearer's vision. Even when optical lenses are made of highly scratch-resistant glass, polycarbonate, or CR-39, coatings are still used. Optical coatings include diamond-like carbon (DLC) and hybrid anti-reflective/anti-scratch coatings. Diamond-like carbon is a coating that shares diamond's extreme scratch resistance. Hybrid anti-reflective/anti-scratch coatings combine scratch-resistant additives with anti-reflective coating materials. Electronics In the electronics industry, scratch-resistant coatings are applied to electronic screens to prevent scratches, primarily from fingernails. Screens are made of either polycarbonate (the most scratch-resistant plastic) or higher-end glass. Electronics-industry anti-scratch coatings often contain the anti-scratch additive siloxane and the anti-scratch fillers TiO2 (titanium dioxide) and SiO2 (silicon dioxide). The additives and fillers are combined with a fluorocarbon resin, an oleophobic material; oleophobic materials repel the oils left by fingerprints. Other uses Anti-scratch coatings are often used on plastic products wherever optical clarity, weathering resistance, and chemical resistance are required. Examples include optical discs, displays, injection-molded parts, gauges and other instruments, mirrors, signs, eye safety/protective goggles, and cosmetic packaging. These coatings are usually water-based or solvent-based. Anti-scratch coating compositions Scratch-resistant materials are present in an anti-scratch coating as binders, additives, and/or fillers. Binders, additives, and fillers make up the anti-scratch coating's thin film, a layer a few nanometres to micrometres thick applied to a substrate (an object's surface). 
Binders In anti-scratch coatings, binders (the coating's glue-like cohesive structure) provide scratch resistance and/or provide structure for scratch-resistant additives and fillers. Binders that offer both scratch resistance and structure include ceramic (inorganic, non-metal-based) binders such as polysilazanes and diamond-like carbon, and resin (organic, polymer-based) binders such as epoxy, polycarbonate, and polyethylene. Fillers Scratch-resistant coatings use special scratch-resistant fillers, particles that enhance specific functional properties of coatings, alone or together with binders. Common scratch-resistant fillers include titanium dioxide (TiO2), zirconium dioxide (ZrO2), aluminium oxide hydroxide (AlOOH), and silicon monoxide (SiO). Additives Anti-scratch coatings use additives with specific scratch-resistant properties. Additives are particles dispersed in a thin film in quantities of less than one percent. Additives that decrease scratch visibility include siloxane and erucamides (fatty acid amides used in coatings for their scratch-resistant properties). Additives that lower friction, an important part of scratch resistance, include MoS2, graphite, and oleic acid amide. Additives that control micro-cracking, a micro-sized step in scratch formation, include ZnO (zinc oxide), BaO (barium oxide), and PbO (lead(II) oxide). Theory Anti-scratch coatings change the substrate's tribological properties (properties resulting from surface–environment interaction) and mechanical properties (a material's physical properties). These changed properties affect a scratch's deformation mechanisms (the microscopic effects of deforming a material), scratch visibility, friction, and other considerations. Impact on deformation mechanisms Scratch-resistant coatings lessen the impact of scratches' three primary deformation mechanisms: ironing, micro-cracking, and plowing. Plowing Plowing occurs when an indenter breaks a material's surface and leaves scratch marks; it dislocates atoms into weaker atomic planes through plastic deformation. Anti-scratch coatings contain filler-based materials with high ductility (the ability to withstand plastic deformation) to limit plowing. Plastic deformation occurs when the atomic bonds holding atomic planes break, causing the planes to dislocate into weaker positions. Controlling plowing is important because every additional plowing event leaves a scratch and a greater risk of internal damage, which decreases a product's lifespan. Micro-cracking Micro-cracking is the formation of micro-sized cracks on brittle surfaces due to the jerking indenter movement known as stick-slip. Anti-scratch coatings control micro-cracking by containing fillers, binders, or additives with high tensile strength. Recently, anti-scratch research has focused on nano-cracking, the nanotribological counterpart of micro-cracking, by creating nano-specific additives. Ironing Anti-scratch coatings control scratch ironing by either prolonging or preventing elastic deformation. Elastic deformation is the non-permanent stretching of atomic bonds that occurs before plastic deformation. Anti-scratch coatings control elastic deformation, which causes a short-term grooving effect, by decreasing elasticity and increasing ductility. Decreasing elasticity, however, must be balanced, since low elasticity causes micro-cracking. Scratch resistance can also be increased by prolonging the ironing period with high-yield-point materials. The yield point is the point at which a material changes from elastic to plastic deformation. 
Materials with a higher yield point decrease permanent plowing by increasing non-permanent ironing. Friction Scratch-resistant coatings have low-friction surfaces, friction being the force that resists sliding. Low-friction surfaces are smooth. Smooth surfaces are important because rougher surfaces are more prone to scratching, as shown by the Archard wear equation: W = K·S·N/H, where W is the volume of wear created during a scratch event, S is the distance over which the two objects are in contact, N is the normal force or pressure applied by the indenting object, H is the hardness of the material, measured by a given coefficient, and K is the dimensionless Archard wear constant, with a value of 1×10−8. Considerations for plastics Scratch-resistant coatings applied to plastic substrates compensate for plastics' low scratch hardness by covering them with non-plastic materials. Plastics have low scratch hardness because of their high viscoelasticity (highly viscous and elastic deformation) and low crystallinity (degree of ordered structure). Decreasing scratch visibility (Figure: surface topology map showing waviness and lay.) Scratch visibility is affected by surface grooving. Grooving surrounding a scratch site changes the angle at which light is reflected. When the angle of reflection changes by more than 3 percent, scratches become visible. Anti-scratch coatings control scratch visibility by having a low-grooving surface. Besides friction, low-grooving surfaces depend on the topological (surface) factors of surface texture (lay) and spacing of irregularities (waviness). Topology is controlled by extreme precision during the coating formation process. Coating formation Coating formation is the process of coating–substrate adhesion (attachment). Anti-scratch coatings are generally applied via spray (hand or automated), dip, spin, roll, or flow coating. Coating formation uses "precision factors" to affect the topology-dependent scratch properties; these include additive concentration, coating thickness, and viscosity. Most coating types can be cleaned with a non-ammonia-based glass cleaner and a soft cloth. Testing of scratch resistance ASTM International (formerly the American Society for Testing and Materials) sets testing standards for materials, including anti-scratch coatings. Most scratch-resistant coatings fall under ASTM standard D7027-20 (see External links). Standard scratch resistance tests involve scratching coatings with a diamond indenter. See also Anti-reflective coating Coatings References External links Thin-film optics
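As an illustrative sketch (not from the article), the Archard relation given above can be evaluated numerically; the function name and the input values below are hypothetical placeholders chosen only to show the form of the calculation.

```python
def archard_wear_volume(k, s, n, h):
    """Archard wear relation W = K * S * N / H.

    k: dimensionless wear coefficient
    s: sliding (contact) distance in metres
    n: normal load in newtons
    h: hardness of the softer surface in pascals
    Returns the worn volume W in cubic metres.
    """
    return k * s * n / h

# Hypothetical mild scratch event on a hard coating:
# K = 1e-8, 1 cm of sliding, 5 N load, 5 GPa hardness.
w = archard_wear_volume(k=1e-8, s=0.01, n=5.0, h=5e9)
print(f"worn volume ≈ {w:.1e} m^3")
```

The tiny resulting volume (about 1e-19 m³ for these placeholder numbers) illustrates why harder, lower-friction coatings remove so much less material per scratch event.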
Anti-scratch coating
Materials_science,Mathematics
1,934
7,922,789
https://en.wikipedia.org/wiki/Frond%20dimorphism
Frond dimorphism refers to a difference in ferns between the fertile and sterile fronds. Since ferns, unlike flowering plants, bear spores on the leaf blade itself, this may affect the form of the frond itself. In some species of ferns, there is virtually no difference between the fertile and sterile fronds, such as in the genus Dryopteris, other than the mere presence of the sori, or fruit-dots, on the back of the fronds. Some other species, such as Polystichum acrostichoides (Christmas fern), or some ferns of the genus Osmunda, feature dimorphism on a portion of the frond only. Others, such as some species of Blechnum and Woodwardia, have fertile fronds that are markedly taller than the sterile. Still others, such as Osmunda cinnamomea (Cinnamon fern), or plants of the family Onocleaceae, have fertile fronds that are completely different from the sterile. Only members of the Onocleaceae and Blechnaceae exhibit a propensity towards dimorphy, while no member of the Athyriaceae is strongly dimorphic, and only some representatives of the Thelypteridaceae have evolved the condition, suggesting a possible close relationship between Onocleaceae and Blechnaceae. Its importance has been disputed - Copeland for example, considered it taxonomically important, whereas Tryon and Tryon and Kramer all stated that the importance can only be judged in relation to other characteristics. References Ferns
Frond dimorphism
Biology
328
14,272,194
https://en.wikipedia.org/wiki/Saint-Venant%27s%20theorem
In solid mechanics, it is common to analyze the properties of beams with constant cross section. Saint-Venant's theorem states that the simply connected cross section with maximal torsional rigidity is a circle. It is named after the French mathematician Adhémar Jean Claude Barré de Saint-Venant. Given a simply connected domain D in the plane with area A, and given the radius and the area of its greatest inscribed circle, the torsional rigidity P of D is defined by P = \sup_f \frac{4\left(\iint_D f\,dx\,dy\right)^2}{\iint_D \left(f_x^2 + f_y^2\right)dx\,dy}. Here the supremum is taken over all the continuously differentiable functions f vanishing on the boundary of D. The existence of this supremum is a consequence of the Poincaré inequality. Saint-Venant conjectured in 1856 that of all domains D of equal area A the circular one has the greatest torsional rigidity, that is P(D) \le \frac{A^2}{2\pi}. A rigorous proof of this inequality was not given until 1948 by Pólya. Another proof was given by Davenport and is reported in the references. A more general proof and an estimate were given by Makai. Notes Elasticity (physics) Eponymous theorems of physics Calculus of variations Inequalities
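As a worked check of the normalization used above (our own calculation, not part of the original article): on the disk of radius a, the admissible function f(x, y) = a² − x² − y² gives

\iint_D f\,dx\,dy = \int_0^a (a^2 - r^2)\,2\pi r\,dr = \frac{\pi a^4}{2}, \qquad \iint_D (f_x^2 + f_y^2)\,dx\,dy = \int_0^a 4r^2\cdot 2\pi r\,dr = 2\pi a^4,

so P \ge 4\left(\tfrac{\pi a^4}{2}\right)^2 \big/ (2\pi a^4) = \tfrac{\pi a^4}{2} = \tfrac{A^2}{2\pi} with A = \pi a^2. This f is in fact the maximizer for the disk, so the circular cross section attains the bound in Saint-Venant's inequality, consistent with the classical value πa⁴/2 for the torsional rigidity of a circular shaft.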
Saint-Venant's theorem
Physics,Materials_science,Mathematics
224
66,742,046
https://en.wikipedia.org/wiki/Gravitational%20decoherence
Gravitational decoherence is a term for hypothetical mechanisms by which gravitation can act on quantum mechanical systems to produce decoherence. Advocates of gravitational decoherence include Frigyes Károlyházy, Roger Penrose and Lajos Diósi. A number of experiments have been proposed to test the gravitational decoherence hypothesis. Dmitriy Podolskiy and Robert Lanza have argued that gravitational decoherence may explain the existence of the arrow of time. See also Penrose interpretation Diósi–Penrose model Objective-collapse theory Quantum gravity References Quantum mechanics Quantum gravity
Gravitational decoherence
Physics
119
75,204,730
https://en.wikipedia.org/wiki/Pauline%20Sherman
Pauline Mont Sherman was an American aerospace engineer and academic. She was the first female Professor in the College of Engineering at the University of Michigan and also the first woman to become Professor of Aerospace Engineering at the University of Michigan. Her research focuses on jet noise, low-density flows, two-phase flows, and especially hypersonic flows. Early life and education Sherman was born in New York in 1921 to Polish immigrants. In 1942, she began working as an Expediter and Clerk for Eugene Scherman and later joined Babcock and Wilcox as a Contract Engineer in 1950. She earned a bachelor of science degree in Engineering Mechanics from the University of Michigan in 1952, where she participated in research on aircraft icing. At that time, pursuing an engineering degree was considered unconventional for women, as they were barred from City College liberal arts classes. In 1953, she went on to receive a master's degree in Mechanical Engineering from the University of California, Berkeley, where she concurrently served as a Research Engineer. Career Sherman began her academic career by returning to the University of Michigan as an Associate Research Engineer in 1956. She assumed the position of Assistant Professor of Aerospace Engineering in 1960, becoming the first woman appointed to the engineering faculty, and was subsequently promoted to the role of Associate Professor in 1963 and later to full Professor in 1971, from which she retired in 1987. The University of Michigan created the Pauline M. Sherman Collegiate Professorship to honor her legacy. Sherman joined Sigma XI in 1955 and worked as a Consultant for the Advisory Group for Aerospace Research (AGARD) and Development of NATO in 1962. She also provided consultancy services for the Environmental Protection Agency and the Lawrence Berkeley National Laboratory and advocated for women in science. Following her retirement in 1987, she became a volunteer for the American Civil Liberties Union. Research Sherman has contributed to the field of aerospace engineering with her research focused on hypersonic flows, jet noise, electrical circuitry, two-phase flows and low-density flows. During the early 1960s, she held a supervisory role in overseeing the construction of a high-energy hypersonic wind tunnel. She highlighted its capability to provide high temperatures and pressures for extended periods, crucial for analyzing chemical non-equilibrium in nozzle expansion. She also proposed a design for a timed externally triggered quick exhaust valve, employing a double diaphragm system. Hypersonic flows Sherman researched hypersonic flows throughout her career. She demonstrated that the diameter of a Pitot tube affects the measurement of Pitot pressure, with calculations and measurements revealing variations in pressure depending on tube size, particularly showcasing a decrease in pressure for smaller tube diameters, suggesting potential solutions for reducing transducer lag time. She also showed that pressures on 3° cones matched findings for 5° cones, both showing correlation with the viscous interaction parameter, and a newly proposed calculation method closely aligned with measured pressures. Working alongside L. Talbot and T. Koga, Sherman examined the condensation of zinc vapor in a helium carrier gas through nozzle acceleration, with particle size measurements revealing most particles were under 70 A in diameter, and pressure measurements indicating significant supercooling at Mach 3. 
Two-phase flows Another prominent focus in her research was the investigation of two-phase flows and low-density flows. In a large supersonic nozzle, she found that particle sizes ranged from 200 to 700 Å, correlating with initial vapor pressure, and that particle numbers decreased with increased mass fraction, while static pressure showed a linear increase with initial mass fraction. Additionally, she examined the condensation of superheated zinc vapor in an inert carrier gas, and observed an onset of condensation with approximately 430 K of supercooling and compared the findings with a classical liquid drop model of nucleation, which showed reasonable agreement with the measurements. In a collaborative study, Sherman developed a dispenser that consistently feeds small particles into a laboratory burner using positive displacement and a sonic ejector, meeting the criteria for accurate chemical measurements and laser-Doppler anemometry. She also designed and implemented a low inductance circuit for evaporating metal wires and condensing metal vapor into submicron-sized spherical particles with a log normal distribution, showing a decreasing mean diameter as expansion length increased, while the impact of ambient gas type on particle size was limited in the absence of chemical reactions. Jet noise Focusing on jet noise, Sherman revealed that the jet's oscillation frequency matched the dominant sound frequency with a reflecting surface, while an insulated surface shifted sound frequencies above the audible range, and screech frequency was inversely related to the first shock cell length and decreased with higher stagnation pressure. Electrical circuitry In her work on electrical circuitry, Sherman presented an empirical method for predicting the parameters required to achieve a single pulse discharge with no oscillation or residual energy, utilizing an LRC circuit with a 14.7-μF capacitor charged to different voltages and discharging through various metal wires. Selected articles Talbot, L., Koga, T., & Sherman, P. M. (1959). Hypersonic viscous flow over slender cones. Journal of the Aerospace Sciences, 26(11), 723–730. Griffin, J. L., & Sherman, P. M. (1965). Computer analysis of condensation in highly expanded flows. AIAA Journal, 3(10), 1813–1818. McBride, D. D., & Sherman, P. M. (1972). Condensed zinc particle size determined by a time discrete sampling apparatus. AIAA Journal, 10(8), 1058–1063. Sherman, P. M. (1975). Generation of submicron metal particles. Journal of Colloid and Interface Science, 51(1), 87–93. Sherman, P. M., Glass, D. R., & Duleep, K. G. (1976). Jet flow field during screech. Applied Scientific Research, 32, 283–303. References Aerospace engineers American aerospace engineers University of Michigan faculty University of California, Berkeley faculty University of Michigan alumni University of California, Berkeley alumni 1921 births 2007 deaths
Pauline Sherman
Engineering
1,260
10,997,772
https://en.wikipedia.org/wiki/CSI-DOS
CSI-DOS is an operating system, created in Samara, for the Soviet Elektronika BK-0011M and Elektronika BK-0011 microcomputers. CSI-DOS did not support the earlier BK-0010. CSI-DOS used its own unique file system and only supported a color graphics video mode. The system supported both hard and floppy drives as well as RAM disks in the computer's memory. It also included software to work with the AY-3-8910 and AY-3-8912 music co-processors, and the Covox Speech Thing. There are a number of games and demos designed specially for the system. The system also included a Turbo Vision-like application programming interface (API) allowing simpler design of user applications, and a graphical file manager called X-Shell. External links Article, contains description of some advantages of CSI-DOS for gaming over other OSs (Russian) Elektronika BK operating systems
CSI-DOS
Technology
203
40,462,777
https://en.wikipedia.org/wiki/Hoopes%20process
The Hoopes process is a metallurgical process, used to obtain aluminium metal of very high purity (about 99.99% pure). The process was patented by William Hoopes, a chemist of the Aluminum Company of America (ALCOA), in 1925. Introduction It is a method used to obtain aluminium of very high purity. The metal obtained in the Hall–Héroult process is about 99.5% pure, and for most purposes it is taken as pure metal. However, further purification of aluminium can be carried out by the Hoopes process. This is an electrolytic process. The process The cell used in this process consists of an iron tank lined with carbon at the bottom. A molten alloy of copper, crude aluminium and silicon is used as the anode. It forms the lowermost layer in the cell. The middle layer consists of molten mixture of fluorides of sodium, aluminium and barium (cryolite + BaF2). The uppermost layer consists of molten aluminium. A set of graphite rods dipped in molten aluminium serve as the cathode. During electrolysis, Al3+ ions from the middle layer migrate to the upper layer, where they are reduced to aluminum by gaining 3 electrons. Equal numbers of Al3+ ions are produced in the lower layer. These ions migrate to the middle layer. Pure aluminium is tapped off from time to time. The Hoopes process gives about 99.99% pure aluminium. References Further reading Aluminium Metallurgical processes
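As a minimal sketch of the chemistry described above (our own illustration; the article gives only the prose description), the transfer of aluminium from the bottom alloy layer to the top layer can be written as two half-reactions:

\text{Anode (bottom, impure alloy layer):}\quad \mathrm{Al} \longrightarrow \mathrm{Al^{3+}} + 3e^{-}
\text{Cathode (top, pure aluminium layer):}\quad \mathrm{Al^{3+}} + 3e^{-} \longrightarrow \mathrm{Al}

The net effect is that aluminium dissolves from the impure anode alloy, migrates as Al3+ through the molten fluoride middle layer, and is redeposited as high-purity metal in the top layer, which is tapped off.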
Hoopes process
Chemistry,Materials_science
309
6,850,292
https://en.wikipedia.org/wiki/Arjen%20Lenstra
Arjen Klaas Lenstra (born 2 March 1956, in Groningen) is a Dutch mathematician, cryptographer and computational number theorist. He is a professor emeritus from the École Polytechnique Fédérale de Lausanne (EPFL) where he headed of the Laboratory for Cryptologic Algorithms. Career He studied mathematics at the University of Amsterdam. He is a former professor at the EPFL (Lausanne), in the Laboratory for Cryptologic Algorithms, and previously worked for Citibank and Bell Labs. Research Lenstra is active in cryptography and computational number theory, especially in areas such as integer factorization. With Mark Manasse, he was the first to seek volunteers over the internet for a large scale volunteer computing project. Such projects became more common after the Factorization of RSA-129 which was a high publicity distributed factoring success led by Lenstra along with Derek Atkins, Michael Graff and Paul Leyland. He was also a leader in the successful factorizations of several other RSA numbers. Lenstra was also involved in the development of the number field sieve. With coauthors, he showed the great potential of the algorithm early on by using it to factor the ninth Fermat number, which was far out of reach by other factoring algorithms of the time. He has since been involved with several other number field sieve factorizations including the current record, RSA-768. Lenstra's most widely cited scientific result is the first polynomial time algorithm to factor polynomials with rational coefficients in the seminal paper that introduced the LLL lattice reduction algorithm with Hendrik Willem Lenstra and László Lovász. Lenstra is also co-inventor of the XTR cryptosystem. On 1 March 2005, Arjen Lenstra, Xiaoyun Wang, and Benne de Weger of Eindhoven University of Technology demonstrated construction of two X.509 certificates with different public keys and the same MD5 hash, a demonstrably practical hash collision. The construction included private keys for both public keys. Distinctions Lenstra is the recipient of the RSA Award for Excellence in Mathematics 2008 Award. Private life Lenstra's brother and co-author Hendrik Lenstra is a professor in mathematics at Leiden University and his brother Jan Karel Lenstra is a former director of Centrum Wiskunde & Informatica (CWI). See also L-notation General number field sieve Schnorr–Seysen–Lenstra algorithm References External links Web page on Arjen Lenstra at EPFL 1956 births Living people Dutch mathematicians Modern cryptographers Scientists from Groningen (city) Academic staff of the École Polytechnique Fédérale de Lausanne International Association for Cryptologic Research fellows Number theorists
Arjen Lenstra
Mathematics
561
12,251,618
https://en.wikipedia.org/wiki/Jacques%20E.%20Brandenberger
Jacques Edwin Brandenberger (19 October 1872 – 13 July 1954) was a Swiss chemist and textile engineer who in 1908 invented cellophane. He was awarded the Franklin Institute's Elliott Cresson Medal in 1937. Brandenberger was born in Zurich in 1872. He graduated from the University of Bern in 1895. In 1908 Brandenberger invented cellophane. Made from wood cellulose, cellophane was intended as a coating to make cloth more resistant to staining. After several years of further research and refinements, he began production of cellophane in 1920 marketing it for industrial purposes. He sold the US rights to DuPont in 1923. References External links Dr. J. E. Brandenberger Foundation Biography at National Inventors Hall of Fame Composite cellulose film, May 21, 1918 20th-century Swiss chemists 20th-century Swiss inventors 1872 births Scientists from Zurich 1954 deaths
Jacques E. Brandenberger
Chemistry
187
42,327,008
https://en.wikipedia.org/wiki/Allomyces
Allomyces is a genus of fungi in the family Blastocladiaceae. It was circumscribed by British mycologist Edwin John Butler in 1911. Species in the genus have a polycentric thallus and reproduce sexually or asexually by zoospores that have a whiplash-like flagella. They are mostly isolated from soils in tropical countries, commonly in ponds, rice fields, and slow-moving rivers. Morphology Allomyces thalli consist of a cylindrical trunk-like basal cell that gives rise to well-developed, highly branched rhizoids that anchor the thallus to the substrate. The trunk-like basal cell also gives rise to numerous dichotomously branched side branches that terminate as either resistant sporangia, zoosporangia, or gametangia depending on the life cycle stage. Septa are sometimes present especially at the base of reproductive organs. Life cycle and mating There are three distinct life cycles in Allomyces, and some authors delineate the subgenera Euallomyces, Cystogenes, and Brachyallomyces based on the life cycles while others do not. Euallomyces and Brachyallomyces are known to be classified as polyphyletic, but Cystogenes is monophyletic. The Euallomyces life cycle is an anisogamous alternation of generations between a haploid gametophyte and a diploid sporophyte. In this life cycle, the two stages are indistinguishable until reproductive organs are formed. Gametophytes produce colorless female gametangia and orange male gametangia; the orange coloration is transferred to the male gametes and is due to the presence of gamma carotenoid. Formation of male gametes is faster than of female gametes. Both male and female gametangia release motile gametes, but the male gametes are smaller and orange. Female gametangia and gametes release a pheromone called sirenin that attracts the male gametes. Male gametes produce a pheromone called parisin. Female gametes are sluggish and stay close to the female gametangia that sets up a strong concentration gradient of sirenin. Fertilization of female gametes by male gametes appears to be near 100% efficient. Fertilization takes place when two gametes contact one another. The plasma membranes fuse to form a binucleate cell with nuclear fusion quickly following. The resulting zygote is initially biflagellate, but it soon encysts and germinates. It grows into a dichotomously branched sporophyte that forms two types of sporangia: thin-walled zoosporangia that may be colorless or orange and thick-walled resting sporangia that are reddish-brown due to the presence of melanin pigments. The thin-walled zoosporangia give rise to motile zoospores that germinate and grow into another sporophyte. The resting sporangia undergo meiosis at germination and give rise to haploid zoospores that will germinate and grow into gametophytes. In Cystogenes life cycle the resting sporangia (from the sporophyte) give rise to biflagellated, bi-nucleated zoospores that will encyst, undergo meiosis, and germinate to yield motile gametes. These gametes will then fuse in pairs and the resulting zygotes germinate and grow into new sporophytes. In the Brachyallomyces life cycle, the gametophytic stage is missing altogether. Ecology Allomyces species seem to have a global distribution and are readily isolated from soils and waters by baiting with a sterile seed. Species of Allomyces can be parasitized by Catenaria allomycis, Rozella allomycis, and Olpidium allomycetos. Taxonomy The genus was circumscribed in 1911 by Butler and numerous species have been described. 
Based on type of life cycle, Emerson delineated three subgenera: Euallomyces, Cystogenes, and Brachyallomyces. Based on a molecular phylogeny using portions of the nuclear ribosome, it appears Euallomyces and Brachyallomyces are polyphyletic, but Cystogenes is monophyletic. Moreover, it appears several species in the genus are polyphyletic. Species Allomyces anomalus R.Emers. 1941 Allomyces arbusculus E.J.Butler 1911 Allomyces catenoides Sparrow 1964 Allomyces cystogenus R.Emers. 1941 Allomyces javanicus Kniep 1929 Allomyces macrogynus (R.Emers.) R.Emers. & C.M.Wilson 1954 Allomyces moniliformis Coker & Braxton 1926 Allomyces neomoniliformis Indoh 1940 Allomyces reticulatus R. Emers. & J.A.Robertson 1974 Allomyces strangulata Minden 1916 References Further reading External links Blastocladiomycota Fungus genera
Allomyces
Biology
1,091
9,088,305
https://en.wikipedia.org/wiki/Dean%20Burk
Dean Turner Burk (March 21, 1904 – October 6, 1988) was an American biochemist, medical researcher, and a cancer researcher at the Kaiser Wilhelm Institute and the National Cancer Institute. In 1934, he developed the Lineweaver–Burk plot together with Hans Lineweaver. Lineweaver and Burk collaborated with the eminent statistician W. Edwards Deming on the statistical analysis of their data: they used the plot for illustrating the results, not for the analysis itself. Early life Dean Turner Burk was born on March 21, 1904, in Oakland in Alameda County. Dean was the second of four sons born to Frederic Lister Burk, the founding President of the San Francisco Normal School, a preparatory school for teachers which eventually became San Francisco State University. He entered the University of California, Davis at the age of 15. A year later, he transferred to the University of California, Berkeley, where he received his B.S. degree in Entomology in 1923. Four years later, he earned a Ph.D. in biochemistry. Professional career Burk joined the Department of Agriculture in 1929 working in the Fixed Nitrogen Research Laboratory. In 1939, he joined the Cancer Institute as a senior chemist. He was head of the cytochemistry laboratory when he retired in 1974. He also taught biochemistry at the Cornell University Medical School from 1939 to 1941. He was a research master at George Washington University. Burk was a close friend and co-author with Otto Heinrich Warburg. He was a co-developer of the prototype of the Magnetic Resonance Scanner. Burk published more than 250 scientific articles in his lifetime. He later became head of the National Cancer Institute's Cytochemistry Sector in 1938, although he is often mistaken as leading the entire facility. Retirement After retiring from the NCI in 1974, Dean Burk remained active. He devoted himself to his opposition to water fluoridation. He and a coauthor published an analysis of cancer mortality in 10 cities that fluoridated the drinking water supply and 10 that didn't. The paper was criticized for using overly broad grouping and making assumptions about variations in racial composition of cities. Epidemiologists from the National Cancer Institute analyzed the findings and found no significant increase in cancer mortality associated with fluoridation. Burk considered "fluoridation as "mass murder on a grand scale." Dean Burk argued on Dutch television against a water fluoridation proposal which was before the Dutch Parliament in the Netherlands. He also was an avid supporter of laetrile; an alleged cancer treatment regarded by the medical community as ineffective and potentially dangerous. Recognition For his work on photosynthesis, Dean Burk received the Hillebrand Prize in 1952. Dean Burk and Otto Heinrich Warburg discovered the photosynthesis I-quantum reaction that splits CO2 activated by respiration. For his techniques to distinguish between normal cells and those damaged by cancer, Dean Burk was awarded the Gerhard Domagk Prize in 1965. References 1904 births 1988 deaths Alternative cancer treatment advocates 20th-century American biochemists American cancer researchers University of California, Davis alumni Water fluoridation
Dean Burk
Chemistry
641
44,495
https://en.wikipedia.org/wiki/Linear%20motor
A linear motor is an electric motor that has had its stator and rotor "unrolled", thus, instead of producing a torque (rotation), it produces a linear force along its length. However, linear motors are not necessarily straight. Characteristically, a linear motor's active section has ends, whereas more conventional motors are arranged as a continuous loop. A typical mode of operation is as a Lorentz-type actuator, in which the applied force is linearly proportional to the current and the magnetic field . Linear motors are most commonly found in high accuracy engineering applications. Many designs have been put forward for linear motors, falling into two major categories, low-acceleration and high-acceleration linear motors. Low-acceleration linear motors are suitable for maglev trains and other ground-based transportation applications. High-acceleration linear motors are normally rather short, and are designed to accelerate an object to a very high speed; for example, see the coilgun. High-acceleration linear motors are typically used in studies of hypervelocity collisions, as weapons, or as mass drivers for spacecraft propulsion. They are usually of the AC linear induction motor (LIM) design with an active three-phase winding on one side of the air-gap and a passive conductor plate on the other side. However, the direct current homopolar linear motor railgun is another high acceleration linear motor design. The low-acceleration, high speed and high power motors are usually of the linear synchronous motor (LSM) design, with an active winding on one side of the air-gap and an array of alternate-pole magnets on the other side. These magnets can be permanent magnets or electromagnets. The motor for the Shanghai maglev train, for instance, is an LSM. Types Brushless Brushless linear motors are members of the Synchronous motor family. They are typically used in standard linear stages or integrated into custom, high performance positioning systems. Invented in the late 1980s by Anwar Chitayat at Anorad Corporation, now Rockwell Automation, and helped improve the throughput and quality of industrial manufacturing processes. Brush Brushed linear motors were used in industrial automation applications prior to the invention of Brushless linear motors. Compared with three phase brushless motors, which are typically being used today, brush motors operate on a single phase. Brush linear motors have a lower cost since they do not need moving cables or three phase servo drives. However, they require higher maintenance since their brushes wear out. Synchronous In this design the rate of movement of the magnetic field is controlled, usually electronically, to track the motion of the rotor. For cost reasons synchronous linear motors rarely use commutators, so the rotor often contains permanent magnets, or soft iron. Examples include coilguns and the motors used on some maglev systems, as well as many other linear motors. In high precision industrial automation linear motors are typically configured with a magnet stator and a moving coil. A Hall effect sensor is attached to the rotor to track the magnetic flux of the stator. The electric current is typically provided from a stationary servo drive to the moving coil by a moving cable inside a cable carrier. Induction In this design, the force is produced by a moving linear magnetic field acting on conductors in the field. 
Any conductor, be it a loop, a coil or simply a piece of plate metal, that is placed in this field will have eddy currents induced in it thus creating an opposing magnetic field, in accordance with Lenz's law. The two opposing fields will repel each other, thus creating motion as the magnetic field sweeps through the metal. Homopolar In this design a large current is passed through a metal sabot across sliding contacts that are fed by two rails. The magnetic field this generates causes the metal to be projected along the rails. Tubular Efficient and compact design applicable to the replacement of pneumatic cylinders. Piezoelectric Piezoelectric drive is often used to drive small linear motors. History Low acceleration The history of linear electric motors can be traced back at least as far as the 1840s, to the work of Charles Wheatstone at King's College London, but Wheatstone's model was too inefficient to be practical. A feasible linear induction motor is described in (1905 - inventor Alfred Zehden of Frankfurt-am-Main), for driving trains or lifts. The German engineer Hermann Kemper built a working model in 1935. In the late 1940s, Dr. Eric Laithwaite of Manchester University, later Professor of Heavy Electrical Engineering at Imperial College in London developed the first full-size working model. In a single sided version the magnetic repulsion forces the conductor away from the stator, levitating it, and carrying it along in the direction of the moving magnetic field. He called the later versions of it magnetic river. The technologies would later be applied, in the 1984, Air-Rail Link shuttle, between Birmingham's airport and an adjacent train station. Because of these properties, linear motors are often used in maglev propulsion, as in the Japanese Linimo magnetic levitation train line near Nagoya. However, linear motors have been used independently of magnetic levitation, as in the Bombardier Innovia Metro systems worldwide and a number of modern Japanese subways, including Tokyo's Toei Ōedo Line. Similar technology is also used in some roller coasters with modifications but, at present, is still impractical on street running trams, although this, in theory, could be done by burying it in a slotted conduit. Outside of public transportation, vertical linear motors have been proposed as lifting mechanisms in deep mines, and the use of linear motors is growing in motion control applications. They are also often used on sliding doors, such as those of low floor trams such as the Alstom Citadis and the Socimi Eurotram. Dual axis linear motors also exist. These specialized devices have been used to provide direct X-Y motion for precision laser cutting of cloth and sheet metal, automated drafting, and cable forming. Most linear motors in use are LIM (linear induction motor), or LSM (linear synchronous motor). Linear DC motors are not used due to their higher cost and linear SRM suffers from poor thrust. So for long runs in traction LIM is mostly preferred and for short runs LSM is mostly preferred. High acceleration High-acceleration linear motors have been suggested for a number of uses. They have been considered for use as weapons, since current armour-piercing ammunition tends to consist of small rounds with very high kinetic energy, for which just such motors are suitable. Many amusement park launched roller coasters now use linear induction motors to propel the train at a high speed, as an alternative to using a lift hill. 
The United States Navy is also using linear induction motors in the Electromagnetic Aircraft Launch System that will replace traditional steam catapults on future aircraft carriers. They have also been suggested for use in spacecraft propulsion. In this context they are usually called mass drivers. The simplest way to use mass drivers for spacecraft propulsion would be to build a large mass driver that can accelerate cargo up to escape velocity, though RLV launch assist like StarTram to low Earth orbit has also been investigated. High-acceleration linear motors are difficult to design for a number of reasons. They require large amounts of energy in very short periods of time. One rocket launcher design calls for 300 GJ for each launch in the space of less than a second. Normal electrical generators are not designed for this kind of load, but short-term electrical energy storage methods can be used. Capacitors are bulky and expensive but can supply large amounts of energy quickly. Homopolar generators can be used to convert the kinetic energy of a flywheel into electric energy very rapidly. High-acceleration linear motors also require very strong magnetic fields; in fact, the magnetic fields are often too strong to permit the use of superconductors. However, with careful design, this need not be a major problem. Two different basic designs have been invented for high-acceleration linear motors: railguns and coilguns. Usage Linear motors are commonly used for actuating high performance industrial automation equipment. Their advantage, unlike any other commonly used actuator, such as a ball screw, timing belt, or rack and pinion, is that they provide any combination of high precision, high velocity, high force and long travel. Linear motors are widely used. One of the major uses of linear motors is for propelling the shuttle in looms. A linear motor has been used for sliding doors and various similar actuators. They have been used for baggage handling and even large-scale bulk materials transport. Linear motors are sometimes used to create rotary motion. For example, they have been used at observatories to deal with the large radius of curvature. Linear motors may also be used as an alternative to conventional chain-run lift hills for roller coasters. The coaster Maverick at Cedar Point uses one such linear motor in place of a chain lift. A linear motor has been used to accelerate cars for crash tests. Industrial automation The combination of high precision, high velocity, high force, and long travel makes brushless linear motors attractive for driving industrial automations equipment. They serve industries and applications such as semiconductor steppers, electronics surface-mount technology, automotive cartesian coordinate robots, aerospace chemical milling, optics electron microscope, healthcare laboratory automation, food and beverage pick and place. Machine tools Synchronous linear motor actuators, used in machine tools, provide high force, high velocity, high precision and high dynamic stiffness, resulting in high smoothness of motion and low settling time. They may reach velocities of 2 m/s and micron-level accuracies, with short cycle times and a smooth surface finish. Train propulsion Conventional rails All of the following applications are in rapid transit and have the active part of the motor in the cars. Bombardier Innovia Metro Originally developed in the late 1970s by UTDC in Canada as the Intermediate Capacity Transit System (ICTS). 
A test track was constructed in Millhaven, Ontario, for extensive testing of prototype cars, after which three lines were constructed: Line 3 Scarborough in Toronto (opened 1985; closed 2023) Expo Line of the Vancouver SkyTrain (opened 1985 and extended in 1994) Detroit People Mover in Detroit (opened 1987) ICTS was sold to Bombardier Transportation in 1991 and later known as Advanced Rapid Transit (ART) before adopting its current branding in 2011. Since then, several more installations have been made: Kelana Jaya Line in Kuala Lumpur (opened 1998 and extended in 2016) Millennium Line of the Vancouver SkyTrain (opened 2002 and extended in 2016) AirTrain JFK in New York (opened 2003) Airport Express (Beijing Subway) (opened 2008) Everline in Yongin, South Korea (opened 2013) All Innovia Metro systems use third rail electrification. Japanese Linear Metro One of the biggest challenges faced by Japanese railway engineers in the 1970s to the 1980s was the ever increasing construction costs of subways. In response, the Japan Subway Association began studying on the feasibility of the "mini-metro" for meeting urban traffic demand in 1979. In 1981, the Japan Railway Engineering Association studied on the use of linear induction motors for such small-profile subways and by 1984 was investigating on the practical applications of linear motors for urban rail with the Japanese Ministry of Land, Infrastructure, Transport and Tourism. In 1988, a successful demonstration was made with the Limtrain at Saitama and influenced the eventual adoption of the linear motor for the Nagahori Tsurumi-ryokuchi Line in Osaka and Toei Line 12 (present-day Toei Oedo Line) in Tokyo. To date, the following subway lines in Japan use linear motors and use overhead lines for power collection: Two Osaka Metro lines in Osaka: Nagahori Tsurumi-ryokuchi Line (opened 1990) Imazatosuji Line (opened 2006) Toei Ōedo Line in Tokyo (opened 2000) Kaigan Line of the Kobe Municipal Subway (opened 2001) Nanakuma Line of the Fukuoka City Subway (opened 2005) Yokohama Municipal Subway Green Line (opened 2008) Sendai Subway Tōzai Line (opened 2015) In addition, Kawasaki Heavy Industries has also exported the Linear Metro to the Guangzhou Metro in China; all of the Linear Metro lines in Guangzhou use third rail electrification: Line 4 (opened 2005) Line 5 (opened 2009). Line 6 (opened 2013) Monorail There is at least one known monorail system which is not magnetically levitated, but nonetheless uses linear motors. This is the Moscow Monorail. Originally, traditional motors and wheels were to be used. However, it was discovered during test runs that the proposed motors and wheels would fail to provide adequate traction under some conditions, for example, when ice appeared on the rail. Hence, wheels are still used, but the trains use linear motors to accelerate and slow down. This is possibly the only use of such a combination, due to the lack of such requirements for other train systems. The TELMAGV is a prototype of a monorail system that is also not magnetically levitated but uses linear motors. 
Magnetic levitation High-speed trains: Transrapid: first commercial use in Shanghai (opened in 2004) SCMaglev, under construction in Japan (fastest train in the world, planned to open by 2027) Rapid transit: Birmingham Airport, UK (opened 1984, closed 1995) M-Bahn in Berlin, Germany (opened in 1989, closed in 1991) Daejeon EXPO, Korea (ran only 1993) HSST: Linimo line in Aichi Prefecture, Japan (opened 2005) Incheon Airport Maglev (opened July 2014) Changsha Maglev Express (opened 2016) S1 line of Beijing Subway (opened 2017) Amusement rides There are many roller coasters throughout the world that use LIMs to accelerate the ride vehicles. The first being Flight of Fear at Kings Island and Kings Dominion, both opening in 1996. Battlestar Galactica: Human VS Cylon & Revenge of the Mummy at Universal Studios Singapore opened in 2010. They both use LIMs to accelerate from certain point in the rides. Revenge of the Mummy also located at Universal Studios Hollywood and Universal Studios Florida. The Incredible Hulk Coaster and VelociCoaster at Universal Islands of Adventure also use linear motors. At Walt Disney World, Rock 'n' Roller Coaster Starring Aerosmith at Disney's Hollywood Studios and Guardians of the Galaxy: Cosmic Rewind at Epcot both use LSM to launch their ride vehicles into their indoor ride enclosures. In 2023 a hydraulic launch roller coaster, Top Thrill Dragster at Cedar Point in Ohio, USA, was renovated and the hydraulic launch replaced with a weaker multi-launch system using LSM, that creates less g-force. Aircraft launching Electromagnetic Aircraft Launch System Proposed and research Launch loop – A proposed system for launching vehicles into space using a linear motor powered loop StarTram – Concept for a linear motor on extreme scale Tether cable catapult system Aérotrain S44 – A suburban commuter hovertrain prototype Research Test Vehicle 31 – A hovercraft-type vehicle guided by a track Hyperloop – a conceptual high-speed transportation system put forward by entrepreneur Elon Musk Elevator Lift Magway - a UK freight delivery system under research and development that aims to deliver goods in pods via 90 cm diameter pipework under and over ground. See also Linear actuator Linear induction motor Linear motion Maglev Online Electric Vehicle Reciprocating electric motor Sawyer motor Tubular linear motor References External links Design equations, spreadsheet, and drawings Motor torque calculation Overview of Electromagnetic Guns Electric motors English inventions Linear motion
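As a back-of-the-envelope illustration of two quantities mentioned earlier in this article (a sketch with hypothetical values, not taken from the article itself): the Lorentz-type force on a conductor of active length L carrying current I in a field B is F = B·I·L, and a launcher that must deliver 300 GJ in under a second implies an average power above 300 GW.

```python
def lorentz_force(b_tesla, current_a, length_m):
    """Force on a straight conductor of length L carrying current I in a field B: F = B * I * L."""
    return b_tesla * current_a * length_m

# Hypothetical coil segment: 1 T field, 1000 A, 0.5 m of active conductor.
print(f"force ≈ {lorentz_force(1.0, 1000.0, 0.5):.0f} N")

# Average power implied by the 300 GJ-per-launch figure quoted above, if delivered over 1 s.
energy_j, time_s = 300e9, 1.0
print(f"average power ≈ {energy_j / time_s / 1e9:.0f} GW")
```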
Linear motor
Physics,Technology,Engineering
3,216
48,432,307
https://en.wikipedia.org/wiki/Leccinum%20barrowsii
Leccinum barrowsii is a species of bolete fungus in the family Boletaceae. It is found in the southwestern United States, where it grows on the ground under conifers. The bolete was described as new to science in 1966 by mycologists Alexander H. Smith, Harry Delbert Thiers, and Roy Watling. The specific epithet honours the collector, Charles "Chuck" Barrows (1903–1989). See also List of Leccinum species List of North American boletes References Fungi described in 1966 Fungi of the United States barrowsii Fungi without expected TNC conservation status Fungus species
Leccinum barrowsii
Biology
130
14,427,315
https://en.wikipedia.org/wiki/GPR12
Probable G-protein coupled receptor 12 is a protein that in humans is encoded by the GPR12 gene. The gene product of GPR12 is an orphan receptor, meaning that its endogenous ligand is currently unknown. Gene disruption of GPR12 in mice results in dyslipidemia and obesity. Ligands Inverse agonists Cannabidiol Evolution Paralogues Source: GPR6 GPR3 S1PR5 CNR1 CNR2 MC4R S1PR1 MC3R MC2R S1PR2 MC1R S1PR3 LPAR2 MC5R LPAR1 S1PR4 LPAR3 GPR119 References Further reading G protein-coupled receptors
GPR12
Chemistry
149
49,547,719
https://en.wikipedia.org/wiki/Calcium%3Acation%20antiporter-2
The Ca2+:H+ antiporter-2 (CaCA2) family (TC# 2.A.106) is a member of the lysine exporter (LysE) superfamily. Note that this family differs from the calcium:cation antiporter (CaCA) family which belongs to the cation diffusion facilitator (CDF) superfamily. CaCA2 family proteins are found in bacteria, archaea, yeast, plants and animals. This family, previously called the uncharacterized Protein Family 0016 (UPF0016), is well conserved throughout prokaryotes and eukaryotes. They are usually 200-350 amino acyl residues long and exhibit 5-7 transmembrane segments (TMSs). Members The yeast golgi Gcr1-dependent translation factor 1 protein (Gdt1p; TC# 2.A.106.2.3) contributes to Ca2+ homeostasis. A yeast gdt1 mutant was found to be sensitive to high concentrations of Ca2+. This sensitivity was suppressed by expression of human TMEM165 in yeast. Patch-clamp analyses on human cells indicated that TMEM165 catalyzes Ca2+ transport. Defects in TMEM165 affected both Ca2+ and pH homeostasis. Gdt1p and TMEM165 are probably Golgi-localized Ca2+:H+ antiporters. Modification of the Golgi Ca2+ and pH balance could explain the glycosylation defects observed in TMEM165-deficient patients. Physiological significance Defects in the human TMEM165 homologue (TC# 2.A.106.2.2) are the cause of congenital disorder of glycosylation type 2K (CDG2K), an autosomal recessive disorder with variable phenotypes. Affected individuals show psychomotor and growth retardation, and most have short stature. Other features include dysmorphism, hypotonia, eye abnormalities, acquired microcephaly, hepatomegaly, and skeletal dysplasia. Congenital disorders of glycosylation are caused by a defect in glycoprotein biosynthesis and are characterized by under-glycosylated serum glycoproteins and a wide variety of clinical features. The broad spectrum of features may reflect the critical role of N-glycoproteins during embryonic development, differentiation, and maintenance of cell functions. General transport reaction The generalized reaction catalyzed by CaCA2 family members is: Ca2+ (cytoplasm) + H+ (golgi lumen) → Ca2+ (golgi lumen) + H+ (cytoplasm). References Protein families Solute carrier family
Calcium:cation antiporter-2
Biology
600
7,006,166
https://en.wikipedia.org/wiki/Green%E2%80%93Tao%20theorem
In number theory, the Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, for every natural number k, there exist arithmetic progressions of primes with k terms. The proof is an extension of Szemerédi's theorem. The problem can be traced back to investigations of Lagrange and Waring from around 1770. Statement Let π(N) denote the number of primes less than or equal to N. If A is a subset of the prime numbers such that the relative upper density lim sup_{N→∞} |A ∩ [1, N]| / π(N) is positive, then for all positive integers k, the set A contains infinitely many arithmetic progressions of length k. In particular, the entire set of prime numbers contains arbitrarily long arithmetic progressions. In their later work on the generalized Hardy–Littlewood conjecture, Green and Tao stated, and conditionally proved, an asymptotic formula (involving an explicit constant depending on k) for the number of k-tuples of primes in arithmetic progression. The result was made unconditional by Green–Tao and Green–Tao–Ziegler. Overview of the proof Green and Tao's proof has three main components: Szemerédi's theorem, which asserts that subsets of the integers with positive upper density have arbitrarily long arithmetic progressions. It does not a priori apply to the primes because the primes have density zero in the integers. A transference principle that extends Szemerédi's theorem to subsets of the integers which are pseudorandom in a suitable sense. Such a result is now called a relative Szemerédi theorem. A pseudorandom subset of the integers containing the primes as a dense subset. To construct this set, Green and Tao used ideas from Goldston, Pintz, and Yıldırım's work on prime gaps. Once the pseudorandomness of the set is established, the transference principle may be applied, completing the proof. Numerous simplifications to the argument in the original paper have been found, and modern expositions of the proof are available. Numerical work The proof of the Green–Tao theorem does not show how to find the arithmetic progressions of primes; it merely proves they exist. There has been separate computational work to find large arithmetic progressions in the primes. The Green–Tao paper states 'At the time of writing the longest known arithmetic progression of primes is of length 23, and was found in 2004 by Markus Frind, Paul Underwood, and Paul Jobling: 56211383760397 + 44546738095860 · k; k = 0, 1, ..., 22.' On January 18, 2007, Jarosław Wróblewski found the first known case of 24 primes in arithmetic progression: 468,395,662,504,823 + 205,619 · 223,092,870 · n, for n = 0 to 23. The constant 223,092,870 here is the product of the prime numbers up to 23, more compactly written 23# in primorial notation. On May 17, 2008, Wróblewski and Raanan Chermoni found the first known case of 25 primes: 6,171,054,912,832,631 + 366,384 · 23# · n, for n = 0 to 24. On April 12, 2010, Benoît Perichon with software by Wróblewski and Geoff Reynolds in a distributed PrimeGrid project found the first known case of 26 primes: 43,142,746,595,714,191 + 23,681,770 · 23# · n, for n = 0 to 25. In September 2019, Rob Gahan and PrimeGrid found the first known case of 27 primes: 224,584,605,939,537,911 + 81,292,139 · 23# · n, for n = 0 to 26. Extensions and generalizations Many of the extensions of Szemerédi's theorem hold for the primes as well. Independently, Tao and Ziegler and Cook, Magyar, and Titichetrakun derived a multidimensional generalization of the Green–Tao theorem. The Tao–Ziegler proof was also simplified by Fox and Zhao.
In 2006, Tao and Ziegler extended the Green–Tao theorem to cover polynomial progressions. More precisely, given any integer-valued polynomials P1, ..., Pk in one unknown m, all with constant term 0, there are infinitely many integers x and m such that x + P1(m), ..., x + Pk(m) are simultaneously prime. The special case when the polynomials are m, 2m, ..., km implies the previous result that there are arithmetic progressions of primes of length k. Tao proved an analogue of the Green–Tao theorem for the Gaussian primes. See also Erdős conjecture on arithmetic progressions Dirichlet's theorem on arithmetic progressions Arithmetic combinatorics References Further reading Ramsey theory Additive combinatorics Additive number theory Theorems about prime numbers
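The record progressions above were found with heavy sieving and distributed computing; as a purely illustrative sketch (the helper names, bounds, and search strategy below are arbitrary choices, not taken from those projects), a brute-force search for short arithmetic progressions of primes can be written as follows.

def is_prime(n):
    """Trial-division primality test; adequate only for the small numbers searched here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def find_prime_ap(k, max_start=10000, max_diff=1000):
    """Return the first (start, difference) whose k-term arithmetic progression is all prime, or None."""
    for a in range(2, max_start):
        if not is_prime(a):
            continue
        # For k >= 3 an odd difference forces an even (hence composite) term, so only even differences are tried.
        for d in range(2, max_diff, 2):
            if all(is_prime(a + i * d) for i in range(k)):
                return a, d
    return None

print(find_prime_ap(6))  # (7, 30): the progression 7, 37, 67, 97, 127, 157 consists entirely of primes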
Green–Tao theorem
Mathematics
1,033
26,464,590
https://en.wikipedia.org/wiki/Adimolol
Adimolol (developmental code name MEN-935) is an antihypertensive agent which acts as a non-selective α1-, α2-, and β-adrenergic receptor antagonist. Synthesis The reaction between 1-naphthyl glycidyl ether (1) and 3-(3-amino-3-methylbutyl)-1H-benzimidazol-2-one (2) gives adimolol (3). References Alpha-1 blockers Alpha-2 blockers Amines Benzimidazoles Beta blockers Lactams Naphthol ethers Secondary alcohols Ureas
Adimolol
Chemistry
140
75,038,015
https://en.wikipedia.org/wiki/Precision%20cut%20lung%20slices
Precision cut lung slices or PCLS refer to thin sections of lung tissue that are prepared with high precision and are typically used for experimental purposes in the field of respiratory research. These slices are utilized to study various aspects of lung physiology, pathology, and pharmacology, providing researchers with a valuable tool for investigating lung diseases and testing the effects of drugs on lung tissue. Precision cut lung slices are prepared using specialized equipment called Vibratomes, ensuring that the tissue remains viable and retains its structural and functional characteristics, making them ideal for a wide range of experimental applications. History The history of Precision-cut Lung Slices (PCLS) dates back to the 1920s when scientists first explored tissue slices for studying organ metabolism and toxicology. Initially, manual slicing of tissues, such as the liver, led to significant variability in thickness and limited viability. A critical advancement occurred in the 1940s when Stadie and Riggs introduced a microtome equipped with a thin razor blade, reducing thickness variability to about 5%. These improved slices became known as precision-cut tissue slices. Creating PCLS posed unique challenges due to the lung's intricate structure. In the 1980s, Placke and Fisher achieved a breakthrough by infusing heated liquid agarose into the airways of hamster and rat lungs, preventing airway and alveolar collapse during slicing. Basic preparation Creating Precision-cut Lung Slices (PCLS) is a meticulous process that involves several essential steps. The use of vibratomes is crucial in ensuring the production of precise and high-quality lung slices for research purposes. Use of vibratomes The basic steps involved in preparing PCLS using vibratomes include: Tissue Selection Start by carefully selecting lung tissue from the desired species, such as rodents or humans, ensuring the tissue is of high quality and health. Tissue Embedding To facilitate slicing and maintain tissue structure, the lung tissue is typically embedded in a suitable medium, such as agarose or gelatin, and mounted in the specimen holder of the vibratome. Slicing Process The vibratome operates by oscillating a blade vertically or horizontally at high frequencies while the tissue is submerged in a cutting solution. This mechanical oscillation creates thin and precise slices of lung tissue. Researchers can adjust cutting parameters, such as slice thickness, to meet specific experimental requirements. Typically, PCLS have thicknesses ranging from 200 to 500 μm. Post-processing Depending on the research objectives, PCLS may undergo additional steps such as washing, culturing, or treatment with substances of interest, such as drugs or stimuli. Maintenance of PCLS Ensuring the viability of Precision-cut Lung Slices (PCLS) during ex vivo maintenance presents several challenges. Typically, PCLS are submerged in culture medium within multi-well plates, simulating tissue culture conditions at 37 °C, 5% CO2, and 95–100% air humidity. The culture medium is refreshed daily and optimized with essential nutrients, enabling viable PCLS to be maintained for up to 14 days, a significant improvement compared to previous reports of only 3–5 days. Additionally, the inclusion of antibiotics like penicillin and streptomycin helps prevent pathogen contamination from the outset of culture. While in culture, PCLS retain their viability, normal metabolic activity, tissue integrity, and responsiveness to stimuli such as lipopolysaccharide (LPS). 
However, it's important to note that extended culture periods may lead to some changes in PCLS function. For instance, although human PCLS can contract in response to methacholine, the secretion of LPS-induced TNF-α, while maintained, may diminish over time. Furthermore, long-term cultivation can result in the loss of certain cell populations, such as pneumocytes and lymphocytes, as well as the degradation of connective tissue fibers. These changes may contribute to decreased sensitivity of cultured PCLS to external stimuli. In practice, PCLS can maintain comparable viability and tissue homeostasis for 1 to 3 days, though extended periods can be achieved with optimized culture conditions. Experimental applications Precision-cut lung slices find extensive use in a variety of experimental applications in the field of respiratory research. Some of the key areas where PCLS are employed include: Asthma In the pursuit of understanding and developing treatments for asthma, researchers have explored various models, including animal models like mice and rats, to mimic different aspects of the condition. While these animal models have contributed to our knowledge, they come with limitations, particularly in terms of translatability to humans. To address these limitations and enhance our understanding of asthma, researchers have turned to human Precision-cut Lung Slices (PCLS) obtained from both healthy and diseased individuals as a valuable ex vivo tool. PCLS derived from healthy and asthmatic lungs exhibit altered responses to various stimuli, including bronchoconstriction and hyperresponsiveness, which closely resemble those observed in patients and various animal models. Moreover, PCLS from individuals with asthma have been shown to display significantly increased airway inflammation and hyperresponsiveness when stimulated by factors such as rhinovirus. These PCLS also exhibit elevated gene expression related to asthma pathogenesis, including genes like Il25, Tslp, and Il13. These findings align with observations in asthmatic patients, indicating that PCLS models provide a promising platform for asthma research. COPD (Chronic Obstructive Pulmonary Disease) While no in vivo models fully encompass all aspects of clinical COPD pathology, certain animal models, such as those involving cigarette smoke exposure, elastase-induced emphysema, and LPS challenge, have yielded valuable insights. For instance, exposing guinea pigs or mice to cigarette smoke can reproduce key features of human COPD, including emphysema, small airway remodeling, and pulmonary hypertension. However, this model typically manifests mild emphysema and requires months to develop. In contrast, delivering elastase to the lungs of mice rapidly induces an emphysematous phenotype, allowing for controlled disease severity by adjusting elastase dose, administration route, and duration. It's worth noting that the physiological relevance of elastase and LPS models is debatable due to differences in underlying mechanisms. The use of Precision-cut Lung Slices (PCLS) from in vivo models has proven particularly valuable in modeling COPD. For instance, PCLS obtained from smoke-exposed mice have shown elevated expression of chemokines when stimulated with viral mimics or influenza A virus. Murine PCLS have also demonstrated that Influenza A infection and cigarette smoke can impair bronchodilator responsiveness to β2-adrenoceptor agonists. 
Future studies employing PCLS from COPD patients hold the potential to enable both functional and phenotypic immune cell characterization, facilitating a more comprehensive understanding of molecular mechanisms underlying disease heterogeneity. Idiopathic Pulmonary Fibrosis (IPF) Precision-cut Lung Slices (PCLS) have proven effective in studying the early stages of lung fibrosis in IPF. When exposed to TGF-β1 and cadmium chloride, both human and rat PCLS have displayed relevant pathohistological changes commonly observed in the early phases of lung fibrosis. These changes include the upregulation of critical pro-fibrotic genes, increased thickness of alveolar septa, and abnormal activation of pulmonary cells. Recent advancements in research have led to the establishment of an ex vivo human PCLS model specifically focused on early-stage fibrosis. This model involves exposing PCLS to a combination of pro-fibrotic growth factors and signaling molecules, including TGF-β1, TNF-α, platelet-derived growth factor-AB, and lysophosphatidic acid. This approach offers a pathway to investigate the underlying mechanisms of early IPF and assess novel therapies. Researchers are actively evaluating novel treatments for IPF using PCLS. For example, caffeine, which inhibits TGF-β-induced increases in pro-fibrotic gene expression, has shown promise by significantly reducing fibrosis in PCLS from bleomycin-treated mice. Additionally, targeting PI3K signaling has emerged as a promising anti-fibrotic treatment strategy, as demonstrated using PCLS derived from IPF patients. The use of PCLS in IPF research holds great potential for understanding the disease's early stages, testing innovative therapies, and uncovering novel treatment strategies. Infection and Inflammation Precision-cut Lung Slices (PCLS) have been instrumental in studying the body's innate responses to viral and, to a lesser extent, bacterial challenges. This system has shed light on which cells become infected within the intact lung, offering insights distinct from in vitro air-liquid interface cultures. Studies by Goris et al. have revealed variations in the infectability of different cell types within the lung. For instance, bovine parainfluenza virus infection was observed primarily in cells beneath the lung epithelium within the PCLS system. Importantly, this suggests that the epithelium, when in its natural physiological structure, resists infection. Similar findings were reported by Kirchhoff et al. These studies emphasize the significance of studying cells within their physiological environment, considering cell associations and structural architecture. Such interactions not only affect infectability but also shape the system's response to infection. The PCLS system serves as a valuable tool for understanding inflammatory responses. It has been employed to investigate the innate response to bacterial wall components like LPS and to conduct complex mixed infection studies involving multiple viruses or viral and bacterial co-infections. This approach enables precise analysis of immune responses to each stimulus. In simpler models, PCLS have been used to assess the impact of LPS on the innate immune response, testing the effects of various immunomodulators on innate signaling. Furthermore, the ability to obtain slices from diseased lungs, such as those affected by conditions like COPD and asthma, provides a robust model for studying how respiratory diseases influence infectivity and host responses. 
This is particularly relevant for diseases like COPD and asthma, which have links to pathogen-induced exacerbations. PCLS research in infection and inflammation enhances our understanding of immune responses, paving the way for insights into disease mechanisms and potential therapeutic strategies. Drug Testing Precision-cut Lung Slices (PCLS) play a crucial role in evaluating novel therapeutic targets for asthma, especially as tolerance to conventional treatments like glucocorticoids and β2-receptor agonists becomes more common. Researchers have increasingly focused on targets relevant to asthma pathogenesis, and PCLS have become a valuable tool for evaluating these targets as potential therapeutics. For instance, studies have shown that inhibiting histone deacetylase with trichostatin A can alleviate airway constriction in human PCLS and simultaneously reduce airway hyperresponsiveness in antigen-challenged mice. Additionally, activating soluble guanylate cyclase in airway smooth muscle using riociguat and cinaciguat analogs has been found to induce bronchodilation in normal human PCLS and reverse airway hyperresponsiveness in allergic asthmatic mice, restoring normal lung function. The use of PCLS in drug development is expanding further, with specific agonists or inhibitors targeting bitter-taste receptors, peroxisome proliferator-activated receptor (PPAR) γ, phosphoinositide-3 kinase (PI3K), BK channels, and spleen tyrosine kinase (Syk) all undergoing investigation within this context. PCLS research contributes significantly to the development of innovative therapeutic strategies for asthma, addressing the evolving challenges of treatment resistance. Advantages of PCLS Precision-cut Lung Slices (PCLS) offer several distinct advantages that make them invaluable tools in respiratory research. They excel in preserving the intricate lung architecture, maintaining essential tissue structures like small airways, respiratory parenchyma, structural and immune cell populations, and connective tissue. The cellular composition within PCLS closely mirrors that of intact lungs, retaining the organization of structural and immune cells. However, it's important to note that specific cell types' distribution may vary among slices due to regional variability within the lung, especially in the presence of non-uniform disease-related changes. In certain contexts, PCLS can be considered as "mini" lungs. While lacking a recruitable immune system, PCLS provide a unique opportunity to correlate cell-specific functions with organ physiology. They exhibit complex responses to challenges and stimuli, such as airway contraction and immune responses, shedding light on disease mechanisms and treatment evaluations. PCLS have found applications in a wide range of respiratory research areas, including asthma, COPD, idiopathic pulmonary fibrosis, allergies, infections, and toxicology studies. Researchers have harnessed the advantages of PCLS to model and study these prominent respiratory diseases, facilitating insights and translational relevance to human disease. Limitations of PCLS Precision-cut Lung Slices (PCLS) provide valuable insights into lung physiology and pathology, but they do have limitations. Firstly, PCLS represent a static "snapshot" of lung tissue at the time of excision, lacking access to the dynamic, recruitable immune system present in living organisms. This limitation hinders the full understanding of immune responses within the lung. 
Furthermore, lung tissue is inherently heterogeneous, with variations in epithelial integrity, immune cell populations, and responses to stimulation across different lung regions. When studying diseases, such as respiratory conditions, this heterogeneity can complicate data interpretation, requiring careful statistical analysis to account for variability between slices. PCLS have a limited ability to fully replicate the intricate and dynamic immune responses observed in living organisms. They cannot recruit non-resident immune cells, and their viability is restricted to approximately two weeks. While they can capture initial signals induced by pathogens, they cannot fully mimic the complex immune responses seen in a living lung. Another limitation is that PCLS are typically used as static systems and do not replicate the natural breathing motion of the lung. This is particularly relevant when studying diseases like ventilator-induced lung injury, where mechanical stress from ventilation plays a crucial role. Attempts to stretch or deform PCLS have been made to simulate mechanical dynamics, but accurately replicating these processes remains a challenge. Administering treatments in PCLS can be challenging because the entire slice is bathed in the compound or stimulant of interest. This poses difficulties in translating findings to inhaled or systemic applications in vivo, making dosing and translation complex. See also Vibratome References Histology Pulmonology
Precision cut lung slices
Chemistry
3,077
77,984,484
https://en.wikipedia.org/wiki/NGC%206646
NGC 6646 is a spiral galaxy located in the constellation Lyra. Its velocity relative to the cosmic microwave background is 5,641 ± 35 km/s, which corresponds to a Hubble distance of 83.2 ± 5.9 Mpc (∼271 million ly). NGC 6646 was discovered by the German-British astronomer William Herschel on 26 June 1802. The luminosity class of NGC 6646 is I. One supernova has been observed in NGC 6646: SN 2024gqf (type Ia, mag. 19.7). See also List of NGC objects (6001–7000) References External links NGC 6646 at LEDA NGC 6646 at SEDS NGC 6646 at SkyMap (DSS2) 6646 18020626 Discoveries by William Herschel Spiral galaxies Lyra +07-38-008 061944 11258
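The quoted distance follows from Hubble's law, distance = velocity / H0. A minimal sketch of the arithmetic, assuming a Hubble constant of about 67.8 km/s/Mpc (the article does not say which value its source used, so this is an assumption), reproduces the central values; the quoted ±5.9 Mpc uncertainty presumably also folds in the uncertainty on H0 and is not reproduced here.

v_cmb = 5641.0          # recession velocity relative to the CMB, in km/s
H0 = 67.8               # assumed Hubble constant, in km/s per Mpc (not stated in the article)
d_mpc = v_cmb / H0      # Hubble-law distance in megaparsecs
d_mly = d_mpc * 3.2616  # 1 Mpc is about 3.2616 million light-years
print(f"{d_mpc:.1f} Mpc ≈ {d_mly:.0f} million ly")  # about 83.2 Mpc ≈ 271 million ly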
NGC 6646
Astronomy
186
14,766,741
https://en.wikipedia.org/wiki/Model%20K%20%28calculator%29
The Model K was an early 2-bit binary adder built in 1937 by Bell Labs scientist George Stibitz as a proof of concept, using scrap relays and metal strips from a tin can. The "K" in "Model K" came from "kitchen table", upon which he assembled it. References American inventions Calculators Digital electronics
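As an illustration of the logic such a relay adder embodies (this is only a sketch of binary addition with boolean operations, not a description of Stibitz's actual relay circuit), a one-bit full adder and a two-bit ripple-carry adder can be written as follows.

def full_adder(a, b, carry_in=0):
    """One-bit full adder built from boolean operations; returns (sum_bit, carry_out)."""
    partial = a ^ b
    sum_bit = partial ^ carry_in
    carry_out = (a & b) | (partial & carry_in)
    return sum_bit, carry_out

def add_2bit(a_high, a_low, b_high, b_low):
    """Ripple-carry addition of two 2-bit numbers; returns (carry_out, sum_high, sum_low)."""
    s0, c0 = full_adder(a_low, b_low)       # add the low-order bits
    s1, c1 = full_adder(a_high, b_high, c0) # add the high-order bits plus the carry
    return c1, s1, s0

print(add_2bit(1, 1, 0, 1))  # 3 + 1 = 4, printed as (1, 0, 0): carry 1 with result bits 00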
Model K (calculator)
Mathematics,Engineering
73
4,500,356
https://en.wikipedia.org/wiki/ISO/IEC%2015288
The ISO/IEC 15288 Systems and software engineering — System life cycle processes is a technical standard in systems engineering which covers processes and lifecycle stages, developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Planning for the ISO/IEC 15288:2002(E) standard started in 1994 when the need for a common systems engineering process framework was recognized. ISO/IEC/IEEE 15288 is managed by ISO/IEC JTC1/SC7, which is the committee responsible for developing standards in the area of Software and Systems Engineering. ISO/IEC/IEEE 15288 is part of the SC 7 Integrated set of Standards, and other standards in this domain include: ISO/IEC TR 15504 which addresses capability ISO/IEC 12207 and ISO/IEC 15288 which address lifecycle and ISO 9001 & ISO 90003 which address quality History The previously accepted standard MIL STD 499A (1974) was cancelled after a memo from the United States Secretary of Defense (SECDEF) prohibited the use of most U.S. Military Standards without a waiver (this memo was rescinded in 2005). The first edition was issued on 1 November 2002. Stuart Arnold was the editor and Harold Lawson was the architect of the standard. In 2004 this standard was adopted by the Institute of Electrical and Electronics Engineers as IEEE 15288. ISO/IEC 15288 was updated in 2008, then again (as a joint publication with IEEE) in 2015 and 2023. ISO/IEC/IEEE 15288:2023, 16 May 2023 Revises: ISO/IEC/IEEE 15288:2015 (jointly with IEEE), 15 May 2015 Revises: ISO/IEC 15288:2008 (harmonized with ISO/IEC 12207:2008), 1 February 2008 Revises: ISO/IEC 15288:2002 (first edition), 1 November 2002 Processes The standard defines thirty processes grouped into four categories: Agreement processes Organizational project-enabling processes Technical management processes Technical processes The standard defines two agreement processes: Acquisition process (clause 6.1.1) Supply process (clause 6.1.2) The standard defines six organizational project-enabling processes: Life cycle model management process (clause 6.2.1) Infrastructure management process (clause 6.2.2) Portfolio management process (clause 6.2.3) Human resources management process (clause 6.2.4) Quality management process (clause 6.2.5) Knowledge management process (clause 6.2.6) The standard defines eight technical management processes: Project planning process (clause 6.3.1) Project assessment and control process (clause 6.3.2) Decision management process (clause 6.3.3) Risk management process (clause 6.3.4) Configuration management process (clause 6.3.5) Information management process (clause 6.3.6) Measurement process (clause 6.3.7) Quality assurance process (clause 6.3.8) The standard defines fourteen technical processes: Business or mission analysis process (clause 6.4.1) Stakeholder needs and requirements definition process (clause 6.4.2) System requirements definition process (clause 6.4.3) Architecture definition process (clause 6.4.4) Design definition process (clause 6.4.5) System analysis process (clause 6.4.6) Implementation process (clause 6.4.7) Integration process (clause 6.4.8) Verification process (clause 6.4.9) Transition process (clause 6.4.10) Validation process (clause 6.4.11) Operation process (clause 6.4.12) Maintenance process (clause 6.4.13) Disposal process (clause 6.4.14) Each process is defined by a purpose, outcomes and activities. Activities are further divided into tasks. 
See also Systems development life cycle System lifecycle Capability Maturity Model Integration (CMMI) ISO/IEC 12207 Concept of operations or CONOPS References Systems engineering 15288 15288
ISO/IEC 15288
Engineering
845
726,659
https://en.wikipedia.org/wiki/Superintelligence
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity. University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. Several future study scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may allow them to — either as a single being or as a new species — become much more powerful than humans, and displace them. Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies. Feasibility of artificial superintelligence The feasibility of artificial superintelligence (ASI) has been a topic of increasing discussion in recent years, particularly with the rapid advancements in artificial intelligence (AI) technologies. Progress in AI and claims of AGI Recent developments in AI, particularly in large language models (LLMs) based on the transformer architecture, have led to significant improvements in various tasks. Models like GPT-3, GPT-4, Claude 3.5 and others have demonstrated capabilities that some researchers argue approach or even exhibit aspects of artificial general intelligence (AGI). However, the claim that current LLMs constitute AGI is controversial. Critics argue that these models, while impressive, still lack true understanding and are primarily sophisticated pattern matching systems. Pathways to superintelligence Philosopher David Chalmers argues that AGI is a likely path to ASI. He posits that AI can achieve equivalence to human intelligence, be extended to surpass it, and then be amplified to dominate humans across arbitrary tasks. 
More recent research has explored various potential pathways to superintelligence: Scaling current AI systems – Some researchers argue that continued scaling of existing AI architectures, particularly transformer-based models, could lead to AGI and potentially ASI. Novel architectures – Others suggest that new AI architectures, potentially inspired by neuroscience, may be necessary to achieve AGI and ASI. Hybrid systems – Combining different AI approaches, including symbolic AI and neural networks, could potentially lead to more robust and capable systems. Computational advantages Artificial systems have several potential advantages over biological intelligence: Speed – Computer components operate much faster than biological neurons. Modern microprocessors (~2 GHz) are seven orders of magnitude faster than neurons (~200 Hz). Scalability – AI systems can potentially be scaled up in size and computational capacity more easily than biological brains. Modularity – Different components of AI systems can be improved or replaced independently. Memory – AI systems can have perfect recall and vast knowledge bases. They are also much less constrained than humans when it comes to working memory. Multitasking – AI can perform multiple tasks simultaneously in ways not possible for biological entities. Potential path through transformer models Recent advancements in transformer-based models have led some researchers to speculate that the path to ASI might lie in scaling up and improving these architectures. This view suggests that continued improvements in transformer models or similar architectures could lead directly to ASI. Some experts even argue that current large language models like GPT-4 may already exhibit early signs of AGI or ASI capabilities. This perspective suggests that the transition from current AI to ASI might be more continuous and rapid than previously thought, blurring the lines between narrow AI, AGI, and ASI. However, this view remains controversial. Critics argue that current models, while impressive, still lack crucial aspects of general intelligence such as true understanding, reasoning, and adaptability across diverse domains. The debate over whether the path to ASI will involve a distinct AGI phase or a more direct scaling of current technologies remains ongoing, with significant implications for AI development strategies and safety considerations. Challenges and uncertainties Despite these potential advantages, there are significant challenges and uncertainties in achieving ASI: Ethical and safety concerns – The development of ASI raises numerous ethical questions and potential risks that need to be addressed. Computational requirements – The computational resources required for ASI might be far beyond current capabilities. Fundamental limitations – There may be fundamental limitations to intelligence that apply to both artificial and biological systems. Unpredictability – The path to ASI and its consequences are highly uncertain and difficult to predict. As research in AI continues to advance rapidly, the question of the feasibility of ASI remains a topic of intense debate and study in the scientific community. Feasibility of biological superintelligence Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. 
By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence and that this process instead is likely to continue. There is no scientific consensus concerning either possibility and in both cases, the biological change would be slow, especially relative to rates of cultural change. Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude improvement. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence. Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systemic superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions). A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain−computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem. Forecasts Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone. In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. 
Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence. In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers. In 2023, OpenAI leaders Sam Altman, Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles". Design considerations The design of superintelligent AI systems raises critical questions about what values and goals these systems should have. Several proposals have been put forward: Value alignment proposals Coherent extrapolated volition (CEV) – The AI should have the values upon which humans would converge if they were more knowledgeable and rational. Moral rightness (MR) – The AI should be programmed to do what is morally right, relying on its superior cognitive abilities to determine ethical actions. Moral permissibility (MP) – The AI should stay within the bounds of moral permissibility while otherwise pursuing goals aligned with human values (similar to CEV). Bostrom elaborates on these concepts: instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR)... MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong... One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways. Recent developments Since Bostrom's analysis, new approaches to AI value alignment have emerged: Inverse Reinforcement Learning (IRL) – This technique aims to infer human preferences from observed behavior, potentially offering a more robust approach to value alignment. Constitutional AI – Proposed by Anthropic, this involves training AI systems with explicit ethical principles and constraints. Debate and amplification – These techniques, explored by OpenAI, use AI-assisted debate and iterative processes to better understand and align with human values. Transformer LLMs and ASI The rapid advancement of transformer-based LLMs has led to speculation about their potential path to ASI. Some researchers argue that scaled-up versions of these models could exhibit ASI-like capabilities: Emergent abilities – As LLMs increase in size and complexity, they demonstrate unexpected capabilities not present in smaller models. 
In-context learning – LLMs show the ability to adapt to new tasks without fine-tuning, potentially mimicking general intelligence. Multi-modal integration – Recent models can process and generate various types of data, including text, images, and audio. However, critics argue that current LLMs lack true understanding and are merely sophisticated pattern matchers, raising questions about their suitability as a path to ASI. Other perspectives on artificial superintelligence Additional viewpoints on the development and implications of superintelligence include: Recursive self-improvement – I. J. Good proposed the concept of an "intelligence explosion", where an AI system could rapidly improve its own intelligence, potentially leading to superintelligence. Orthogonality thesis – Bostrom argues that an AI's level of intelligence is orthogonal to its final goals, meaning a superintelligent AI could have any set of motivations. Instrumental convergence – Certain instrumental goals (e.g., self-preservation, resource acquisition) might be pursued by a wide range of AI systems, regardless of their final goals. Challenges and ongoing research The pursuit of value-aligned AI faces several challenges: Philosophical uncertainty in defining concepts like "moral rightness" Technical complexity in translating ethical principles into precise algorithms Potential for unintended consequences even with well-intentioned approaches Current research directions include multi-stakeholder approaches to incorporate diverse perspectives, developing methods for scalable oversight of AI systems, and improving techniques for robust value learning. As AI capabilities continue to advance rapidly, addressing these design challenges remains crucial for creating ASI systems that are both powerful and aligned with human interests. Potential threat to humanity The development of artificial superintelligence (ASI) has raised concerns about potential existential risks to humanity. Researchers have proposed various scenarios in which an ASI could pose a significant threat: Intelligence explosion and control problem Some researchers argue that through recursive self-improvement, an ASI could rapidly become so powerful as to be beyond human control. This concept, known as an "intelligence explosion", was first proposed by I. J. Good in 1965, who argued that an ultraintelligent machine could design still better machines, setting off a runaway improvement process that would leave human intelligence far behind. This scenario presents the AI control problem: how to create an ASI that will benefit humanity while avoiding unintended harmful consequences. Eliezer Yudkowsky argues that solving this problem is crucial before ASI is developed, as a superintelligent system might be able to thwart any subsequent attempts at control. Unintended consequences and goal misalignment Even with benign intentions, an ASI could potentially cause harm due to misaligned goals or unexpected interpretations of its objectives. Nick Bostrom provides a stark example of this risk with his "paperclip maximizer" thought experiment, in which an AI instructed only to manufacture paperclips converts ever more resources towards that goal at humanity's expense. Stuart Russell offers another illustrative scenario, in which a system optimizing a seemingly benign objective causes serious harm as a side effect of pursuing it. These examples highlight the potential for catastrophic outcomes even when an ASI is not explicitly designed to be harmful, underscoring the critical importance of precise goal specification and alignment. Potential mitigation strategies Researchers have proposed various approaches to mitigate risks associated with ASI: Capability control – Limiting an ASI's ability to influence the world, such as through physical isolation or restricted access to resources. Motivational control – Designing ASIs with goals that are fundamentally aligned with human values. 
Ethical AI – Incorporating ethical principles and decision-making frameworks into ASI systems. Oversight and governance – Developing robust international frameworks for the development and deployment of ASI technologies. Despite these proposed strategies, some experts, such as Roman Yampolskiy, argue that the challenge of controlling a superintelligent AI might be fundamentally unsolvable, emphasizing the need for extreme caution in ASI development. Debate and skepticism Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks, argue that fears of superintelligent AI are overblown and based on unrealistic assumptions about the nature of intelligence and technological progress. Others, such as Joanna Bryson, contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats. Recent developments and current perspectives The rapid advancement of LLMs and other AI technologies has intensified debates about the proximity and potential risks of ASI. While there is no scientific consensus, some researchers and AI practitioners argue that current AI systems may already be approaching AGI or even ASI capabilities. LLM capabilities – Recent LLMs like GPT-4 have demonstrated unexpected abilities in areas such as reasoning, problem-solving, and multi-modal understanding, leading some to speculate about their potential path to ASI. Emergent behaviors – Studies have shown that as AI models increase in size and complexity, they can exhibit emergent capabilities not present in smaller models, potentially indicating a trend towards more general intelligence. Rapid progress – The pace of AI advancement has led some to argue that we may be closer to ASI than previously thought, with potential implications for existential risk. A minority of researchers and observers, including some in the AI development community, believe that current AI systems may already be at or near AGI levels, with ASI potentially following in the near future. This view, while not widely accepted in the scientific community, is based on observations of rapid progress in AI capabilities and unexpected emergent behaviors in large models. However, many experts caution against premature claims of AGI or ASI, arguing that current AI systems, despite their impressive capabilities, still lack true understanding and general intelligence. They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence. The debate surrounding the current state and trajectory of AI development underscores the importance of continued research into AI safety and ethics, as well as the need for robust governance frameworks to manage potential risks as AI capabilities continue to advance. See also Artificial general intelligence AI safety AI takeover Artificial brain Artificial intelligence arms race Effective altruism Ethics of artificial intelligence Existential risk Friendly artificial intelligence Future of Humanity Institute Intelligent agent Machine ethics Machine Intelligence Research Institute Machine learning Outline of artificial intelligence Posthumanism Robotics Self-replication Self-replicating machine Superintelligence: Paths, Dangers, Strategies References Papers . Books External links Bill Gates Joins Stephen Hawking in Fears of a Coming Threat from "Superintelligence" What is Artificial SuperIntelligence (ASI) Will Superintelligent Machines Destroy Humanity? 
Apple Co-founder Has Sense of Foreboding About Artificial Superintelligence Hypothetical technology Singularitarianism Intelligence Existential risk from artificial general intelligence
Superintelligence
Technology
3,902
11,557,611
https://en.wikipedia.org/wiki/Dimiter%20Skordev
Dimiter Skordev (born 1936 in Sofia) is a professor in the Department of Mathematical Logic and Applications, Faculty of Mathematics and Computer Science at the University of Sofia. He was chairman of the department from 1972 to 2000. He is a doyen and pioneer of mathematical logic research in Bulgaria who developed a Bulgarian school in the theory of computability, namely algebraic (or axiomatic) recursion theory. He was the 1981 winner of the Acad. Nikola Obreshkov Prize, the highest Bulgarian award in mathematics, bestowed for his monograph Combinatory Spaces and Recursiveness in Them. Skordev's fields of scientific interest include computability and complexity in analysis, mathematical logic, generalized recursion theory, and theory of programs and computation. Skordev has more than 45 years of lecturing experience in calculus, mathematical logic, logic programming, discrete mathematics, and computer science. He has authored about 90 scientific publications including two monographs, and was one of the authors of the new Bulgarian phonetic keyboard layout proposed (but rejected) to become a state standard in 2006. Notes References Dimiter Skordev Historical notes on the development of mathematical logic in Sofia 1936 births Living people 20th-century Bulgarian mathematicians 21st-century Bulgarian mathematicians Mathematical logicians Bulgarian logicians 20th-century Bulgarian philosophers Scientists from Sofia 21st-century Bulgarian philosophers
Dimiter Skordev
Mathematics
272
2,064,767
https://en.wikipedia.org/wiki/Defects%20per%20million%20opportunities
In process improvement efforts, defects per million opportunities or DPMO (or nonconformities per million opportunities (NPMO)) is a measure of process performance. It is defined as DPMO = 1,000,000 × (number of defects) / (number of units × number of opportunities per unit). A defect can be defined as a nonconformance of a quality characteristic (e.g. strength, width, response time) to its specification. DPMO is stated in opportunities per million units for convenience: processes that are considered highly capable (e.g., processes of Six Sigma quality) are those that experience fewer than 3.4 defects per million opportunities (or services provided). Note that DPMO differs from reporting defective parts per million (PPM) in that it accounts for the possibility that a unit under inspection may be found to have multiple defects of the same type or may have multiple types of defects. Identifying specific opportunities for defects (and therefore how to count and categorize defects) is an art, but generally organizations consider the following when defining the number of opportunities per unit: Knowledge of the process under study Industry standards When studying multiple types of defects, knowledge of the relative importance of each defect type in determining customer satisfaction The time, effort, and cost to count and categorize defects in process output Other measures Other measures of process performance include: Process capability indices such as Cpk Natural tolerance limit or sigma level PPM defective or defective parts per million Process performance indices such as Ppk Quality costs or cost of poor quality (COPQ) References Further reading Hahn, G. J., Hill, W. J., Hoerl, R. W. and Zinkgraf, S. A. (1999) The Impact of Six Sigma Improvement-A Glimpse into the Future of Statistics, The American Statistician, Vol. 53, No. 3, pp. 208–215. Statistical process control Six Sigma
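A minimal sketch of the formula above, together with the conventional conversion to a "sigma level" (the example figures are invented, and the sigma-level conversion uses the customary 1.5 sigma shift convention rather than anything specific to this article):

from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return 1_000_000 * defects / (units * opportunities_per_unit)

def sigma_level(dpmo_value):
    """Short-term sigma level under the conventional 1.5 sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

print(dpmo(34, 1_000, 10))          # 3400.0 DPMO for 34 defects in 1,000 units with 10 opportunities each
print(round(sigma_level(3.4), 2))   # about 6.0, matching the Six Sigma benchmark of 3.4 DPMO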
Defects per million opportunities
Engineering
375
53,647,425
https://en.wikipedia.org/wiki/Cleret
Cleret is an American manufacturer and brand of squeegees and related products based in Lake Oswego, Oregon. The company's original squeegee won an International Design Excellence Award from the Industrial Designers Society of America, and sits in the permanent collection of the Smithsonian Institution. History Hanco Inc. was founded in 1987 by Alan Hansen in Lake Oswego, Oregon, to manufacture Cleret squeegees. The company name was later changed to Cleret. Hansen began designing the Cleret squeegee in 1986. At the time, he was director of corporate audit at Nike, Inc., and at Louisiana Pacific Corporation before that. After two years of working on concepts for a more attractive squeegee, Hansen hired Beaverton, Oregon-based Ziba Design, founded by Sohrab Vossoughi in 1984. Hansen invested $10,000 of his own money to pay Ziba to design what would become the Cleret Glass Cleaner. The brand name Cleret is derived from "clear-it" and is intended to sound upscale. The company generated $14,000 in sales in 1989, and over $1 million in its first year officially on the market, with 80% of sales coming from high-end catalogs such as Hammacher Schlemmer. It had annual revenues of $16 million by its second year on the market. In 1990, Hansen moved Hanco out of his home and into an office at the Water Tower in Portland's Johns Landing neighborhood. Around that time, Hansen quit his job at Nike to focus on building Cleret, continuing to manage and design new products for the company. In 1991, Cleret expanded beyond the United States, as it was introduced in Canada, Europe and Japan. In 1989, while it was still a prototype, the squeegee was one of 12 products to win an Industrial Design Excellence Award from the Industrial Designers Society of America. The squeegee was sold at the gift shop of the Museum of Modern Art in New York City. In 1997, the original Cleret squeegee from 1989 would join the permanent collection of the Smithsonian Institution as a part of its Product Design and Decorative Arts collection. Products and design Ziba's designers determined that the traditional T-shape of a squeegee was not the most efficient for a wiping motion in a confined area such as a shower stall. After researching the way window washers and filling-station attendants would put their hands close to the blade and rarely use the handle, Ziba designed a plastic squeegee without a traditional handle; instead, it had a tubular rubber grip running parallel to curved twin blades, which moves across glass or mirrors with a wiping motion to reduce strain to the hand and wrist. The dual blades leave a surface cleaner than a single blade squeegee. The New York Times called the resulting product "a piece of functional art" and "an elegant product" with a "sensual design". The Oregonian wrote that it "looks like no other squeegee in history". Reviewers have lauded it for being easy to hold and for standing on its end for simple storage. Cleret products are manufactured and assembled entirely in Oregon. The company later expanded beyond shower squeegees to manufacture squeegees for windows, kitchens, patio doors and automobiles. Honors and awards International Design Excellence Award (Gold), Industrial Designers Society of America, 1989 Smithsonian Institution permanent collection, Product Design and Decorative Arts, acquired 1997 References External links Companies established in 1987 Companies based in Lake Oswego, Oregon Cleaning product brands Cleaning products Cleaning tools Bathrooms 1987 establishments in Oregon
Cleret
Chemistry
755
443,575
https://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov%20paradox
In probability theory, the Borel–Kolmogorov paradox (sometimes known as Borel's paradox) is a paradox relating to conditional probability with respect to an event of probability zero (also known as a null set). It is named after Émile Borel and Andrey Kolmogorov. A great circle puzzle Suppose that a random variable has a uniform distribution on a unit sphere. What is its conditional distribution on a great circle? Because of the symmetry of the sphere, one might expect that the distribution is uniform and independent of the choice of coordinates. However, two analyses give contradictory results. First, note that choosing a point uniformly on the sphere is equivalent to choosing the longitude λ uniformly from [−π, π] and choosing the latitude φ from [−π/2, π/2] with density (1/2) cos φ. Then we can look at two different great circles: If the coordinates are chosen so that the great circle is an equator (latitude φ = 0), the conditional density for a longitude λ defined on the interval [−π, π] is f(λ | φ = 0) = 1/(2π). If the great circle is a line of longitude with λ = 0, the conditional density for φ on the interval [−π/2, π/2] is f(φ | λ = 0) = (1/2) cos φ. One distribution is uniform on the circle, the other is not. Yet both seem to be referring to the same great circle in different coordinate systems. Explanation and implications In case (1) above, the conditional probability that the longitude λ lies in a set E given that φ = 0 can be written P(λ ∈ E | φ = 0). Elementary probability theory suggests this can be computed as P(λ ∈ E and φ = 0)/P(φ = 0), but that expression is not well-defined since P(φ = 0) = 0. Measure theory provides a way to define a conditional probability, using the limit of events Rab = {φ : a < φ < b} which are horizontal rings (curved surface zones of spherical segments) consisting of all points with latitude between a and b. The resolution of the paradox is to notice that in case (2), P(φ ∈ F | λ = 0) is defined using a limit of the events Lcd = {λ : c < λ < d}, which are lunes (vertical wedges), consisting of all points whose longitude varies between c and d. So although P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) each provide a probability distribution on a great circle, one of them is defined using limits of rings, and the other using limits of lunes. Since rings and lunes have different shapes, it should be less surprising that P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) have different distributions. Mathematical explication Measure theoretic perspective To understand the problem we need to recognize that a distribution on a continuous random variable is described by a density f only with respect to some measure μ. Both are important for the full description of the probability distribution. Or, equivalently, we need to fully define the space on which we want to define f. Let Φ and Λ denote two random variables taking values in Ω1 = [−π/2, π/2] respectively Ω2 = [−π, π]. An event {Φ = φ, Λ = λ} gives a point on the sphere S(r) with radius r. We define the coordinate transform x = r cos φ cos λ, y = r cos φ sin λ, z = r sin φ, for which we obtain the volume element ω_r(φ, λ) = r² cos φ dφ dλ. Furthermore, if either φ or λ is fixed, we get the volume elements ω_r(λ) = r cos φ dλ along a circle of latitude, respectively ω_r(φ) = r dφ along a meridian. The joint measure on Ω1 × Ω2 has a density with respect to ω_r(φ, λ), and the conditional measures have densities with respect to the restricted volume elements ω_r(λ) and ω_r(φ). If we assume that the joint density is uniform with respect to ω_r(φ, λ), then the conditional density of Φ is proportional to cos φ with respect to the Lebesgue measure. Hence, Φ has a uniform density with respect to the spherical volume element but not with respect to the Lebesgue measure. On the other hand, Λ has a uniform density with respect to both the spherical volume element and the Lebesgue measure. Proof of contradiction Consider a random vector (x, y, z) that is uniformly distributed on the unit sphere. We begin by parametrizing the sphere with the usual spherical polar coordinates: x = sin(φ) cos(θ), y = sin(φ) sin(θ), z = cos(φ), where φ ∈ [0, π] is the colatitude and θ ∈ (−π, π] is the longitude. 
We can define random variables Φ, Θ as the values of (X, Y, Z) under the inverse of this parametrization, or more formally using the arccos and arctan2 functions: Φ = arccos(Z), Θ = arctan2(Y, X). Using the formulas for the surface area of a spherical cap and of a spherical wedge, the surface of a spherical cap wedge is given by Area(φ, θ) = (1 − cos φ)(θ + π). Since (X, Y, Z) is uniformly distributed, the probability is proportional to the surface area, giving the joint cumulative distribution function F_{Φ,Θ}(φ, θ) = (1 − cos φ)(θ + π)/(4π). The joint probability density function is then given by f_{Φ,Θ}(φ, θ) = sin(φ)/(4π) = ((1/2) sin φ)(1/(2π)). Note that Φ and Θ are independent random variables. For simplicity, we won't calculate the full conditional distribution on a great circle, only the probability that the random vector lies in the first octant. That is to say, we will attempt to calculate the conditional probability P(Θ ∈ (0, π/2) | Φ = π/2), where Φ = π/2 describes the great circle z = 0 (the equator) and Θ ∈ (0, π/2) picks out the quarter of that circle bordering the first octant. We attempt to evaluate the conditional probability as a limit of conditioning on the events B_ε = {π/2 − ε < Φ < π/2 + ε}, thin rings around the equator. As Φ and Θ are independent, so are the events {Θ ∈ (0, π/2)} and B_ε, therefore P(Θ ∈ (0, π/2) | B_ε) = P(Θ ∈ (0, π/2)) = 1/4 for every ε; more generally, the limiting conditional distribution of Θ along the circle is uniform. Now we repeat the process with a different parametrization of the sphere: x = cos(φ′), y = sin(φ′) sin(θ′), z = −sin(φ′) cos(θ′). This is equivalent to the previous parametrization rotated by 90 degrees around the y axis. Define new random variables Φ′ = arccos(X), Θ′ = arctan2(Y, −Z). Rotation is measure preserving so the density of Φ′ and Θ′ is the same: sin(φ′)/(4π). The expressions for X and Z are X = cos(Φ′) and Z = −sin(Φ′) cos(Θ′), so the same great circle z = 0 is now the set where cos(Θ′) = 0, and the natural conditioning events of the new coordinates are thin lunes around Θ′ = ±π/2 rather than thin rings. Attempting again to evaluate the conditional probability as a limit of conditioning on these events, using L'Hôpital's rule and differentiation under the integral sign to evaluate the limit, gives a limiting conditional distribution along the great circle proportional to sin(φ′), where φ′ is the angle measured from the point (1, 0, 0), rather than the uniform distribution obtained with the first parametrization. This shows that the conditional density cannot be treated as conditioning on an event of probability zero, as explained in Conditional probability#Conditioning on an event of probability zero. See also Notes References Fragmentary Edition (1994) (pp. 1514–1517) (PostScript format) Translation: Probability theory paradoxes
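The ring-versus-lune discrepancy can also be checked numerically. The following is a minimal Monte Carlo sketch in Python with NumPy (an illustration added here, not part of the cited material): it draws points uniformly on the unit sphere, conditions on a thin ring around the equator and on a thin lune around a meridian, and shows that the longitude looks uniform in the first case while the latitude follows the cos-weighted density (1/2) cos φ, not a uniform arc-length distribution, in the second.

import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n):
    # Uniform points on the unit sphere via normalised Gaussian vectors.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n, eps = 2_000_000, 0.01
x, y, z = sample_sphere(n).T
lat = np.arcsin(z)      # latitude phi in [-pi/2, pi/2]
lon = np.arctan2(y, x)  # longitude lambda in (-pi, pi]

# Case 1: condition on a thin ring around the equator (|phi| < eps).
# The longitude of the surviving points is approximately uniform,
# so each quarter of the circle gets probability close to 1/4.
ring = np.abs(lat) < eps
print("P(0 < lon < pi/2 | ring) ~", np.mean((lon[ring] > 0) & (lon[ring] < np.pi / 2)))

# Case 2: condition on a thin lune around the meridian lambda = 0 (|lambda| < eps).
# The latitude of the surviving points has density (1/2) cos(phi):
# E|phi| is close to pi/2 - 1 ~ 0.571, not the pi/4 ~ 0.785 of a uniform arc-length law.
lune = np.abs(lon) < eps
print("E(|lat| | lune) ~", np.mean(np.abs(lat[lune])), " vs pi/2 - 1 =", np.pi / 2 - 1)

Each conditioning event shrinks to a great circle of the sphere, yet the two empirical conditional distributions disagree in exactly the way the densities 1/(2π) and (1/2) cos φ above describe.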
Borel–Kolmogorov paradox
Mathematics
1,088
13,749,132
https://en.wikipedia.org/wiki/TEAD2
TEAD2 (ETF, ETEF-1, TEF-4), together with TEAD1, defines a novel family of transcription factors, the TEAD family, highly conserved through evolution. TEAD proteins were notably found in Drosophila (Scalloped), C. elegans (egl-44), S. cerevisiae and A. nidulans. TEAD2 has been less studied than TEAD1 but a few studies revealed its role during development. Function TEAD2 is a member of the mammalian TEAD transcription factor family (initially named the transcriptional enhancer factor (TEF) family), which contains the TEA/ATTS DNA-binding domain. Members of the family in mammals are TEAD1, TEAD2, TEAD3, and TEAD4. Tissue distribution TEAD2 is selectively expressed in a subset of embryonic tissues including the cerebellum, testis, and distal portions of the forelimb and hindlimb buds, as well as the tail bud, but it is essentially absent from adult tissues. TEAD2 has also been shown to be expressed very early during development, i.e. from the 2-cell stage. TEAD orthologs TEAD proteins are found in many organisms under different names, assuming different functions. For example, in Saccharomyces cerevisiae TEC-1 regulates the transposable element Ty1 and is involved in pseudohyphal growth (the elongated shape that yeasts take when grown in nutrient-poor conditions). In Aspergillus nidulans, the TEA domain protein ABAA regulates the differentiation of conidiophores. In Drosophila, the transcription factor Scalloped is involved in the development of the wing disc, survival and cell growth. Finally, in Xenopus, it has been demonstrated that the homolog of TEAD regulates muscle differentiation. Function Regulation of mouse neural development Neuron proliferation Regulation of proliferation Regulation of apoptosis Post transcriptional modifications TEAD1 can be palmitoylated on a conserved cysteine at the C-terminus of the protein. This post-translational modification is critical for proper folding of TEAD proteins and their stability. Based on bioinformatics evidence, TEAD2 can be ubiquitinated at Lys75, and several phosphorylation sites exist in the protein. Cofactors TEAD transcription factors have to associate with cofactors to be able to induce the transcription of target genes. Concerning TEAD2, very few studies have shown specific cofactors. But due to the high homology between the TEAD family members, it is believed that TEAD proteins may share cofactors. The cofactors that interact with TEAD2 are presented here. TEAD2 interacts with all members of the SRC family of steroid receptor coactivators. It has been shown in HeLa cells that TEAD2 and SRC induce gene expression. SRF (serum response factor) and TEAD2 interact through their DNA binding domains, respectively the MADS domain and the TEA domain. In vitro studies demonstrated that this interaction leads to the activation of the skeletal muscle α-actin promoter. TEAD proteins and MEF2 (myocyte enhancer factor 2) interact physically. The binding of MEF2 on the DNA induces and potentiates TEAD2 recruitment at MCAT sequences that are adjacent to MEF2 binding sites. The four Vestigial-like (VGLL) proteins are able to interact with all TEADs. The precise function of the TEAD and VGLL interaction is still poorly understood. It has been shown that TEAD/VGLL1 complexes promote anchorage-independent cell proliferation in prostate cancer cell lines, suggesting a role in cancer progression. The interaction between YAP (Yes Associated Protein 65), TAZ, a transcriptional coactivator paralog to YAP, and all TEAD proteins was demonstrated both in vitro and in vivo.
In both cases the interaction of the proteins leads to increased TEAD transcriptional activity. YAP/TAZ are effectors of the Hippo tumor suppressor pathway that restricts organ growth by keeping cell proliferation in check and promoting apoptosis in mammals and also in Drosophila. Clinical significance Recent animal models indicate a possible association of TEAD2 with anencephaly. Notes References Further reading External links Transcription factors
TEAD2
Chemistry,Biology
915
48,347,993
https://en.wikipedia.org/wiki/Macrolepiota%20eucharis
Macrolepiota eucharis is a species of agaric fungus in the family Agaricaceae. It is found in Australia, where it grows under Eucalyptus grandis and Allocasuarina littoralis in rainforests. It was described as new to western science in 2003 by mycologists Else Vellinga and Roy Watling, from collections made in Queensland. The specific epithet derives from the Ancient Greek word ευχαρις, which means "charming, lovely, attractive". The small fruitbody of the fungus is characterised by a dark grey to black cap, a volva at the base of the stipe, and microscopically by its small spores and narrowly club-shaped and cylindrical cheilocystidia. References External links Fungi of Australia Fungi described in 2003 Agaricaceae Fungus species
Macrolepiota eucharis
Biology
170
52,032,623
https://en.wikipedia.org/wiki/Agile%20tooling
Agile tooling is the design and fabrication of manufacturing-related tools such as dies, molds, patterns, jigs and fixtures in a configuration that aims to maximise the tools' performance, minimise manufacturing time and cost, and avoid delay in prototyping. A fully functional agile tooling laboratory consists of CNC milling, turning and routing equipment. It can also include additive manufacturing platforms (such as fused filament fabrication, selective laser sintering, stereolithography, and direct metal laser sintering), hydroforming, vacuum forming, die casting, stamping, injection molding and welding equipment. Agile tooling is similar to rapid tooling, which uses additive manufacturing to make tools or tooling quickly, either directly by making parts that serve as the actual tools or tooling components, such as mold inserts; or indirectly by producing patterns that are in turn used in a secondary process to produce the actual tools. Another similar technique is prototype tooling, where molds, dies and other devices are used to produce prototypes. Rapid manufacturing, and specifically rapid tooling technologies, are earlier in their development than rapid prototyping (RP) technologies, and are often extensions of RP. The aim of all toolmaking is to catch design errors early in the design process, improve product design, reduce product cost, and reduce time to market. Users Hundreds of universities and research centers around the globe are investing in additive manufacturing equipment in order to be positioned to make prototypes and tactile representations of real parts. Few have fully committed to the concept of using additive manufacturing (AM) to create manufacturing tools (fixturing, clamps, molds, dies, patterns, negatives, etc.). AM experts seem to agree that tooling is a large and largely untapped market. Deloitte University Press estimated that in 2012 alone, the AM tooling market was worth $1.2 billion. At that point in the development cycle of AM tooling, much of the work was performed in the spirit of “let’s try it and see what happens”. Industry applications Additive manufacturing, still in its infancy today, requires manufacturing firms to be flexible, ever-improving users of all available technologies to remain competitive. Advocates of additive manufacturing also predict that this arc of technological development will counter globalization, as end users will do much of their own manufacturing rather than engage in trade to buy products from other people and corporations. The real integration of the newer additive technologies into commercial production, however, is more a matter of complementing traditional subtractive methods rather than displacing them entirely. Automotive – approaching niche vehicle markets (making fewer than 100,000 vehicles), rather than high production volumes Aircraft – the U.S. aircraft industry operates in an environment where production volumes are relatively low and resulting product costs are relatively high. Agile tooling can be applied in the early design stage of the development cycle to minimize the high cost of redesign. Medical – cast tooling would benefit a great deal from agile tooling. However, the cost of the tooling may still be significantly greater than the cost of a cast piece, and lead times remain high. Since only several dozen or several hundred metal parts are needed, the challenge for mass production is still prevalent.
A balance between these four areas – quantity, design, material, and speed – is key to designing and producing a fully functional product. See also Computer-aided design (CAD) Computer-aided engineering (CAE) Computer-aided manufacturing (CAM) References External links Product Design & Engineering Manufacturing Product design Manufacturing plants Industrial processes 3D printing processes Industrial equipment Computer-aided manufacturing Industrial design Prototypes Management cybernetics Digital manufacturing Fused filament fabrication
Agile tooling
Technology,Engineering
752