id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
469,760 | https://en.wikipedia.org/wiki/Metric%20map | In the mathematical theory of metric spaces, a metric map is a function between metric spaces that does not increase any distance. These maps are the morphisms in the category of metric spaces, Met. Such functions are always continuous functions.
They are also called Lipschitz functions with Lipschitz constant 1, nonexpansive maps, nonexpanding maps, weak contractions, or short maps.
Specifically, suppose that $(X, d_X)$ and $(Y, d_Y)$ are metric spaces and $f$ is a function from $X$ to $Y$. Thus we have a metric map when, for any points $x$ and $y$ in $X$,
$$d_Y(f(x), f(y)) \le d_X(x, y).$$
Here $d_X$ and $d_Y$ denote the metrics on $X$ and $Y$ respectively.
Examples
Consider the metric space $[0, 1/2]$ with the Euclidean metric. Then the function $f(x) = x^2$ is a metric map, since for $x \neq y$, $|f(x) - f(y)| = |x + y| \cdot |x - y| < |x - y|$.
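As a quick numerical illustration of the definition (an addition to the article text; the helper name is ours), the sketch below samples random pairs from $[0, 1/2]$ and checks that $f(x) = x^2$ never increases the Euclidean distance:

```python
import random

def is_metric_map(f, points):
    """Check d_Y(f(x), f(y)) <= d_X(x, y) for every pair of sample points,
    using the Euclidean metric d(a, b) = |a - b| on the real line."""
    return all(abs(f(x) - f(y)) <= abs(x - y) for x in points for y in points)

# On [0, 1/2], f(x) = x^2 is a metric map:
# |x^2 - y^2| = |x + y| * |x - y| <= |x - y|, since x + y <= 1 there.
samples = [random.uniform(0, 0.5) for _ in range(200)]
print(is_metric_map(lambda x: x * x, samples))  # expected: True
```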
Category of metric maps
The function composition of two metric maps is another metric map, and the identity map on a metric space is a metric map, which is also the identity element for function composition. Thus metric spaces together with metric maps form a category Met. Met is a subcategory of the category of metric spaces and Lipschitz functions. A map between metric spaces is an isometry if and only if it is a bijective metric map whose inverse is also a metric map. Thus the isomorphisms in Met are precisely the isometries.
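That composition preserves the metric-map property follows from applying the defining inequality twice; for metric maps $f : X \to Y$ and $g : Y \to Z$ (a one-line derivation added here for clarity):

$$d_Z\bigl(g(f(x)), g(f(y))\bigr) \le d_Y\bigl(f(x), f(y)\bigr) \le d_X(x, y).$$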
Strictly metric maps
One can say that $f$ is strictly metric if the inequality is strict for every two distinct points: $d_Y(f(x), f(y)) < d_X(x, y)$ whenever $x \neq y$. Thus a contraction mapping is strictly metric, but the converse does not necessarily hold. Note that an isometry is never strictly metric, except in the degenerate case of the empty space or a single-point space.
Multivalued version
A mapping $T : X \to \mathcal{N}(X)$ from a metric space $X$ to the family $\mathcal{N}(X)$ of nonempty subsets of $X$ is said to be Lipschitz if there exists $K \ge 0$ such that
$$H(Tx, Ty) \le K \, d(x, y)$$
for all $x, y \in X$, where $H$ is the Hausdorff distance. When $K = 1$, $T$ is called nonexpansive, and when $K < 1$, $T$ is called a contraction.
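For intuition, here is a small sketch (the helper names are ours, and the map $T$ is a made-up example) that computes the Hausdorff distance between finite subsets of the real line and tests the Lipschitz condition for one multivalued map:

```python
def hausdorff(A, B):
    """Hausdorff distance between nonempty finite subsets A, B of the real line."""
    d = lambda S, T: max(min(abs(s - t) for t in T) for s in S)
    return max(d(A, B), d(B, A))

# T(x) = {x/2, x/2 + 1} is a multivalued contraction with K = 1/2:
# T(y) is T(x) translated by (y - x)/2, so H(Tx, Ty) = |x - y| / 2.
T = lambda x: [x / 2, x / 2 + 1]
x, y = 0.3, 4.0
print(hausdorff(T(x), T(y)) <= 0.5 * abs(x - y))  # expected: True
```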
See also
References
Lipschitz maps
Metric geometry
Theory of continuous functions | Metric map | Mathematics | 397 |
6,219,166 | https://en.wikipedia.org/wiki/Starry%20Plough%20%28flag%29 | The Starry Plough banner (An Camchéachta) is a flag which was originally used by the Irish Citizen Army, a socialist Irish republican movement, and subsequently adopted by other Irish political organizations.
Composition
The original Starry Plough was designed by William H. Megahy for the Irish Citizen Army, though the concept may have originated with George William Russell, and showed silver stars on a green background. The flag depicts an asterism (an identifiable part of a constellation) of the constellation Ursa Major, called The Plough (or "Starry Plough") in Ireland and Britain, the Big Dipper in North America, and various other names worldwide. Two of the Plough's seven stars point to Polaris, the North Star. James Connolly, co-founder of the Irish Citizen Army with Jack White and James Larkin, said the significance of the banner was that a free Ireland would control its own destiny from the plough to the stars.
The sword as the ploughshare is also a biblical reference, to Isaiah 2:3-4, in which God urges his followers to turn their weapons into tools, turning the means of war into the means of peace. The marriage of Catholic tradition (the biblical reference being integral to the flag's design) with socialist concepts, such as a working class forced by its oppressor to take up its ploughshares as arms, gives the Starry Plough flag complex and nuanced implications that admit a very wide range of interpretations.
History
The original Starry Plough was unveiled on 5 April 1914 and flown over the Imperial Hotel by the Irish Citizen Army during the 1916 Easter Rising. The 1916 flag is on display at the National Museum, Collins Barracks, in Dublin.
At public performances of The Plough and the Stars, the Seán O'Casey play which takes its name from the flag, riots were known to break out when the Starry Plough appeared.
During the 1930s the design changed to a blue banner which was designed by members of the Republican Congress, and was adopted as the emblem of the Irish Labour movement, including the Labour Party. Labour adopted the rose as its official emblem in 1991 but continued to use the Starry Plough for ceremonial occasions, and in 2021 the party reverted to using the Starry Plough as their primary symbol (this time with white stars on a red background). It is also used by Irish republicans and has been carried alongside the Irish tricolour and Irish provincial flags and the sunburst flag, as well as the red flag at Provisional IRA, Continuity IRA, Real IRA, Official IRA, Irish People's Liberation Organisation and Irish National Liberation Army rallies and funerals.
The flag, and alternative versions of it, are also used by Saoradh, Éirígí, the Connolly Youth Movement, Republican Sinn Féin, Labour Youth, Ógra Shinn Féin, Communist Party of Ireland, the Republican Socialist Youth Movement, and socialist Celtic F.C. supporters. In the past it was used by the Sligo–Leitrim Independent Socialist Organisation before it merged with the Irish Labour Party. The flag was draped on the coffin of the Independent TD Tony Gregory during his funeral.
The older banner featuring the plough is still occasionally used today by the Irish Republican Socialist Party, Sinn Féin, the Workers' Party (formerly known as Official Sinn Féin), and many other socialist republican parties.
While similar to the flag of the US state of Alaska, it predates the latter by more than a decade.
In Northern Ireland, the flag is often burned by loyalist activists in protest.
See also
The Plough and the Stars, the play by Seán O'Casey (1926)
Flag of Alaska (a similar design)
Plough flag (a similar design)
Otava flag (a similar design)
Sunburst flag (another Irish nationalist flag)
References
Flags introduced in 1914
Flags of Ireland
Ursa Major
Easter Rising | Starry Plough (flag) | Astronomy | 798 |
41,125,081 | https://en.wikipedia.org/wiki/Connective%20spectrum | In algebraic topology, a branch of mathematics, a connective spectrum is a spectrum whose homotopy groups vanish in negative degrees.
References
External links
Why are connective spectra called “connective”?
Algebraic topology
Spectra (topology) | Connective spectrum | Mathematics | 49 |
61,613,630 | https://en.wikipedia.org/wiki/Deterministic%20Networking | Deterministic Networking (DetNet) is an effort by the IETF DetNet Working Group to study implementation of deterministic data paths for real-time applications with extremely low data loss rates, packet delay variation (jitter), and bounded latency, such as audio and video streaming, industrial automation, and vehicle control.
DetNet operates at Layer 3, over routed IP segments, using a software-defined networking layer to provide IntServ and DiffServ integration, and delivers service over lower, Layer 2 bridged segments using technologies such as MPLS and IEEE 802.1 Time-Sensitive Networking. Deterministic Networking aims to migrate time-critical, high-reliability industrial control and audio-video applications from special-purpose fieldbus networks (HDMI, CAN bus, PROFIBUS, RS-485, RS-422/RS-232, and I²C) to packet networks, and to IP in particular. DetNet will support both these new applications and existing IT applications on the same physical network.
To support real-time applications, DetNet implements reservation of data-plane resources in intermediate nodes along the data flow path, calculation of explicit routes that do not depend on network topology, and redistribution of data packets over time and/or space to deliver data even with the loss of one path.
Rationale
Standard IT infrastructure cannot efficiently handle latency-sensitive data. Switches and routers use fundamentally non-deterministic algorithms for processing packets/frames, which may result in sporadic data flow.
A common solution for smoothing out these flows is to increase buffer sizes, but this has a negative effect on delivery latency because data has to fill the buffers before transmission to the next switch or router can start.
The IEEE Time-Sensitive Networking (TSN) task group has defined deterministic algorithms for queuing, shaping and scheduling which allow each node to allocate bandwidth and latency according to the requirements of each data flow, by computing the buffer size at the network switch. The same algorithms can be employed at higher network layers to improve delivery of IP packets and provide interoperability with TSN hardware when available.
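As a rough illustration of what such a shaping algorithm does, below is a much-simplified credit-based shaper in the spirit of IEEE 802.1Qav (our simplification, not the standard's exact state machine: real CBS also accrues credit while other queues transmit and resets positive credit when the queue empties). Credit accrues at idle_slope while a frame waits, is spent at send_slope while transmitting, and a frame may start only when credit is non-negative:

```python
def credit_based_shaper(frames, idle_slope, send_slope, link_rate):
    """Simplified 802.1Qav-style shaper.
    frames: list of (arrival_time_s, size_bits), sorted by arrival time.
    Returns the transmission start time chosen for each frame."""
    credit, t, starts = 0.0, 0.0, []
    for arrival, size in frames:
        t = max(t, arrival)
        if credit < 0:                    # wait for credit to recover to zero
            t += -credit / idle_slope
            credit = 0.0
        starts.append(t)
        tx_time = size / link_rate        # time the frame occupies the link
        credit += send_slope * tx_time    # send_slope < 0: credit is spent
        t += tx_time
    return starts

# Three back-to-back 1500-byte (12000-bit) frames on a 100 Mbit/s link with a
# 10 Mbit/s reservation; starts come out 1.2 ms apart, i.e. 10 Mbit/s pacing.
frames = [(0.0, 12000), (0.0, 12000), (0.0, 12000)]
print(credit_based_shaper(frames, idle_slope=10e6, send_slope=-90e6, link_rate=100e6))
```

In 802.1Qav the send slope is defined as the idle slope minus the port transmit rate, hence the −90 Mbit/s used here.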
Requirements
Applications from different fields often have fundamentally similar requirements, which may include:
Time synchronization at each node (router/bridge) across the entire network, with accuracy from nanoseconds to microseconds.
Deterministic data flow, which shall support:
unicast or multicast packets;
guaranteed minimum and maximum latency endpoint-to-endpoint across the entire network, with tight jitter when required;
Ethernet packet loss ratio from 10⁻⁹ to 10⁻¹², wireless mesh networks around 10⁻⁵;
high utilization of the available network bandwidth (no need for massive over-provisioning);
flow processing without throttling, congestion feedback, or other network-defined transmission delay;
a fixed transmission schedule, or a maximum bandwidth and packet size.
Scheduling, shaping, limiting, and controlling transmission at each node.
Protection against misbehaving nodes (in both the data and the control planes): a flow cannot affect other flows even under high load.
Reserving resources in nodes that carry the flow.
Operation
Resource allocation
To reduce contention-related packet loss, resources such as buffer space or link bandwidth can be assigned to the flow along the path from source to destination. Maintaining adequate buffer storage at each node also limits maximum end-to-end latency.
The maximum transmission rate and maximum packet size have to be explicitly defined for each flow.
Each network node along the path shall not exceed these data rates, as any packet sent outside its scheduled time requires additional buffering at the next node, which may exceed that node's allocated resources.
To limit data rates, traffic policing and shaping functions are applied at the ingress ports. This also protects regular IT traffic from misbehaving DetNet sources.
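A minimal token-bucket policer of the kind that could enforce such an ingress limit (an illustrative sketch only; actual per-stream DetNet/TSN policing, e.g. IEEE 802.1Qci, is more elaborate):

```python
class TokenBucketPolicer:
    """Admit a packet only if enough tokens (bytes) have accrued; else drop."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate, self.burst = rate_bytes_per_s, burst_bytes
        self.tokens, self.last = burst_bytes, 0.0

    def admit(self, now, size_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True           # conforms to the reservation: forward
        return False              # exceeds the reservation: police (drop)

p = TokenBucketPolicer(rate_bytes_per_s=125_000, burst_bytes=3_000)  # 1 Mbit/s
print([p.admit(t, 1_500) for t in (0.0, 0.001, 0.002, 0.015)])
# expected: [True, True, False, True] -- the third packet exceeds the burst
```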
Time-of-execution fields in the packets and sub-microsecond time synchronization across all nodes are used to ensure minimum end-to-end latency and eliminate irregular delivery (jitter). Jitter reduces the perceived quality of audiovisual applications, and control network applications built around serial communication protocols cannot handle jitter at all.
Service protection
Packet loss can also result from media errors and equipment failures. Packet replication and elimination and packet encoding provide service protection from these failures.
Replication and elimination work by spreading the data across several explicit paths and reassembling it in order near the destination. A sequence number or timestamp is added to the DetNet flow or transport protocol packet; duplicate packets are then eliminated and out-of-order packets reordered, based on this sequencing information and transmission logs.
Adhering to the flow latency constraints also imposes constraints on misordering, as out-of-order packets impact the jitter and require additional buffering.
Different path lengths also require additional buffering to equalize the delays and ensure bandwidth constraints after failure recovery.
Replication and elimination may be used by multiple DetNet nodes to improve protection against multiple failures.
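The elimination half of the mechanism reduces, in essence, to discarding any sequence number already seen; a stripped-down sketch (the field names and window policy are ours, not the DetNet encapsulation):

```python
def eliminate_duplicates(packets, history_size=64):
    """packets: iterable of (seq_num, payload) pairs arriving merged from
    several replicated paths, possibly duplicated and out of order.
    Yields each packet once, remembering recent sequence numbers."""
    seen = set()
    for seq, payload in packets:
        if seq in seen:
            continue                    # duplicate from another path: drop
        seen.add(seq)
        if len(seen) > history_size:    # bound memory with a crude window
            seen.remove(min(seen))
        yield seq, payload

# Two paths deliver the same flow; the slower path re-delivers packets 1 and 2:
merged = [(1, "a"), (2, "b"), (1, "a"), (3, "c"), (2, "b")]
print(list(eliminate_duplicates(merged)))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```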
Packet encoding uses multiple transmission units for each packet, adding redundancy and error correction information from multiple packets to each transmission unit.
Explicit routes
In mesh networks, topology events such as failure or recovery can impact data flow even in remote network segments. A side effect of route changes is out-of-order packet delivery.
Real-time networks are often based on physical rings with a simple control protocol and two ports per device for redundant paths, though at a cost of increased hop count and latency.
DetNet routes are typically explicitly defined and do not change (at least immediately) in response to network topology events, so there are no interruptions from routing or bridging protocol negotiations.
Explicit routes can be established with RSVP-TE, Segment Routing, IS-IS, MPLS-TE label-switched path (LSP), or a software-defined networking layer.
Traffic engineering
The IETF Traffic Engineering Architecture and Signaling (TEAS) working group maintains the MPLS-TE LSP and RSVP-TE protocols. These Traffic Engineering (TE) routing protocols translate DetNet flow specifications into IEEE 802.1 TSN controls for queuing, shaping, and scheduling algorithms, such as the IEEE 802.1Qav credit-based shaper, the IEEE 802.1Qbv time-triggered shaper with a rotating time scheduler, IEEE 802.1Qch synchronized double and triple buffering, 802.1Qbu/802.3br Ethernet packet pre-emption, and 802.1CB frame replication and elimination for reliability. Protocol interworking defined by IEEE 802.1CB is used to advertise TSN sub-network capabilities to DetNet flows via the Active Destination MAC and VLAN Stream identification functions. DetNet flows are matched by destination MAC address, VLAN ID and priority parameters to the Stream ID and QoS requirements for talkers and listeners in the AVB/TSN sub-network.
Use cases
IETF foresees the following use cases:
pro audio and video (Audio Video Bridging);
electrical generation and distribution;
building automation systems (BAS);
wireless industrial mesh networks;
cellular radio (fronthaul/backhaul);
industrial machine to machine (M2M) networks;
mining industry (remote vehicle control);
private blockchain;
network slicing.
See also
Audio over Ethernet
Audio over IP
Internet standards
References
External links
Deterministic Networking (detnet) Working Group
Internet Standards
Industrial Ethernet
Control engineering
Audio engineering
Automotive electronics
Network protocols | Deterministic Networking | Engineering | 1,526 |
23,048,616 | https://en.wikipedia.org/wiki/Handheld%20Isothermal%20Silver%20Standard%20Sensor | The Handheld Isothermal Silver Standard Sensor (HISSS) project was sponsored by DARPA in the 2000s to develop a hand-held sensor capable of identifying biological weapon threats across the entire spectrum, including bacteria, viruses and toxins. The program began in the early part of the 21st century with the following goals:
DNA detection without polymerase chain reaction (PCR)
RNA detection without PCR or reverse transcription
antibody-based protein detection at sensitivities unachievable by traditional methods
The final goal was to give field units the ability to detect threat agents across the complete spectrum of biological warfare weapons.
The main contractor for this project was Northrop Grumman with subcontractors Ionian Technologies and Ribomed.
References
External links
HISSS Project
Biological warfare | Handheld Isothermal Silver Standard Sensor | Biology | 156 |
15,236,129 | https://en.wikipedia.org/wiki/NPHP3 | Nephrocystin-3 is a protein that in humans is encoded by the NPHP3 gene.
This gene encodes a protein containing a coiled-coil (CC) domain, a tubulin-tyrosine ligase (TTL) domain, and a tetratricopeptide repeat (TPR) domain. The encoded protein interacts with nephrocystin and may function in renal tubular development and function. Mutations in this gene are associated with nephronophthisis type 3. Multiple splice variants have been described but their full-length nature has not been determined.
An association with renal-hepatic-pancreatic dysplasia has been described.
References
Further reading | NPHP3 | Chemistry | 150 |
7,219,097 | https://en.wikipedia.org/wiki/Synechococcus | Synechococcus (from the Greek synechos, in succession, and the Greek kokkos, granule) is a unicellular cyanobacterium that is very widespread in the marine environment. Its size varies from 0.8 to 1.5 μm. The photosynthetic coccoid cells are preferentially found in well-lit surface waters, where they can be very abundant (generally 1,000 to 200,000 cells per ml). Many freshwater species of Synechococcus have also been described.
The genome of S. elongatus strain PCC7002 has a size of 3.4 Mbp, whereas the oceanic strain WH8102 has a genome of size 2.4 Mbp.
Introduction
Synechococcus is one of the most important components of the prokaryotic autotrophic picoplankton in the temperate to tropical oceans. The genus was first described in 1979, and was originally defined to include "small unicellular cyanobacteria with ovoid to cylindrical cells that reproduce by binary transverse fission in a single plane and lack sheaths". This definition of the genus Synechococcus contained organisms of considerable genetic diversity and was later subdivided into subgroups based on the presence of the accessory pigment phycoerythrin. The marine forms of Synechococcus are coccoid cells between 0.6 and 1.6 μm in size. They are Gram-negative cells with highly structured cell walls that may contain projections on their surface. Electron microscopy frequently reveals the presence of phosphate inclusions, glycogen granules, and more importantly, highly structured carboxysomes.
Cells are known to be motile by a gliding method and a novel uncharacterized, nonphototactic swimming method that does not involve flagellar motion. While some cyanobacteria are capable of photoheterotrophic or even chemoheterotrophic growth, all marine Synechococcus strains appear to be obligate photoautotrophs that are capable of supporting their nitrogen requirements using nitrate, ammonia, or in some cases urea as a sole nitrogen source. Marine Synechococcus species are traditionally not thought to fix nitrogen.
In the last decade, several strains of Synechococcus elongatus have been produced in laboratory environments, including the fastest-growing cyanobacterium described to date, Synechococcus elongatus UTEX 2973. S. elongatus UTEX 2973 is a mutant hybrid derived from UTEX 625 and is most closely related to S. elongatus PCC 7942, with 99.8% similarity. It has the shortest doubling time reported, at “1.9 hours in a BG11 medium at 41°C under continuous 500 μmoles photons·m⁻²·s⁻¹ white light with 3% CO₂”.
Pigments
The main photosynthetic pigment in Synechococcus is chlorophyll a, while its major accessory pigments are phycobiliproteins. The four commonly recognized phycobilins are phycocyanin, allophycocyanin, allophycocyanin B and phycoerythrin. In addition, Synechococcus also contains zeaxanthin, but no diagnostic pigment for this organism is known. Zeaxanthin is also found in Prochlorococcus, red algae and as a minor pigment in some chlorophytes and eustigmatophytes. Similarly, phycoerythrin is also found in rhodophytes and some cryptomonads.
Phylogeny
Phylogenetic description of Synechococcus is difficult. Isolates are morphologically very similar, yet exhibit a G+C content ranging from 39 to 71%, illustrating the large genetic diversity of this provisional taxon. Initially, attempts were made to divide the group into three subclusters, each with a specific range of genomic G+C content. The observation that open-ocean isolates alone nearly span the complete G+C spectrum, however, indicates that Synechococcus is composed of at least several species. Bergey's Manual (Herdman et al. 2001) now divides Synechococcus into five clusters (equivalent to genera) based on morphology, physiology, and genetic traits.
Cluster 1 includes relatively large (1–1.5 μm) nonmotile obligate photoautotrophs that exhibit low salt tolerance. Reference strains for this cluster are PCC6301 (formerly Anacystis nidulans) and PCC6312, which were isolated from fresh water in Texas and California, respectively. Cluster 2 also is characterized by low salt tolerance. Cells are obligate photoautotrophs, lack phycoerythrin, and are thermophilic. The reference strain PCC6715 was isolated from a hot spring in Yellowstone National Park. Cluster 3 includes phycoerythrin-lacking marine Synechococcus species that are euryhaline, i.e. capable of growth in both marine and freshwater environments. Several strains, including the reference strain PCC7003, are facultative heterotrophs and require vitamin B12 for growth. Cluster 4 contains a single isolate, PCC7335. This strain is obligately marine. It contains phycoerythrin and was first isolated from the intertidal zone in Puerto Peñasco, Mexico. The last cluster contains what had previously been referred to as the ‘marine A and B clusters’ of Synechococcus. These cells are truly marine and have been isolated from both the coastal and the open ocean. All strains are obligate photoautotrophs and are around 0.6–1.7 μm in diameter. This cluster is, however, further divided into a population that either contains (cluster 5.1) or does not contain (cluster 5.2) phycoerythrin. The reference strains are WH8103 for the phycoerythrin-containing strains and WH5701 for those strains that lack this pigment.
More recently, Badger et al. (2002) proposed the division of the cyanobacteria into an α- and a β-subcluster based on the type of rbcL (large subunit of ribulose 1,5-bisphosphate carboxylase/oxygenase) found in these organisms. α-cyanobacteria were defined to contain a form IA, while β-cyanobacteria were defined to contain a form IB of this gene. In support of this division, Badger et al. analyzed the phylogeny of carboxysomal proteins, which appears to be consistent with it. Also, two particular bicarbonate transport systems appear to be found only in α-cyanobacteria, which lack carboxysomal carbonic anhydrases.
The complete phylogenetic tree of 16S rRNA sequences of Synechococcus revealed at least 12 groups, which morphologically correspond to Synechococcus but are not derived from a single common ancestor. Moreover, it has been estimated, based on molecular dating, that the first Synechococcus lineage appeared 3 billion years ago in thermal springs, with subsequent radiation to marine and freshwater environments.
As of 2020, the morphologically similar "Synechococcus collective" has been split into 15 genera under 5 different orders:
Synechococcales (Cyanobium, Inmanicoccus, Lacustricoccus gen. nov., Parasynechococcus, Pseudosynechococcus, Regnicoccus, Synechospongium gen. nov., Synechococcus and Vulcanococcus);
Cyanobacteriales (Limnothrix);
Leptococcales (Brevicoccus and Leptococcus);
Thermosynechococcales (Stenotopis and Thermosynechococcus); and
Neosynechococcales (Neosynechococcus).
(gen. nov. means that the genus is newly created in 2020).
Ecology and distribution
Synechococcus has been observed to occur at concentrations ranging from a few cells to 10⁶ cells per ml in virtually all regions of the oceanic euphotic zone, except in samples from McMurdo Sound and the Ross Ice Shelf in Antarctica. Cells are generally much more abundant in nutrient-rich environments than in the oligotrophic ocean and prefer the upper, well-lit portion of the euphotic zone. Synechococcus has also been observed to occur at high abundance in environments with low salinities and/or low temperatures. It is usually far outnumbered by Prochlorococcus in all environments where they co-occur. Exceptions to this rule are areas of permanently enriched nutrients such as upwelling areas and coastal watersheds. In the nutrient-depleted areas of the oceans, such as the central gyres, Synechococcus is apparently always present, although only at low concentrations, ranging from a few to 4×10³ cells per ml. Vertically, Synechococcus is usually relatively evenly distributed throughout the mixed layer and exhibits an affinity for the higher-light areas. Below the mixed layer, cell concentrations rapidly decline. Vertical profiles are strongly influenced by hydrologic conditions and can be very variable both seasonally and spatially. Overall, Synechococcus abundance often parallels that of Prochlorococcus in the water column. In the Pacific high-nutrient, low-chlorophyll zone and in temperate open seas where stratification has recently been established, both profiles parallel each other and exhibit abundance maxima just above the subsurface chlorophyll maximum.
The factors controlling the abundance of Synechococcus still remain poorly understood, especially considering that even in the most nutrient-depleted regions of the central gyres, where cell abundances are often very low, population growth rates are often high and not drastically limited. Factors such as grazing, viral mortality, genetic variability, light adaptation, and temperature, as well as nutrients are certainly involved, but remain to be investigated on a rigorous and global scale. Despite the uncertainties, a relationship probably exists between ambient nitrogen concentrations and Synechococcus abundance, with an inverse relationship to Prochlorococcus in the upper euphotic zone, where light is not limiting. One environment where Synechococcus thrives particularly well is coastal plumes of major rivers. Such plumes are coastally enriched with nutrients such as nitrate and phosphate, which drives large phytoplankton blooms. High productivity in coastal river plumes is often associated with large populations of Synechococcus and elevated form IA (cyanobacterial) rbcL mRNA.
Prochlorococcus is thought to be at least 100 times more abundant than Synechococcus in warm oligotrophic waters. Assuming average cellular carbon concentrations, it has thus been estimated that Prochlorococcus accounts for at least 22 times more carbon in these waters, and may thus be of much greater significance to the global carbon cycle than Synechococcus.
Evolutionary history
Free-floating viruses have been found carrying photosynthetic genes, and Synechococcus samples have been found to have viral proteins associated with photosynthesis. It is estimated that 10% of all photosynthesis on Earth is carried out with viral genes. Not all viruses immediately kill their hosts; 'temperate' viruses co-exist with their host until stress or the approaching end of the natural life span makes them switch the host to virus production. If a mutation occurs that stops this final step, the host can carry the viral genes with no ill effects, and if a healthy host reproduces while infectious, its offspring can be infectious as well. It is likely that such a process gave Synechococcus these photosynthesis genes.
DNA recombination, repair and replication
Marine Synechococcus species possess a set of genes that function in DNA recombination, repair and replication. This set of genes includes the recBCD gene complex whose product, exonuclease V, functions in recombinational repair of DNA, and the umuCD gene complex whose product, DNA polymerase V, functions in error-prone DNA replication. Some Synechococcus strains are naturally competent for genetic transformation, and thus can take up extracellular DNA and recombine it into their own genome. Synechococcus strains also encode the gene lexA that regulates an SOS response system, that is likely similar to the well-studied E. coli SOS system that is employed in the response to DNA damage.
Species
Synechococcus ambiguus Skuja
Synechococcus arcuatus var. calcicolus Fjerdingstad
Synechococcus bigranulatus Skuja
Synechococcus brunneolus Rabenhorst
Synechococcus caldarius Okada
Synechococcus capitatus A. E. Bailey-Watts & J. Komárek
Synechococcus carcerarius Norris
Synechococcus elongatus (Nägeli) Nägeli
Synechococcus endogloeicus F. Hindák
Synechococcus epigloeicus F. Hindák
Synechococcus ferrunginosus Wawrik
Synechococcus intermedius Gardner
Synechococcus koidzumii Yoneda
Synechococcus lividus Copeland
Synechococcus marinus Jao
Synechococcus minutissimus Negoro
Synechococcus mundulus Skuja
Synechococcus nidulans (Pringsheim) Komárek
Synechococcus rayssae Dor
Synechococcus rhodobaktron Komárek & Anagnostidis
Synechococcus roseo-persicinus Grunow
Synechococcus roseo-purpureus G. S. West
Synechococcus salinarum Komárek
Synechococcus salinus Frémy
Synechococcus sciophilus Skuja
Synechococcus sigmoideus (Moore & Carter) Komárek
Synechococcus spongiarum Usher et al.
Synechococcus subsalsus Skuja
Synechococcus sulphuricus Dor
Synechococcus vantieghemii (Pringsheim) Bourrelly
Synechococcus violaceus Grunow
Synechococcus viridissimus Copeland
Synechococcus vulcanus Copeland
See also
Gloeomargarita lithophora
Photosynthetic picoplankton
Prochlorococcus
Synechocystis, another cyanobacterial model organism
References
Further reading
External links
Cyanobacteria genera
Synechococcales
Marine microorganisms | Synechococcus | Biology | 3,082 |
5,574,966 | https://en.wikipedia.org/wiki/Split-cycle%20engine | The split-cycle engine is a type of internal combustion engine.
Design
In a conventional Otto cycle engine, each cylinder performs four strokes per cycle: intake, compression, power, and exhaust. This means that two revolutions of the crankshaft are required for each power stroke. The split-cycle engine divides these four strokes between two paired cylinders: one for intake and compression, and another for power and exhaust. Compressed air is transferred from the compression cylinder to the power cylinder through a crossover passage. Fuel is then injected and fired to produce the power stroke.
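To make the division of labour concrete, the toy schedule below (our illustration, not any particular engine's timing) shows what each cylinder of a pair does on successive crankshaft revolutions; note that the power cylinder fires every revolution, whereas a conventional Otto-cycle cylinder fires every second revolution:

```python
# Each piston makes one down-stroke and one up-stroke per crank revolution,
# so the pair completes all four strokes of the cycle every revolution.
def split_cycle_schedule(revolutions):
    for rev in range(1, revolutions + 1):
        yield rev, ("intake", "compression"), ("power", "exhaust")

for rev, compression_cyl, power_cyl in split_cycle_schedule(3):
    print(f"rev {rev}: compression cylinder: {compression_cyl}, "
          f"power cylinder: {power_cyl}")
```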
History
The Backus Water Motor Company of Newark, New Jersey was producing an early example of a split-cycle engine as far back as 1891. The engine, of "a modified A form, with the crank-shaft at the top", was water-cooled, consisted of one working cylinder and one compressing cylinder of equal size, and utilized a hot-tube ignitor system. It was produced in sizes from 1/2 horsepower upward, and the company had plans to offer a scaled-up version of considerably greater power.
The Atkinson differential engine was a two-piston, single-cylinder four-stroke engine that also used a displacer piston to provide the fuel-air mixture for use by the power piston. However, the power piston did the compression.
The twingle engine (U.S. English) or split-single engine (British English) is a twin-cylinder (or more) two-stroke engine; more precisely, it has one or more U-tube cylinders that each use a pair of pistons, one in each arm of the U. However, both pistons in each pair are used for power (and the underside of both supplies the fuel-air mixture, if crankcase scavenging is used); they differ only in that one piston works the transfer port, providing the fuel-air mixture for use in both arms, while the other works the exhaust port, so that the burnt mixture is exhausted via that arm. Unlike in the Scuderi engine, both cylinders are connected to the combustion chamber. As neither piston works as a displacer piston at all, this engine has nothing whatsoever to do with the split-cycle engine apart from a purely coincidental similarity of the names.
The Scuderi engine is a design of a split-cycle, internal combustion engine invented by Carmelo J. Scuderi. The Scuderi Group, an engineering and licensing company based in West Springfield, Massachusetts and founded by Carmelo Scuderi’s children, said that the prototype was completed and was unveiled to the public on April 20, 2009.
The Tour Engine is an opposed-cylinder split-cycle internal combustion engine that uses a novel Spool Shuttle Crossover Valve (SSCV) to transfer the fuel/air charge from the cold to the hot cylinder. The first prototype was completed in June 2008. Tour Engine was funded by grants from the Israel Ministry of National Infrastructures, Energy and Water Resources, and by ARPA-E.
Another split-cycle design, using an external combustion chamber, is the Zajac engine.
New Zealand scam - Rick Mayne's Split Cell engine
In 2009 investigative journalist Gerard Ryle reported a scam by New Zealander Rick Mayne that lost investors hundreds of millions of NZ dollars. Mayne claimed success with a split-cycle engine that used a multitude of small cylinders in a radial arrangement, with pistons operated by a Geneva mechanism. The engine was never successfully run in a meaningful demonstration, but significant capital was raised through a share plan from unsuspecting investors, and lost.
Ryle reported on the Rick Mayne scam, along with other scams involving fuel saving, in his book Firepower and on ABC radio in 2009.
Even the British newspaper The Independent was taken in by the scam, as was Australian racing driver Jack Brabham.
References
Engine technology
Piston engines | Split-cycle engine | Technology | 777 |
24,547,311 | https://en.wikipedia.org/wiki/Covenant%20of%20Mayors | The Covenant of Mayors is a European co-operation movement involving local and regional authorities. Signatories of the Covenant of Mayors voluntarily commit to increasing energy efficiency and the use of renewable energy sources on their territories. Through this commitment, they support the European Union's 20% emissions-reduction objective to be reached by 2020.
After the European Union climate and energy package was adopted in 2008, the European Commission launched the Covenant of Mayors to endorse and support the efforts deployed by local authorities in the implementation of sustainable energy policies.
Covenant of Mayors Signatories
European local authorities of all sizes – from small villages to capitals and major metropolitan areas – are eligible to sign up as Covenant of Mayors Signatories.
Cities, towns and other urban areas have a crucial role to play in mitigating climate change, as they consume three quarters of the energy produced in the European Union and are responsible for a similar share of emissions. Local authorities are also in a position to change citizens' behaviour and address climate and energy questions in a comprehensive manner, notably by reconciling public and private interests and by integrating sustainable energy issues into overall local development goals.
Formal undertakings
To meet the reduction targets they set themselves, signatories commit to a series of steps and accept to report and be monitored on their actions. Within predefined time frames, they formally undertake to fulfil the following:
Develop adequate administrative structures, including allocation of sufficient human resources, in order to undertake the necessary actions;
Prepare a Baseline Emission Inventory;
Submit a Sustainable Energy Action Plan within the year following the official adhesion to the Covenant of Mayors initiative, including concrete measures leading to at least a 20% reduction of emissions by 2020;
Submit an implementation report at least every second year after submission of their Sustainable Energy Action Plan, for evaluation, monitoring and verification purposes.
To comply with the crucial necessity of mobilising local stakeholders in the development of the Sustainable Energy Action Plans, signatories also undertake to:
Share experience and know-how with other local authorities;
Organise Local Energy Days to raise citizens' awareness of sustainable development and energy efficiency;
Attend or contribute to the Covenant of Mayors annual ceremony, thematic workshops and discussion group meetings;
Spread the message of the Covenant in the appropriate fora and, in particular, encourage other mayors to join the Covenant.
Sustainable Energy Action Plans
To reach and try to exceed the European Union energy and climate objectives, Covenant of Mayors signatories commit to develop a Sustainable Energy Action Plan (SEAP), within a year following their adhesion to the initiative. This action plan, approved by the municipal council, outlines the activities and measures foreseen by signatories to fulfil their commitments, with corresponding time frames and assigned responsibilities.
Various technical and methodological supporting materials (including the "SEAP Guidebook" and template, reports on existing methodologies and tools, etc.) offer practical guidance and clear recommendations on the whole SEAP development process. Based on the practical experiences of local authorities and developed in close co-operation with the European Commission Joint Research Centre, this support package provides Covenant signatories with key principles and a clear step-by-step approach. All documents are downloadable from the library of the www.eumayors.eu website.
Coordination and support
Covenant Coordinators and Supporters
Covenant Signatories do not always possess the adequate tools and resources to prepare a Baseline Emission Inventory, draft the related Sustainable Energy Action Plan and finance the actions featured in the latter. In light of this, provinces, regions, networks and groupings of municipalities have a crucial role to play in helping signatories honour their commitments.
Covenant Coordinators are public authorities from different government levels (national, regional, provincial) which provide strategic guidance to signatories, as well as financial and technical support in the development and implementation of their Sustainable Energy Action Plans. The Commission distinguishes between "Territorial Coordinators", which are sub-national decentralised authorities – including provinces, regions and public groupings of municipalities, and "National Coordinators", which include national public bodies – such as national energy agencies and ministries of energy.
Covenant Supporters are European, national and regional networks and associations of local authorities which leverage their lobbying, communication and networking activities to promote the Covenant of Mayors initiative and support the commitments of its signatories.
Covenant of Mayors Office
Promotional, technical and administrative assistance is provided on a daily basis to Covenant signatories and stakeholders by the Covenant of Mayors Office (CoMO), managed by a consortium of local and regional authorities' networks, led by Energy Cities and composed of CEMR, Climate Alliance, Eurocities and FEDARENE. Funded by the European Commission, the CoMO is responsible for the overall co-ordination of the initiative.
Institutions of the European Union
To support the elaboration and implementation of the signatories' Sustainable Energy Action Plans, the European Commission has contributed to the development of financial facilities particularly targeting Covenant of Mayors signatories, among them the European Local Energy Assistance (ELENA) facility, set up in co-operation with the European Investment Bank for large-scale projects, and ELENA-KfW, established in partnership with the German banking group KfW, which offers a complementary approach to mobilise sustainable investments of small and medium-sized municipalities.
Alongside the European Commission, the Covenant benefits from full institutional support, including from the Committee of the Regions, which has supported the initiative since its inception; the European Parliament, where the two first signing ceremonies were held; and the European Investment Bank, which assists local authorities in unlocking their investment potentials.
Joint Research Centre
The Joint Research Centre of the European Commission is responsible for providing technical and scientific support to the initiative. It works in close collaboration with the Covenant of Mayors Office to equip signatories with clear technical guidelines and templates to assist delivery of their Covenant of Mayors commitments, as well as to monitor implementation and results.
See also
C40
CIVITAS (European Union)
Climate Action
Climate change
Directorate-General for Energy (European Commission)
Eltis
Energy conservation
Energy policy
European Union
Framework Programmes for Research and Technological Development
Intelligent Energy Europe
Interreg
Joint Research Centre
ManagEnergy
Renewable energy
Sustainable energy
References
External links
Covenant of Mayors official Website
Directorate-General for Energy
Joint Research Centre
Energy Cities
Climate Alliance
Council of European Municipalities and Regions
Eurocities
Fedarene
Urban planning
International climate change organizations
European Union
Municipal international relations | Covenant of Mayors | Engineering | 1,287 |
2,819 | https://en.wikipedia.org/wiki/Aerodynamics | Aerodynamics (from Ancient Greek ἀήρ (aero), air, and δυναμική (dynamics)) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
History
Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes.
In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes.
In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903.
During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers.
As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, who was one of the first to investigate the properties of supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1, where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft.
By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations.
Fundamental concepts
Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an additional property, viscosity, are used to classify flow fields.
Flow classification
Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow.
Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results.
Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine).
Continuum assumption
Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules between themselves and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow.
The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics.
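The choice the Knudsen number guides can be put into a back-of-the-envelope check (a sketch with illustrative numbers; ~68 nm is a commonly quoted sea-level mean free path, and the Kn < 0.01 cut-off is one conventional choice):

```python
def knudsen(mean_free_path_m, length_scale_m):
    """Kn = lambda / L; continuum aerodynamics is usually considered
    valid when Kn is small (commonly Kn < 0.01)."""
    return mean_free_path_m / length_scale_m

# Airliner-scale body at sea level vs. the same body in very rarefied air:
print(knudsen(68e-9, 5.0))   # ~1.4e-8 -> continuum assumption holds
print(knudsen(100.0, 5.0))   # ~20     -> use statistical mechanics instead
```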
Conservation laws
The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used:
Conservation of mass: mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation.
Conservation of momentum: the mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x, y, z components).
Conservation of energy: energy is neither created nor destroyed within a flow, and any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest.
Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known general analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high-speed computers were not historically available, and because of the high computational cost of solving these complex equations now that they are available, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations.
The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables.
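As a small worked example of using an equation of state together with Bernoulli's equation (our sketch, with standard sea-level values):

```python
R_AIR = 287.05  # J/(kg*K), specific gas constant of dry air

def air_density(pressure_pa, temperature_k):
    """Ideal gas law: rho = p / (R T)."""
    return pressure_pa / (R_AIR * temperature_k)

def bernoulli_pressure_drop(rho, v1, v2):
    """Incompressible Bernoulli along a level streamline:
    p1 + rho*v1**2/2 = p2 + rho*v2**2/2, solved for p1 - p2."""
    return 0.5 * rho * (v2**2 - v1**2)

rho = air_density(101_325, 288.15)               # ~1.225 kg/m^3 at sea level
print(bernoulli_pressure_drop(rho, 50.0, 70.0))  # ~1470 Pa where flow speeds up
```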
Branches of aerodynamics
Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe.
Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic.
The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows.
Incompressible aerodynamics
An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether incompressibility can be assumed; otherwise the effects of compressibility must be included.
Subsonic flow
Subsonic (or low-speed) aerodynamics describes fluid motion in flows in which the flow speed is much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions.
In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics.
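The Mach 0.3 rule of thumb in this paragraph is easy to codify (a sketch; γ = 1.4 and R = 287.05 J/(kg·K) are the standard values for air):

```python
from math import sqrt

GAMMA, R_AIR = 1.4, 287.05

def mach_number(speed_m_s, temperature_k):
    """M = v / a, with the speed of sound a = sqrt(gamma * R * T)."""
    return speed_m_s / sqrt(GAMMA * R_AIR * temperature_k)

def needs_compressible_model(speed_m_s, temperature_k, threshold=0.3):
    """Apply the Mach 0.3 rule of thumb from the text."""
    return mach_number(speed_m_s, temperature_k) > threshold

# 102 m/s at 60 F (288.7 K) sits right at the quoted threshold:
print(round(mach_number(102.0, 288.7), 3))     # ~0.3
print(needs_compressible_model(102.0, 288.7))  # False (just under 0.3)
```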
Compressible aerodynamics
According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows.
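The "less than 5%" figure can be reproduced from the isentropic stagnation-density relation (a quick check we add here, with γ = 1.4 for air):

$$\frac{\rho_0}{\rho} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{1/(\gamma - 1)}$$

```python
def stagnation_density_ratio(mach, gamma=1.4):
    """Isentropic flow: rho_0 / rho = (1 + (gamma-1)/2 * M**2) ** (1/(gamma-1))."""
    return (1 + (gamma - 1) / 2 * mach**2) ** (1 / (gamma - 1))

for m in (0.1, 0.3, 0.8):
    print(f"M = {m}: stagnation density change ~{stagnation_density_ratio(m) - 1:.1%}")
# M = 0.3 gives ~4.6%, the maximum density change quoted above.
```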
Transonic flow
The term transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some is not.
Supersonic flow
Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem.
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object, it strikes it and is forced to change its properties – temperature, density, pressure, and Mach number – in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-velocity flows (see Mach number), is the central difference between the supersonic and subsonic aerodynamics regimes.
Hypersonic flow
In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas.
Associated terminology
The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence.
Boundary layers
The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air are approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically.
Turbulence
In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow.
Aerodynamics in other fields
Engineering design
Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines.
The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine.
Environmental design
Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems.
Aerodynamic equations are used in numerical weather prediction.
Ball-control in sports
Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect".
See also
Aeronautics
Aerostatics
Aviation
Insect flight – how bugs fly
List of aerospace engineering topics
List of engineering topics
Nose cone design
Fluid dynamics
Computational fluid dynamics
References
Further reading
General aerodynamics
Subsonic aerodynamics
Obert, Ed (2009). Aerodynamic Design of Transport Aircraft. Delft: IOS Press. About practical aerodynamics in industry and the effects on design of aircraft.
Transonic aerodynamics
Supersonic aerodynamics
Hypersonic aerodynamics
History of aerodynamics
Aerodynamics related to engineering
Ground vehicles
Fixed-wing aircraft
Helicopters
Missiles
Model aircraft
Related branches of aerodynamics
Aerothermodynamics
Aeroelasticity
Boundary layers
Turbulence
External links
NASA's Guide to Aerodynamics
Aerodynamics for Students
Aerodynamics for Pilots (archived)
Aerodynamics and Race Car Tuning (archived)
Aerodynamic Related Projects
eFluids Bicycle Aerodynamics
Application of Aerodynamics in Formula One (F1) (archived)
Aerodynamics in Car Racing
Aerodynamics of Birds
NASA Aerodynamics Index
Aerodynamics
Aerospace engineering
Energy in transport | Aerodynamics | Physics,Chemistry,Engineering | 4,401 |
70,449,007 | https://en.wikipedia.org/wiki/IEEE%20Computer%20Society%20Charles%20Babbage%20Award | In 1989, the International Parallel and Distributed Processing Symposium established the Charles Babbage Award to be given each year to a conference participant in recognition of exceptional contributions to the field. In almost all cases, the award is given to one of the invited keynote speakers at the conference. The selection was made by the steering committee chairs, upon recommendation from the Program Chair and General Chair who have been responsible for the technical program of the conference, including inviting the speakers. It is presented immediately following the selected speaker's presentation at the conference, and he or she is given a plaque that specifies the nature of their special contribution to the field that is being recognized by IPDPS.
In 2019, the management of the IEEE CS Babbage Award was transferred to the IEEE Computer Society's Awards Committee.
Past recipients:
1989 - Irving S. Reed
1990 - H.T. Kung
1991 - Harold S. Stone
1992 - David Kuck
1993 - K. Mani Chandy
1994 - Arvind
1995 - Richard Karp
1997 - Frances E. Allen
1998 - Jim Gray
1999 - K. Mani Chandy
2000 - Michael O. Rabin
2001 - Thomson Leighton
2002 - Steve Wallach
2003 - Michel Cosnard
2004 - Christos Papadimitriou
2005 - Yale N. Patt
2006 - Bill Dally
2007 - Mike Flynn
2008 - Joel Saltz
2009 - Wen-Mei Hwu
2010 - Burton Smith
2011 - Jack Dongarra
2012 - Chris Johnson
2013 - James Demmel
2014 - Peter Kogge
2015 - Alan Edelman
2017 - Mateo Valero. "For contributions to parallel computation through brilliant technical work, mentoring PhD students, and building an incredibly productive European research environment."
2019 - Ian Foster. "For his outstanding contributions in the areas of parallel computing languages, algorithms, and technologies for scalable distributed applications."
2020 - Yves Robert. "For contributions to parallel algorithms and scheduling techniques."
2021 - Guy Blelloch. "For contributions to parallel programming, parallel algorithms, and the interface between them."
2022 - Dhabaleswar K. (DK) Panda. "For contributions to high performance and scalable communication in parallel and high-end computing systems."
2023 - Keshav K Pingali. "For contributions to programmability of high-performance parallel computing on irregular algorithms and graph algorithms."
2024 - Franck Cappello. "For pioneering contributions and inspiring leadership in distributed computing, high-performance computing, resilience, and data reduction."
See also
List of computer science awards
List of awards named after people
References
External links
IEEE Computer Society Charles Babbage Award
Awards established in 1989
Computer science awards
IEEE society and council awards | IEEE Computer Society Charles Babbage Award | Technology | 548 |
9,525,603 | https://en.wikipedia.org/wiki/Breezeway | A breezeway is an architectural feature similar to a hallway that allows the passage of a breeze between structures to accommodate high winds, allow aeration, or provide aesthetic design variation.
Often, a breezeway is a simple roof connecting two structures (such as a house and a garage); sometimes, it can be much more like a tunnel with windows on either side. It may also refer to a hallway between two wings of a larger building that lacks heating and cooling but allows sheltered passage. Breezeways have been used to house restaurants as well.
One of the earliest breezeway designs to be architecturally designed and published was designed by Frank Lloyd Wright in 1900 for the B. Harley Bradley House in Kankakee, Illinois. However, breezeway features had come into use in vernacular architecture long before this, as for example with the dogtrot breezeway that originally connected the two elements of a double log cabin on the North American frontier.
A side-deck is the upper deck outboard of any structure such as a coachroof or doghouse, also called a breezeway.
See also
Carport
Pergola
Skyway
Transom (architecture)
References
External links
Residential breezeway image
Rooms | Breezeway | Engineering | 250 |
20,866,283 | https://en.wikipedia.org/wiki/SBC%20%28codec%29 | SBC, or low-complexity subband codec, is an audio subband codec specified by the Bluetooth Special Interest Group (SIG) for the Advanced Audio Distribution Profile (A2DP). SBC is a digital audio encoder and decoder used to transfer data to Bluetooth audio output devices like headphones or loudspeakers. It can also be used on the Internet. It was designed with Bluetooth bandwidth limitations and processing power in mind to obtain a reasonably good audio quality at medium bit rates with low computational complexity. As of A2DP version 1.3, the Low Complexity Subband Coding remains the default codec and its implementation is mandatory for devices supporting that profile, but vendors are free to add their own codecs to match their needs.
At CES 2020 the Bluetooth SIG announced LC3 as the successor of SBC. LC3 is used in the LE Audio protocol based on the Bluetooth 5.2 Core Specification.
Design
SBC supports mono and stereo streams, and certain sampling frequencies up to 48 kHz. The maximum bitrate required to be supported by decoders is 320 kbit/s for mono and 512 kbit/s for stereo streams. It uses 4 or 8 subbands and an adaptive bit allocation algorithm in combination with an adaptive block PCM quantizer. Frans de Bont based the SBC audio codec on his earlier work and, in parts, on the MPEG-1 Audio Layer II standard. In addition, SBC is based on the algorithms described in patent EP 0400755 B1. The patent owners wrote that they allow the free usage of SBC in Bluetooth applications with a goal of boosting the use of this technology.
Variants
Overview
Middle and High Quality
A2DP recommends encoders to support Middle Quality and High Quality presets as specified in the above table. As a result, most operating systems are using the High Quality profile as the default or even the only one supported encoding profile.
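The relation between the bitpool value and the resulting bit rate is fixed by the frame-length formula in the A2DP specification. A minimal Python sketch for the joint-stereo channel mode; the preset parameters in the example (44.1 kHz, 16 blocks, 8 subbands, bitpool 53 for High Quality) are the commonly cited values and are an assumption about the table above:

import math

def sbc_bitrate_joint_stereo(fs, blocks, subbands, bitpool):
    # Frame length in bytes for the joint-stereo channel mode
    # (per the A2DP specification): 4-byte header, scale factors,
    # join bits, and the bitpool-controlled audio payload.
    frame_len = (4 + (4 * subbands * 2) // 8
                 + math.ceil((subbands + blocks * bitpool) / 8))
    # One frame covers subbands * blocks audio samples at rate fs.
    return 8 * frame_len * fs / (subbands * blocks)

print(sbc_bitrate_joint_stereo(44100, 16, 8, 53) / 1000)   # ~328 kbit/s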
Higher quality variants
However, A2DP requires decoders to support higher quality streams, up to 512 kbit/s, and some experimental encoders use this headroom: for example, SBC XQ, used by LineageOS. With a higher bit rate, audio quality is comparable to aptX HD (529 kbit/s).
FastStream
While A2DP officially supports only one-way audio streams, CSR has found a way to send a voice-back stream opposite to the main stereo stream, making it possible to use A2DP in headsets with microphones. It was implemented in the FastStream codec, which is the SBC codec with set parameters and the voice-back stream added.
Implementations
The A2DP test specification (V1.0) contains a reference implementation of the encoder and decoder for the SBC codec. A Linux implementation is available at BlueZ - The Linux Bluetooth stack.
See also
Audio codec
aptX
Bluetooth profile
Adaptive differential pulse-code modulation
List of codecs
References
Audio codecs
Bluetooth | SBC (codec) | Technology | 631 |
13,037,086 | https://en.wikipedia.org/wiki/Trichlorophenylsilane | Trichlorophenylsilane is a compound with formula Si(C6H5)Cl3.
Like other organochlorosilanes, trichlorophenylsilane is a possible precursor to silicones. It hydrolyses in water to give HCl and phenylsilanetriol, with the latter condensing to a polymeric substance.
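Written out as balanced equations (reconstructed from the description above; the condensation step is shown schematically, with the polymer written as a phenylsilsesquioxane):

C6H5SiCl3 + 3 H2O → C6H5Si(OH)3 + 3 HCl

n C6H5Si(OH)3 → [C6H5SiO1.5]n + 1.5n H2O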
See also
Methyltrichlorosilane
Organochlorosilanes
Carbosilanes
Phenyl compounds | Trichlorophenylsilane | Chemistry | 105 |
30,381,186 | https://en.wikipedia.org/wiki/Khekadaengoside | Khekadaengoside is any one of several chemical compounds isolated from certain plants, notably Trichosanthes tricuspidata. They can be seen as derivatives of the triterpene hydrocarbon cucurbitane (), more specifically from cucurbitacins H and L.
They include:
Khekadaengoside A from T. tricuspidata
Khekadaengoside B from T. tricuspidata
Khekadaengoside D from the fruits of T. tricuspidata
Khekadaengoside K from the fruits of T. tricuspidata
References
Triterpene glycosides | Khekadaengoside | Chemistry | 139 |
63,165,747 | https://en.wikipedia.org/wiki/%28Cyclooctatetraene%29iron%20tricarbonyl | (Cyclooctatetraene)iron tricarbonyl is the organoiron compound with the formula (C8H8)Fe(CO)3. Like other examples of (diene)Fe(CO)3 complexes, it is an orange, diamagnetic solid. Although several isomers are possible, only the η4-C8H8 complex is observed. The complex is an example of a ring-whizzer, since, on the NMR time-scale, the Fe(CO)3 center migrates around the rim of the cyclooctatetraene ligand.
References
Carbonyl complexes
Organoiron compounds
Half sandwich compounds
Iron(0) compounds | (Cyclooctatetraene)iron tricarbonyl | Chemistry | 145 |
3,741,564 | https://en.wikipedia.org/wiki/Type%20%28model%20theory%29 | In model theory and related areas of mathematics, a type is an object that describes how a (real or possible) element or finite collection of elements in a mathematical structure might behave. More precisely, it is a set of first-order formulas in a language L with free variables x1, x2,..., xn that are true of a set of n-tuples of an L-structure 𝔐. Depending on the context, types can be complete or partial and they may use a fixed set of constants, A, from the structure 𝔐. The question of which types represent actual elements of 𝔐 leads to the ideas of saturated models and omitting types.
Formal definition
Consider a structure 𝔐 for a language L. Let M be the universe of the structure. For every A ⊆ M, let L(A) be the language obtained from L by adding a constant ca for every a ∈ A. In other words, L(A) = L ∪ {ca : a ∈ A}.
A 1-type (of 𝔐) over A is a set p(x) of formulas in L(A) with at most one free variable x (therefore 1-type) such that for every finite subset p0(x) ⊆ p(x) there is some b ∈ M, depending on p0(x), with 𝔐 ⊨ p0(b) (i.e. all formulas in p0(x) are true in 𝔐 when x is replaced by b).
Similarly an n-type (of 𝔐) over A is defined to be a set p(x1,...,xn) = p(x) of formulas in L(A), each having its free variables occurring only among the given n free variables x1,...,xn, such that for every finite subset p0(x) ⊆ p(x) there are some elements b1,...,bn ∈ M with 𝔐 ⊨ p0(b1,...,bn).
A complete type of 𝔐 over A is one that is maximal with respect to inclusion. Equivalently, for every formula φ(x) in L(A) either φ(x) ∈ p(x) or ¬φ(x) ∈ p(x). Any non-complete type is called a partial type.
So, the word type in general refers to any n-type, partial or complete, over any chosen set of parameters (possibly the empty set).
An n-type p(x) is said to be realized in 𝔐 if there is an element b ∈ Mn such that 𝔐 ⊨ p(b). The existence of such a realization is guaranteed for any type by the compactness theorem, although the realization might take place in some elementary extension of 𝔐, rather than in 𝔐 itself.
If a complete type is realized by b in 𝔐, then the type is typically denoted tpn𝔐(b/A) and referred to as the complete type of b over A.
A type p(x) is said to be isolated by a formula φ(x) ∈ p(x) if for all ψ(x) ∈ p(x) we have Th(𝔐) ⊨ ∀x (φ(x) → ψ(x)). Since finite subsets of a type are always realized in 𝔐, there is always an element b ∈ Mn such that φ(b) is true in 𝔐; i.e. 𝔐 ⊨ φ(b), thus b realizes the entire isolated type. So isolated types will be realized in every elementary substructure or extension. Because of this, isolated types can never be omitted (see below).
A model that realizes the maximum possible variety of types is called a saturated model, and the ultrapower construction provides one way of producing saturated models.
Examples of types
Consider the language L with one binary relation symbol, which we denote as <. Let 𝔐 = (ω, <) be the structure for this language, which is the ordinal ω with its standard well-ordering. Let T denote the first-order theory of 𝔐.
Consider the set of L(ω)-formulas p(x) = {n < x : n ∈ ω}. First, we claim this is a type. Let p0(x) ⊆ p(x) be a finite subset. We need to find a b ∈ ω that satisfies all the formulas in p0(x). We can take the successor of the largest ordinal mentioned in p0(x); this will clearly be greater than all the ordinals mentioned in p0(x). Thus we have that p(x) is a type.
Next, note that p(x) is not realized in 𝔐. For, if it were, there would be some n ∈ ω that is greater than every element of ω.
If we wanted to realize the type, we might be tempted to consider the structure (ω + 1, <), which is indeed an extension of 𝔐 that realizes the type. Unfortunately, this extension is not elementary; for example, it does not satisfy T. In particular, the sentence ∃x ∀y (y < x ∨ y = x), asserting the existence of a greatest element, is satisfied by this structure and not by 𝔐.
So, we wish to realize the type in an elementary extension. We can do this by defining a new L-structure, which we will denote 𝔐*. The domain of the structure will be ω ∪ ℤ*, where ℤ* is a disjoint copy of the integers (adorned so that ω ∩ ℤ* = ∅). We interpret the symbol < in our new structure by declaring a < b when a, b ∈ ω and a < b as ordinals, when a, b ∈ ℤ* and a < b as integers, or when a ∈ ω and b ∈ ℤ*. The idea is that we are adding a "ℤ-chain", or copy of the integers, above all the finite ordinals. Clearly any element of ℤ* realizes the type p(x). Moreover, one can verify that this extension is elementary.
Another example: the complete type of the number 2 over the empty set, considered as a member of the natural numbers, would be the set of all first-order statements (in the language of Peano arithmetic), describing a variable x, that are true when x = 2. This set would include formulas such as x ≠ 1+1+1, x < 1+1+1, and ∃y (y < x). This is an example of an isolated type, since, working over the theory of the naturals, the formula x = 1+1 implies all other formulas that are true about the number 2.
As a further example, the statements x · x = 1 + 1 and x > 0, describing the square root of 2, are consistent with the axioms of ordered fields, and can be extended to a complete type. This type is not realized in the ordered field of rational numbers, but is realized in the ordered field of reals. Similarly, the infinite set of formulas (over the empty set) {x>1, x>1+1, x>1+1+1, ...} is not realized in the ordered field of real numbers, but is realized in the ordered field of hyperreals. Similarly, we can specify a type {0 < x, x·(1+1) < 1, x·(1+1+1) < 1, ...} that is realized by an infinitesimal hyperreal that violates the Archimedean property.
The reason it is useful to restrict the parameters to a certain subset of the model is that it helps to distinguish the types that can be satisfied from those that cannot. For example, using the entire set of real numbers as parameters, one could generate an uncountably infinite set of formulas like x ≠ 1, x ≠ π, ... (one for each real number) that would explicitly rule out every possible real value for x, and therefore could never be realized within the real numbers.
Stone spaces
It is useful to consider the set of complete n-types over A as a topological space. Consider the following equivalence relation on formulas in the free variables x1,..., xn with parameters in A: φ ~ ψ if and only if 𝔐 ⊨ ∀x1...∀xn (φ(x1,...,xn) ↔ ψ(x1,...,xn)).
One can show that φ ~ ψ if and only if φ and ψ are contained in exactly the same complete types.
The set of formulas in free variables x1,...,xn over A up to this equivalence relation is a Boolean algebra (and is canonically isomorphic to the set of A-definable subsets of Mn). The complete n-types correspond to ultrafilters of this Boolean algebra. The set of complete n-types can be made into a topological space by taking the sets of types containing a given formula as a basis of open sets. This constructs the Stone space associated to the Boolean algebra, which is a compact, Hausdorff, and totally disconnected space.
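In this notation the basic open sets of the type space can be written down explicitly. A short sketch in LaTeX notation; the symbol S_n(A) for the set of complete n-types over A is standard but is introduced here for convenience, not taken from the text above:

[\varphi] = \{\, p \in S_n(A) : \varphi \in p \,\}, \qquad S_n(A) \setminus [\varphi] = [\neg\varphi], \qquad [\varphi] \cap [\psi] = [\varphi \wedge \psi]

Each basic open set is therefore also closed (clopen), and the compactness of S_n(A) is essentially a restatement of the compactness theorem.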
Example. The complete theory of algebraically closed fields of characteristic 0 has quantifier elimination, which allows one to show that the possible complete 1-types (over the empty set) correspond to:
Roots of a given irreducible non-constant polynomial over the rationals with leading coefficient 1. For example, the type of square roots of 2. Each of these types is an isolated point of the Stone space.
Transcendental elements, which are not roots of any non-zero polynomial. This type is a point in the Stone space that is closed but not isolated.
In other words, the 1-types correspond exactly to the prime ideals of the polynomial ring Q[x] over the rationals Q: if r is an element of the model of type p, then the ideal corresponding to p is the set of polynomials with r as a root (which is only the zero polynomial if r is transcendental). More generally, the complete n-types correspond to the prime ideals of the polynomial ring Q[x1,...,xn], in other words to the points of the prime spectrum of this ring. (The Stone space topology can in fact be viewed as the Zariski topology of a Boolean ring induced in a natural way from the Boolean algebra. While the Zariski topology is not in general Hausdorff, it is in the case of Boolean rings.) For example, if q(x,y) is an irreducible polynomial in two variables, there is a 2-type whose realizations are (informally) pairs (x,y) of elements with q(x,y)=0.
Omitting types theorem
Given a complete n-type p one can ask if there is a model of the theory that omits p, in other words there is no n-tuple in the model that realizes p.
If p is an isolated point in the Stone space, i.e. if {p} is an open set, it is easy to see that every model realizes p (at least if the theory is complete). The omitting types theorem says that conversely if p is not isolated then there is a countable model omitting p (provided that the language is countable).
Example: In the theory of algebraically closed fields of characteristic 0, there is a 1-type represented by elements that are transcendental over the prime field Q. This is a non-isolated point of the Stone space (in fact, the only non-isolated point). The field of algebraic numbers is a model omitting this type, and the algebraic closure of any
transcendental extension of the rationals is a model realizing this type.
All the other types are "algebraic numbers" (more precisely, they are the sets of first-order statements satisfied by some given algebraic number), and all such types are realized in all algebraically closed fields of characteristic 0.
References
Concepts in logic
Mathematical logic
Model theory | Type (model theory) | Mathematics | 2,107 |
227,100 | https://en.wikipedia.org/wiki/Silver%20nitrate | Silver nitrate is an inorganic compound with chemical formula AgNO3. It is a versatile precursor to many other silver compounds, such as those used in photography. It is far less sensitive to light than the halides. It was once called lunar caustic because silver was called luna by ancient alchemists, who associated silver with the moon. In solid silver nitrate, the silver ions are three-coordinated in a trigonal planar arrangement.
Synthesis and structure
Albertus Magnus, in the 13th century, documented the ability of nitric acid to separate gold and silver by dissolving the silver. Indeed silver nitrate can be prepared by dissolving silver in nitric acid followed by evaporation of the solution. The stoichiometry of the reaction depends upon the concentration of nitric acid used.
3 Ag + 4 HNO3 (cold and diluted) → 3 AgNO3 + 2 H2O + NO
Ag + 2 HNO3 (hot and concentrated) → AgNO3 + H2O + NO2
The structure of silver nitrate has been examined by X-ray crystallography several times. In the common orthorhombic form stable at ordinary temperature and pressure, the silver atoms form pairs with Ag---Ag contacts of 3.227 Å. Each Ag+ center is bonded to six oxygen centers of both uni- and bidentate nitrate ligands. The Ag-O distances range from 2.384 to 2.702 Å.
Reactions
A typical reaction with silver nitrate is to suspend a rod of copper in a solution of silver nitrate and leave it for a few hours. The silver nitrate reacts with copper to form hairlike crystals of silver metal and a blue solution of copper nitrate:
2 AgNO3 + Cu → Cu(NO3)2 + 2 Ag
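Because each mole of copper displaces two moles of silver, the mass of silver deposited can be worked out directly from the atomic weights. A minimal sketch; the atomic weights are standard values, assumed rather than taken from the text:

M_Cu = 63.55    # g/mol, copper
M_Ag = 107.87   # g/mol, silver

# 2 AgNO3 + Cu -> Cu(NO3)2 + 2 Ag: one mole of Cu yields two moles of Ag
silver_per_gram_copper = 2 * M_Ag / M_Cu

print(silver_per_gram_copper)   # ~3.4 g of silver per gram of copper dissolved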
Silver nitrate decomposes when heated:
2 AgNO3(l) → 2 Ag(s) + O2(g) + 2 NO2(g)
Qualitatively, decomposition is negligible below the melting point, but becomes appreciable around 250 °C, and the salt decomposes fully at 440 °C.
Most metal nitrates thermally decompose to the respective oxides, but silver oxide decomposes at a lower temperature than silver nitrate, so the decomposition of silver nitrate yields elemental silver instead.
Uses
Precursor to other silver compounds
Silver nitrate is the least expensive salt of silver; it offers several other advantages as well. It is non-hygroscopic, in contrast to silver fluoroborate and silver perchlorate. In addition, it is relatively stable to light, and it dissolves in numerous solvents, including water. The nitrate can be easily replaced by other ligands, rendering AgNO3 versatile. Treatment with solutions of halide ions gives a precipitate of AgX (X = Cl, Br, I). When making photographic film, silver nitrate is treated with halide salts of sodium or potassium to form insoluble silver halide in situ in photographic gelatin, which is then applied to strips of tri-acetate or polyester. Similarly, silver nitrate is used to prepare some silver-based explosives, such as the fulminate, azide, or acetylide, through a precipitation reaction.
Treatment of silver nitrate with base gives dark grey silver oxide:
2 AgNO3 + 2 NaOH → Ag2O + 2 NaNO3 + H2O
Halide abstraction
The silver cation, Ag+, reacts quickly with halide sources to produce the insoluble silver halide, which is a cream precipitate if Br− is used, a white precipitate if Cl− is used and a yellow precipitate if I− is used. This reaction is commonly used in inorganic chemistry to abstract halides:
Ag+(aq) + X−(aq) → AgX(s)
where X− = Cl−, Br−, or I−.
Other silver salts with non-coordinating anions, namely silver tetrafluoroborate and silver hexafluorophosphate are used for more demanding applications.
Similarly, this reaction is used in analytical chemistry to confirm the presence of chloride, bromide, or iodide ions. Samples are typically acidified with dilute nitric acid to remove interfering ions, e.g. carbonate ions and sulfide ions. This step avoids confusion of silver sulfide or silver carbonate precipitates with that of silver halides. The color of precipitate varies with the halide: white (silver chloride), pale yellow/cream (silver bromide), yellow (silver iodide). AgBr and especially AgI photo-decompose to the metal, as evidenced by a grayish color on exposed samples.
The same reaction was used on steamships in order to determine whether or not boiler feedwater had been contaminated with seawater. It is still used to determine if moisture on formerly dry cargo is a result of condensation from humid air, or from seawater leaking through the hull.
Organic synthesis
Silver nitrate is used in many ways in organic synthesis, e.g. for deprotection and oxidations. Ag+ binds alkenes reversibly, and silver nitrate has been used to separate mixtures of alkenes by selective absorption. The resulting adduct can be decomposed with ammonia to release the free alkene. Silver nitrate is highly soluble in water but is poorly soluble in most organic solvents, except acetonitrile (111.8 g/100 g, 25 °C).
Biology
In histology, silver nitrate is used for silver staining, for demonstrating reticular fibers, proteins and nucleic acids. For this reason it is also used to demonstrate proteins in PAGE gels. It can be used as a stain in scanning electron microscopy.
Cut flower stems can be placed in a silver nitrate solution, which prevents the production of ethylene. This delays ageing of the flower.
Indelible ink
Silver nitrate produces a long-lasting stain when applied to skin and is one of the ingredients of indelible ink. An electoral stain makes use of this to mark a finger of people who have voted in an election, allowing easy identification to prevent double voting.
In addition to staining skin, silver nitrate has a history of use in stained glass. In the 14th century, artists began using a "silver stain" (also known as a yellow stain) made from silver nitrate to create a yellow effect on clear glass. The stain would produce a stable color that could range from pale lemon to deep orange or gold. Silver stain was often used with glass paint, and was applied to the opposite side of the glass as the paint. It was also used to create a mosaic effect by reducing the number of pieces of glass in a window. Despite the age of the technique, this process of creating stained glass remains almost entirely unchanged.
Medicine
Silver salts have antiseptic properties. In 1881, Credé introduced a method known as Credé's prophylaxis, which used dilute (2%) solutions of silver nitrate in newborn babies' eyes at birth to prevent contraction of gonorrhea from the mother, which could cause blindness via ophthalmia neonatorum. (Modern antibiotics are now used instead.)
Fused silver nitrate, shaped into sticks, was traditionally called "lunar caustic". It is used as a cauterizing agent, for example to remove granulation tissue around a stoma. General Sir James Abbott noted in his journals that in India in 1827 it was infused by a British surgeon into wounds in his arm resulting from the bite of a mad dog to cauterize the wounds and prevent the onset of rabies.
Silver nitrate is used to cauterize superficial blood vessels in the nose to help prevent nosebleeds.
Dentists sometimes use silver nitrate-infused swabs to heal oral ulcers. Silver nitrate is used by some podiatrists to kill cells located in the nail bed.
The Canadian physician C. A. Douglas Ringrose researched the use of silver nitrate for sterilization procedures, believing that silver nitrate could be used to block and corrode the fallopian tubes. The technique was ineffective.
Disinfection
Much research has been done in evaluating the ability of the silver ion to inactivate Escherichia coli, a microorganism commonly used as an indicator for fecal contamination and as a surrogate for pathogens in drinking water treatment. Concentrations of silver nitrate evaluated in inactivation experiments range from 10 to 200 micrograms per liter as Ag+.
Silver's antimicrobial activity saw many applications prior to the discovery of modern antibiotics, when it fell into near disuse. Its association with argyria made consumers wary and led them to turn away from it when given an alternative.
Against warts
Repeated daily application of silver nitrate can induce adequate destruction of cutaneous warts, but occasionally pigmented scars may develop. In a placebo-controlled study of 70 patients, silver nitrate given over nine days resulted in clearance of all warts in 43% and improvement in warts in 26% one month after treatment compared to 11% and 14%, respectively, in the placebo group.
Safety
As an oxidant, silver nitrate should be properly stored away from organic compounds. It reacts explosively with ethanol. Despite its common usage in extremely low concentrations to prevent gonorrhea and control nosebleeds, silver nitrate is still very toxic and corrosive. Brief exposure will not produce any immediate side effects other than the purple, brown or black stains on the skin, but upon constant exposure to high concentrations, side effects will be noticeable, which include burns. Long-term exposure may cause eye damage. Silver nitrate is known to be a skin and eye irritant. Silver nitrate has not been thoroughly investigated for potential carcinogenic effect.
Silver nitrate is currently unregulated in water sources by the United States Environmental Protection Agency. However, if more than 1 gram of silver is accumulated in the body, a condition called argyria may develop. Argyria is a permanent cosmetic condition in which the skin and internal organs turn a blue-gray color. The United States Environmental Protection Agency used to have a maximum contaminant limit for silver in water until 1990, when it was determined that argyria did not impact the function of any affected organs despite the discolouration. Argyria is more often associated with the consumption of colloidal silver solutions rather than with silver nitrate, since it is only used at extremely low concentrations to disinfect the water. However, it is still important to be wary before ingesting any sort of silver-ion solution.
References
External links
International Chemical Safety Card 1116
NIOSH Pocket Guide to Chemical Hazards
History of Kodak: About Film and Imaging
13th century in science
Antiseptics
Electron microscopy stains
Nitrates
Photographic chemicals
Silver compounds
Staining dyes
Alchemical substances
Light-sensitive chemicals
Oxidizing agents
Chemical tests | Silver nitrate | Chemistry | 2,272 |
77,094,171 | https://en.wikipedia.org/wiki/PKS%200451-28 | PKS 0451-28 (full name PKS 0451-282), also known as MRC 0451-282, is a quasar located in the constellation of Caelum. Its redshift is 2.55, estimating the object to be located nearly 10.8 billion light-years away from Earth.
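The distance figure is a light-travel-time estimate derived from the redshift, and it depends on the assumed cosmology. A sketch of the conversion using the astropy library; the parameters H0 = 70 km/s/Mpc and Ωm = 0.3 are an assumption, since the article does not state which values were used:

from astropy.cosmology import FlatLambdaCDM

# Assumed flat Lambda-CDM cosmology (not stated in the article)
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

z = 2.55
t = cosmo.lookback_time(z)   # light-travel time to redshift z

print(t)   # ~10.9 Gyr, i.e. the light left roughly 11 billion years ago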
Characteristics
Observed by the 20-GHz Australia Telescope Compact Array radio survey, PKS 0451-28 is classified as a blazar. It is a type of an active extragalactic object launching out a relativistic astrophysical jet towards the direction of Earth with the observer's line of sight.
The emitted radiation from PKS 0451-28 shows strong variability across its entire electromagnetic spectrum. As a source of non-thermal emission, from radio up to the high-energy (HE; >100 MeV) and very-high-energy (VHE; >100 GeV) γ-ray bands, the jets of PKS 0451-28 are known to cover the entire spectrum. The emission tends to vary on short time-scales, down to minutes in the γ-ray band, causing increases in luminosity. The flux variation in PKS 0451-28, the observed superluminal motion, the high degrees of polarization, and other observed features are explained by relativistic beaming effects.
Moreover, PKS 0451-28 is a flat-spectrum radio quasar (FSRQ). It has strong emission lines (EW > 5 Å) and contains a powerful radio source observed by NuSTAR, with a visual magnitude of 16.7 and a redshift of 0.9, and its radio fluxes have been catalogued at 1.8 Jy at 5 GHz and 3 Jy at 31 GHz respectively.
Observations of PKS 0451-28
According to researchers, the γ-ray luminosity of PKS 0451-28 is found to exceed 10^48 erg s^-1; the highest γ-ray luminosity, (5.54 ± 0.06) × 10^48 erg s^-1, is estimated for another blazar, B3 1343+451. Naturally, compared to the distribution of all γ-ray-emitting BL Lacs and FSRQs in the Γγ−Lγ plane, the blazars observed occupy the highest luminosity range.
Interestingly, PKS 0451-28 appears as a bright X-ray emitter but does not show distinguishing features in the X-ray band, having only a flux and photon index similar to those of the other blazars considered. Along with other studied blazars like PKS 0537-286, PKS 1351-108, PKS 0438-43, PKS 0834-20 and TXS 0222+185, a thermal blue-bump component is found in PKS 0451-28, suggesting emission directly from its disc.
Researchers also noted the X-ray flux of PKS 0451-28 is consistent, remaining at (9.52 ± 1.21) × 10^-14 erg cm^-2 s^-1, in contrast with a few blazars like PKS 0438-43, whose X-ray flux was in a bright state on December 15, 2016, at (1.09 ± 0.16) × 10^-11 erg cm^-2 s^-1, compared with (1.30 ± 0.31) × 10^-11 erg cm^-2 s^-1 in the quiescent state.
Moreover, the adaptively binned light curves for PKS 0451-28 show several episodes of γ-ray brightening, with γ-ray flux increases observed on day scales. The peak γ-ray flux of (2.20 ± 0.50) × 10^-7 photon cm^-2 s^-1 in PKS 0451-28 is found above 163.2 MeV, observed at MJD 56968.60 ± 0.79 with 9.64σ significance, corresponding to a flux of (3.70 ± 0.84) × 10^-7 photon cm^-2 s^-1 above 100 MeV. During this period, Γγ was 2.06 ± 0.19. Only the photon index of PKS 0451-28 varies in time; the variation is highly significant, with P(χ²) ≤ 10^-5.
Disc luminosity
The disc luminosity of PKS 0451-28 is estimated to be Ld ≃ (1.09−10.94) × 10^46 erg s^-1, according to researchers who calculated the energetics of the source from the modelling results.
Supermassive black hole and jet luminosity
The supermassive black hole in PKS 0451-28 has a mass within (1.69−5.35) × 10^9 M⊙, as calculated by researchers through a traditional virial method. The disc luminosity corresponds to around 5–16 percent of the Eddington luminosity.
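The 5–16 percent figure can be checked against the usual Eddington luminosity for ionised hydrogen, L_Edd ≈ 1.26 × 10^38 (M/M⊙) erg s^-1, a textbook constant assumed here; pairing the low mass with the low disc luminosity (and high with high) is likewise an assumption about how the quoted ranges correspond:

L_EDD_PER_MSUN = 1.26e38   # Eddington luminosity per solar mass, erg/s

# Black-hole masses (solar masses) and disc luminosities (erg/s) quoted above
for mass, L_disc in [(1.69e9, 1.09e46), (5.35e9, 10.94e46)]:
    L_edd = L_EDD_PER_MSUN * mass
    print(round(100 * L_disc / L_edd, 1), "% of the Eddington luminosity")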
As for the jet power in PKS 0451-28, it is carried in the form of the magnetic field (LB) and relativistic electrons (Le). Researchers calculated the jet power as L = πR²cΓ²Ui, where Ui is either the electron (Ue) or magnetic field (UB) energy density. Furthermore, the jet luminosity (defined as L = Le + LB) is ≤1.41 × 10^46 erg s^-1 for PKS 0451-28. It is lower than the disc luminosity Ld ≃ (1.09−10.94) × 10^46 erg s^-1, although it shows a significant correlation with the broad-line luminosity of the blazar, supporting the theory that jet power is closely tied to accretion.
The jet power follows an approximate relation log LBLR ~ (0.98 ± 0.07) log Pjet for all blazars including PKS 0451-28. The values are consistent with the theoretically predicted coefficient of the log LBLR–log Ljet relation. The results support the view that the jets in blazars like PKS 0451-28 are powered by energy extraction from both accretion and black hole spin, as observed by Fermi. This establishes PKS 0451-28 as a powerful blazar with high luminosity, of the same order as calculated for other blazars studied, both distant and nearby, since the jet powers do not differ substantially from those usually estimated for bright FSRQs.
References
Blazars
Quasars
Caelum
2824135
Active galaxies | PKS 0451-28 | Astronomy | 1,427 |
36,808,384 | https://en.wikipedia.org/wiki/Tar%20spot | Tar spot may refer to:
Phyllachora maydis, a fungus species that cause tar spot disease of maize
Rhytisma acerinum, a fungus species that causes tar spot disease of maples | Tar spot | Biology | 45 |
648,096 | https://en.wikipedia.org/wiki/Dangling%20else | The dangling else is a problem in programming of parser generators in which an optional else clause in an if–then(–else) statement can make nested conditional statements ambiguous. Formally, the reference context-free grammar of the language is ambiguous, meaning there is more than one correct parse tree.
In many programming languages, one may write conditionally executed code in two forms: the if-then form, or the if-then-else form (the else clause is optional):
if a then s
if b then s1 else s2
Ambiguous interpretation becomes possible when there are nested statements; specifically when an if-then-else form replaces the statement s inside the above if-then construct:
if a then if b then s1 else s2
In this example, s1 gets executed if and only if a is true and b is true. But what about s2? One person might be sure that s2 gets executed whenever a is false (by attaching the else to the first if), while another person might be sure that s2 gets executed only when a is true and b is false (by attaching the else to the second if). In other words, someone could interpret the previous statement as being equivalent to either of the following unambiguous statements:
if a then { if b then s1 } else s2
if a then { if b then s1 else s2 }
The dangling-else problem dates back to ALGOL 60, and subsequent languages have resolved it in various ways. In LR parsers, the dangling else is the archetypal example of a shift-reduce conflict.
Avoiding ambiguity while keeping the syntax
This problem often comes up in compiler construction, especially scannerless parsing. The convention when dealing with the dangling else is to attach the else to the nearest if statement, which in particular allows unambiguous context-free grammars. Programming languages like Pascal, C, and Java follow this convention, so there is no ambiguity in the semantics of the language, though the use of a parser generator may lead to ambiguous grammars. In these cases alternative grouping is accomplished by explicit blocks, such as begin...end in Pascal and {...} in C.
Depending on the compiler construction approach, one may take different corrective actions to avoid ambiguity:
If the parser is produced by an SLR, LR(1), or LALR LR parser generator, the programmer will often rely on the generated parser feature of preferring shift over reduce whenever there is a conflict. Alternatively, the grammar can be rewritten to remove the conflict, at the expense of an increase in grammar size (see below).
If the parser is hand-written, the programmer may use a non-ambiguous context-free grammar. Alternatively, one may rely on a non-context-free grammar or a parsing expression grammar.
Avoiding ambiguity by changing the syntax
The problem can also be solved by making explicit the link between an else and its if, within the syntax. This usually helps avoid human errors.
Possible solutions are:
Having an "end if" symbol delimiting the end of the if construct. Examples of such languages are ALGOL 68, Ada, Eiffel, PL/SQL, Visual Basic, Modula-2, and AppleScript.
Disallowing the statement following a "then" to be an "if" itself (it may however be a pair of statement brackets containing only an if-then-clause). ALGOL 60 follows this approach.
Requiring braces (parentheses) when an "else" follows an "if".
Requiring every "if" to be paired with an "else". To avoid a similar problem concerning semantics rather than syntax, Racket deviates from Scheme by considering an if without a fallback clause to be an error, effectively distinguishing conditional expressions (i.e if) from conditional statements (i.e when and unless, which do not have fallback clauses).
Using different keywords for the one-alternative and two-alternative "if" statements. S-algol uses if e do s for the one-alternative case and if e1 then e2 else e3 for the general case.
Requiring braces unconditionally, like Swift. This is effectively true in Python, as its indentation rules delimit every block, not just those in "if" statements (see the sketch after this list).
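A minimal Python sketch of that point; the names a, b, s1 and s2 are placeholders echoing the pseudocode above, and the two fragments are the two readings that are ambiguous in C-style syntax:

a, b = True, False

def s1(): print("s1")
def s2(): print("s2")

if a:
    if b:
        s1()
    else:        # indentation binds this else to the inner if
        s2()

if a:
    if b:
        s1()
else:            # indentation binds this else to the outer if
    s2()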
Examples
Concrete examples follow.
C
In C, the grammar reads, in part:
statement = ...
| selection-statement
selection-statement = ...
| IF ( expression ) statement
| IF ( expression ) statement ELSE statement
Thus, without further rules, the statement
if (a) if (b) s; else s2;
could ambiguously be parsed as if it were either:
if (a)
{
if (b)
s;
else
s2;
}
or:
if (a)
{
if (b)
s;
}
else
s2;
The C standard clarifies that an else block is associated with the nearest if. Therefore, the first tree is chosen.
Avoiding the conflict in LR parsers
The above example could be rewritten in the following way to remove the ambiguity:
statement: open_statement
| closed_statement
;
open_statement: IF '(' expression ')' statement
| IF '(' expression ')' closed_statement ELSE open_statement
;
closed_statement: non_if_statement
| IF '(' expression ')' closed_statement ELSE closed_statement
;
non_if_statement: ...
;
Any other statement-related grammar rules may also have to be duplicated in this way if they may directly or indirectly end with a statement or selection-statement non-terminal.
The same technique extends to grammars with other statements. The following grammar includes both if and while statements, and forbids the ambiguous if parse:
statement: open_statement
| closed_statement
;
open_statement: IF '(' expression ')' statement
| IF '(' expression ')' closed_statement ELSE open_statement
| WHILE '(' expression ')' open_statement
;
closed_statement: simple_statement
| IF '(' expression ')' closed_statement ELSE closed_statement
| WHILE '(' expression ')' closed_statement
;
simple_statement: ...
;
With this grammar the statement if (a) if (b) c else d can only be parsed one way, because the other interpretation (if (a) {if (b) c} else d) is produced as
statement
open_statement
IF '(' expression ')' closed_statement ELSE open_statement
'if' '(' 'a' ')' closed_statement 'else' 'd'
and then the parsing fails trying to match closed_statement to "if (b) c". An attempt via closed_statement fails in the same way. The other parse, if (a) {if (b) c else d}, succeeds:
statement
open_statement
IF '(' expression ')' statement
IF '(' expression ')' closed_statement
IF '(' a ')' (IF '(' expression ')' closed_statement ELSE closed_statement)
IF '(' a ')' (IF '(' b ')' c ELSE 'd')
See also
Lexer hack
Most vexing parse
References
Ambiguity
Computer programming
Conditional constructs
Parsing | Dangling else | Technology,Engineering | 1,630 |
74,800,397 | https://en.wikipedia.org/wiki/Mohan%20Edirisinghe | Mohan Jayantha Edirisinghe is a biomaterials engineer who is the Bonfield Chair of Biomaterials in the Department of Mechanical Engineering at University College London. Edirisinghe studies new materials forming methodologies, with a focus on the development of new biostructures. He was appointed to the Order of the British Empire in the 2021 New Year Honours for his services to biomedical engineering.
Early life and education
Edirisinghe was born in Sri Lanka. He was educated at S. Thomas' College, Mount Lavinia. He studied at the University of Moratuwa, in a joint course between the British Council and the University of Leeds. He eventually moved to Leeds for his postgraduate studies, where he completed a doctorate on alloy additions and their impact on the properties of cast iron; a Doctor of Science degree was awarded for his materials research by the University of Leeds in 2000.
Research and career
Edirisinghe works on materials forming and manufacturing for healthcare and drug delivery. He has developed complex nanofibres that can be used to generate antimicrobial filters, and nanobubbles that facilitate new modes of drug delivery. He developed electrohydrodynamic printing methods (e.g. electrospinning and electrospraying) for biomedical applications. In 2010, Edirisinghe's inventions led to AtoCap, a spin-out company focused on the encapsulation of generic drugs (e.g. antibiotics and chemotherapeutics) into complex capsules. He has studied the protein composition of milk and identified that casein, which contributes 80% of the protein in dairy milk, has anti-microbial and anti-inflammatory properties. Edirisinghe demonstrated that it could be incorporated into biodegradable plastic bandages to accelerate wound healing.
During the early days of the COVID-19 pandemic, Edirisinghe worked with the Royal Academy of Engineering to create new respirator masks. He developed antiviral masks and also worked on air filters that could be used in care homes, schools and on public transport.
Awards and honours
2005 Royal Society Brian Mercer (Innovation) Feasibility Award
2009 IOM3 Kroll Medal
2010 Materials Science Venture Prize
2012 UK Biomaterials Society Presidents Prize
2013 Royal Society Brian Mercer (Innovation) Feasibility Award
2015 Elected Fellow of the Royal Academy of Engineering
2017 Royal Academy of Engineering Armourers & Brasier's Prize
2017 Royal Society Brian Mercer (Innovation) Feasibility Award
2017 IOM3 Chapman Medal
2020 Elected Fellow of the European Academy of Sciences
2023 Royal Academy of Engineering Colin Campbell Mitchell Award
2024 Royal Society Clifford Paterson Medal and Lecture
Selected publications
C J Luo; M Nangrejo, M Edirisinghe (2010). "A novel method of selecting solvents for polymer electrospinning". Polymer. 51 (7): 1654-1662. https://doi.org/10.1016/j.polymer.2010.01.031
* The full list of publications can be accessed here.
References
Living people
Alumni of S. Thomas' College, Mount Lavinia
Alumni of the University of Leeds
Academics of University College London
Sri Lankan emigrants to the United Kingdom
Bioengineers
Fellows of the Royal Academy of Engineering
21st-century British engineers
Members of the Order of the British Empire
Year of birth missing (living people) | Mohan Edirisinghe | Engineering,Biology | 691 |
65,504,712 | https://en.wikipedia.org/wiki/Italian%20Union%20of%20Chemical%2C%20Energy%20and%20Resource%20Workers | The Italian Union of Chemical, Energy and Resource Workers (, UILCER) was a trade union representing manufacturing and utility workers in Italy.
The union was founded in the summer of 1994, when the Italian Union of Chemical and Allied Industries merged with the Italian Union of Oil and Gas Workers. Like both its predecessors, it affiliated to the Italian Union of Labour. From 1995, it was led by Romano Bellissima. On 25 March 1999, it merged with the Italian Union of Public Service Workers, to form the Italian Union of Chemical, Energy and Manufacturing Workers.
References
Chemical industry in Italy
Chemical industry trade unions
Trade unions in Italy
Trade unions established in 1994
Trade unions disestablished in 1999 | Italian Union of Chemical, Energy and Resource Workers | Chemistry | 140 |
193,366 | https://en.wikipedia.org/wiki/Asafoetida | Asafoetida (also spelled asafetida) is the dried latex (gum oleoresin) exuded from the rhizome or tap root of several species of Ferula, perennial herbs of the carrot family. It is produced in Iran, Afghanistan, Central Asia, northern India and Northwest China (Xinjiang). Different regions have different botanical sources.
Asafoetida has a pungent smell, as reflected in its name, lending it the common name of "stinking gum". The odour dissipates upon cooking; in cooked dishes, it delivers a smooth flavour reminiscent of leeks or other onion relatives. Asafoetida is also known colloquially as "devil's dung" in English (and similar expressions in many other languages).
Etymology and other names
The English name is derived from asa, a Latinised form of Persian azā 'mastic', and Latin foetidus 'stinky'.
The plant has many other names, its pungent odour having resulted in many of them being unpleasant.
Composition
Typical asafoetida contains about 40–64% resin, 25% endogenous gum, 10–17% volatile oil, and 1.5–10% ash. The resin portion contains asaresinotannols A and B, ferulic acid, umbelliferone, and four unidentified compounds. The volatile oil component is rich in various organosulfide compounds, such as 2-butyl-propenyl-disulfide, diallyl sulfide, diallyl disulfide (also present in garlic) and dimethyl trisulfide, which is also responsible for the odour of cooked onions. The organosulfides are primarily responsible for the odour and flavour of asafoetida.
Botanical sources
Many Ferula species are utilised as the sources of asafoetida. Most of them are characterised by abundant sulphur-containing compounds in the essential oil.
Ferula foetida is the source of asafoetida in Eastern Iran, western Afghanistan, western Pakistan and Central Asia (Karakum Desert, Kyzylkum Desert). It is one of the most widely distributed asafoetida-producing species and often mistaken for F. assa-foetida. It has sulphur-containing compounds in the essential oil.
Ferula assa-foetida is endemic to Southern Iran and is the source of asafoetida there. It has sulphur-containing compounds in the essential oil. Although it is often considered the main source of asafoetida on the international market, this notion is attributable to the fact that several Ferula species acting as the major sources are often misidentified as F. assa-foetida. In fact, the production of asafoetida from F. assa-foetida is confined to its native range, namely Southern Iran, outside which the sources of asafoetida are other species.
Ferula pseudalliacea and Ferula rubricaulis are endemic to western and southwestern Iran. They are sometimes considered conspecific with F. assa-foetida.
Ferula lutensis and Ferula alliacea are the sources of asafoetida in Eastern Iran. They have sulphur-containing compounds in the essential oil.
Ferula latisecta is the source of asafoetida in Eastern Iran and southern Turkmenistan. It has sulphur-containing compounds in the essential oil.
Ferula sinkiangensis and Ferula fukanensis are endemic to Xinjiang, China. They are the sources of asafoetida in China. They have sulphur-containing compounds in the essential oil.
Ferula narthex is native to Afghanistan, northern Pakistan and Kashmir. Although it is often listed as the source of asafoetida, one report states that it lacks sulphur-containing compounds in the essential oil.
Uses
Cooking
This spice is used as a digestive aid, in food as a condiment, and in pickling. It plays a critical flavouring role in Indian vegetarian cuisine by acting as a savoury enhancer. Used along with turmeric, it is a standard component of lentil curries, such as dal, chickpea curries, and vegetable dishes, especially those based on potato and cauliflower. Asafoetida is quickly heated in hot oil before being sprinkled on the food. It is sometimes used to harmonise sweet, sour, salty, and spicy components in food. The spice is added to the food while it is being tempered.
In its pure form, it is sold in the form of chunks of resin, small quantities of which are scraped off for use. The odour of the pure resin is so strong that the pungent smell will contaminate other spices stored nearby if it is not stored in an airtight container.
When adapting recipes for those with garlic allergy or intolerance, asafoetida can be used as a substitute.
Cultivation and manufacture
The resin-like gum comes from the dried sap extracted from the stem and roots, and is used as a spice. The resin is greyish-white when fresh, but dries to a dark amber colour. The asafoetida resin is difficult to grate and is traditionally crushed between stones or with a hammer. Today, the most commonly available form is compounded asafoetida, a fine powder containing 30% asafoetida resin, along with rice flour or maida (white wheat flour) and gum arabic.
Ferula assa-foetida is a monoecious, herbaceous, perennial plant of the family Apiaceae. It grows tall, with a circular mass of leaves. Stem leaves have wide sheathing petioles. Flowering stems are tall, thick and hollow, with a number of schizogenous ducts in the cortex containing the resinous gum. Flowers are pale greenish yellow, produced in large compound umbels. Fruits are oval, flat, thin, reddish brown, and have a milky juice. Roots are thick, massive, and pulpy. They yield a resin similar to that of the stems. All parts of the plant have the distinctive fetid smell.
History
Asafoetida was familiar in the early Mediterranean, having come by land across Iran. It was brought to Europe by an expedition of Alexander the Great, who, after returning from a trip to northeastern ancient Persia, thought that he had found a plant almost identical to the famed silphium of Cyrene in North Africa—though less tasty. Dioscorides, in the first century, wrote, "the Cyrenaic kind, even if one just tastes it, at once arouses a humour throughout the body and has a very healthy aroma, so that it is not noticed on the breath, or only a little; but the Median [Iranian] is weaker in power and has a nastier smell." Nevertheless, it could be substituted for silphium in cooking, which was fortunate, because a few decades after Dioscorides' time, the true silphium of Cyrene became extinct, and asafoetida became more popular amongst physicians, as well as cooks.
Asafoetida is also mentioned numerous times in Jewish literature, such as the Mishnah. Maimonides also writes in the Mishneh Torah: "In the rainy season, one should eat warm food with much spice, but a limited amount of mustard and asafoetida."
While it is generally forgotten now in Europe, it is widely used in India. Asafoetida is mentioned in the Bhagavata Purana (7:5:23-24), which states that one must not have eaten hing before worshipping the deity. Asafoetida is eaten by Brahmins and Jains. Devotees of the Hare Krishna movement also use hing in their food, as they are not allowed to consume onions or garlic. Their food has to be presented to Lord Krishna for sanctification (to become Prasadam) before consumption and onions and garlic cannot be offered to Krishna.
Asafoetida was described by a number of Arab and Islamic scientists and pharmacists. Avicenna discussed the effects of asafoetida on digestion. Ibn al-Baitar and Fakhr al-Din al-Razi described some positive medicinal effects on the respiratory system.
After the fall of Rome and until the 16th century, asafoetida was rare in Europe, and if ever encountered, it was viewed as a medicine. "If used in cookery, it would ruin every dish because of its dreadful smell", asserted Garcia de Orta's European guest. "Nonsense", Garcia replied, "nothing is more widely used in every part of India, both in medicine and in cookery."
During the Italian Renaissance, asafoetida was used as part of the exorcism ritual.
See also
Ammoniacum
Chaat masala
Durian, a fruit with a pungent odour many find disagreeable
Muskroot
South Asian pickle
Turmeric
References
External links
Saudi Aramco World article on the history of asafoetida
Antiflatulents
Edible Apiaceae
Ferula
Indian spices
Medicinal plants of Asia
Resins
Spices | Asafoetida | Physics | 1,943 |
8,436,735 | https://en.wikipedia.org/wiki/Kekul%C3%A9%20Program | Kekulé was a computer program named after the chemist Friedrich August Kekulé von Stradonitz. The program was created starting in about 1990 by Joe McDaniel and Jason Balmuth while at Fein-Marquart Associates with funding from the National Cancer Institute under a Small Business Innovative Research Grant.
Overview
The program was created to satisfy a need at the NCI for entering chemical structures into a database. The format required for the database was a connection table while the published form of a structure was a drawing. The program could take a scanned image of the drawn structure and automatically read the atom labels (characters) and lines between atoms (bonds) to create the connection table for input into the database.
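The target data structure can be illustrated with a short sketch. The layout below is hypothetical — the sources do not describe the NCI database format — but it captures what a connection table encodes: a list of atom labels and a list of bonds between them.

#include <stdio.h>

/* A bond connects two atoms (by index into the atom list) with a bond order. */
struct Bond { int a, b, order; };

int main(void) {
    /* Hypothetical connection table for ethanol's heavy atoms: C-C-O. */
    const char *atoms[] = { "C", "C", "O" };
    struct Bond bonds[] = { {0, 1, 1}, {1, 2, 1} };  /* two single bonds */
    int nbonds = (int)(sizeof bonds / sizeof bonds[0]);

    for (int i = 0; i < nbonds; i++)
        printf("%s(%d) - %s(%d), order %d\n",
               atoms[bonds[i].a], bonds[i].a,
               atoms[bonds[i].b], bonds[i].b, bonds[i].order);
    return 0;
}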
NCI has ceased to use the program.
Several articles describing the internal operation of the program were written and published in refereed journals such as the Journal of Chemical Information and Computer Sciences.
References
Chemistry software | Kekulé Program | Chemistry | 185 |
32,670,685 | https://en.wikipedia.org/wiki/Stauropteridaceae | Stauropteridaceae is a family of ferns or fern-like plants from the Devonian and Upper Carboniferous. It is the only family placed in the order Stauropteridales.
References
Ferns
Devonian plants
Pennsylvanian plants
Devonian first appearances
Pennsylvanian extinctions
Prehistoric plant families | Stauropteridaceae | Biology | 62 |
30,992,863 | https://en.wikipedia.org/wiki/Proaftn | Proaftn is a fuzzy classification method that belongs to the class of supervised learning algorithms. The acronym Proaftn stands for: (PROcédure d'Affectation Floue pour la problématique du Tri Nominal), which means in English: Fuzzy Assignment Procedure for Nominal Sorting.
The method determines fuzzy indifference relations by generalizing the concordance and discordance indices used in the ELECTRE III method. To determine the fuzzy indifference relations, Proaftn uses the general scheme of a discretization technique that establishes a set of pre-classified cases, called a training set.
To solve classification problems, Proaftn proceeds in the following stages:
Stage 1. Modeling of classes: In this stage, the prototypes of the classes are conceived using the two following steps:
Step 1. Structuring: The prototypes and their parameters (thresholds, weights, etc.) are established using the available knowledge given by the expert.
Step 2. Validation: One of the two following techniques is used to validate or adjust the parameters obtained in the first step, using the assignment examples known as the training set.
Direct technique: It consists in adjusting the parameters through the training set and with the expert intervention.
Indirect technique: It consists in fitting the parameters without the expert intervention as used in machine learning approaches.
In multicriteria classification problems, the indirect technique is known as preference disaggregation analysis. This technique requires less cognitive effort than the former; it uses an automatic method to determine the optimal parameters, which minimize the classification errors.
Furthermore, several heuristics and metaheuristics were used to learn the multicriteria classification method Proaftn.
Stage 2. Assignment: After conceiving the prototypes, Proaftn proceeds to assign the new objects to specific classes.
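As an illustration only — Proaftn's actual indices are more involved — the following sketch makes some simplifying assumptions: one prototype per class, described per attribute by an interval [s1, s2] with discrimination thresholds, a trapezoidal per-attribute membership, and a weighted sum as the fuzzy indifference index. It then assigns an object to the class with the highest index. All parameter values are hypothetical.

#include <stdio.h>

#define NATTR 2
#define NCLASS 2

/* Trapezoidal membership: 1 inside [s1,s2], falling linearly to 0
   beyond the discrimination thresholds d1 (below) and d2 (above). */
double membership(double x, double s1, double s2, double d1, double d2) {
    if (x >= s1 && x <= s2) return 1.0;
    if (x < s1 && x > s1 - d1) return (x - (s1 - d1)) / d1;
    if (x > s2 && x < s2 + d2) return ((s2 + d2) - x) / d2;
    return 0.0;
}

int main(void) {
    double s1[NCLASS][NATTR] = {{0.0, 0.0}, {5.0, 5.0}};
    double s2[NCLASS][NATTR] = {{3.0, 3.0}, {9.0, 9.0}};
    double w[NATTR] = {0.5, 0.5};      /* attribute weights, summing to 1 */
    double obj[NATTR] = {4.0, 6.0};    /* new object to classify */

    int best = -1;
    double bestScore = -1.0;
    for (int c = 0; c < NCLASS; c++) {
        double score = 0.0;            /* fuzzy indifference index */
        for (int j = 0; j < NATTR; j++)
            score += w[j] * membership(obj[j], s1[c][j], s2[c][j], 1.0, 1.0);
        printf("class %d: indifference %.2f\n", c, score);
        if (score > bestScore) { bestScore = score; best = c; }
    }
    printf("assigned to class %d\n", best);
    return 0;
}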
References
External links
Site dedicated to the sorting problematic of MCDA
Machine learning
Statistical classification | Proaftn | Engineering | 397 |
77,882 | https://en.wikipedia.org/wiki/Jukebox | A jukebox is a partially automated music-playing device, usually a coin-operated machine, that plays a patron's selection from self-contained media. The classic jukebox has buttons with letters and numbers on them, which are used to select specific records. Some may use compact discs instead. Disc changers are similar devices for home use; they are small enough to fit on a shelf and can hold up to hundreds of discs, allowing them to be easily removed, replaced, or inserted by the user.
History
Coin-operated music boxes and player pianos were the first forms of automated coin-operated musical devices. These devices used paper rolls, metal disks, or metal cylinders to play a musical selection on an actual instrument, or on several actual instruments, enclosed within the device.
In the 1890s, these devices were joined by machines which used recordings instead of actual physical instruments.
In 1889, Louis Glass and William S. Arnold invented the nickel-in-the-slot phonograph, in San Francisco. This was an Edison Class M Electric Phonograph retrofitted with a device patented under the name of 'Coin Actuated Attachment for Phonograph'. The music was heard via one of four listening tubes.
In 1928, Justus P. Seeburg, who was manufacturing player pianos, combined an electrostatic loudspeaker with a record player that was coin-operated. This 'Audiophone' machine was wide and bulky because it had eight separate turntables mounted on a rotating Ferris wheel-like device, allowing patrons to select from eight different 10" 78rpm records.
Also in 1928, Homer E. Capehart and some backers founded the Capehart Automatic Phonograph Company, which brought out the Orchestrope. It was a device in which the tone arm slipped between each record in a vertical stack, playing that record on which the needle fell.
A similar system to Seeburg's Audiophone was employed by the Mills Novelty Company in their 1935 Dancemaster Automatic Phonograph. The Seeburg Symphonola "Trashcan" jukebox of 1938 holds 20 10" 78rpm records each in a shallow centreless drawer so that when the selected record's drawer opens, the turntable can rise through the open centre of the drawer to lift the record up to meet the pickup arm at the top of the mechanism, where it plays. Working examples of both these instruments may be seen and heard at the Musical Museum, Brentford, England.
Later versions of the jukebox included Seeburg's Selectophone with 10 turntables mounted vertically on a spindle. By maneuvering the tone arm up and down, the customer could select from 10 different records.
The word "jukebox" came into use in the United States beginning in 1940, apparently derived from the familiar usage "juke joint", derived from the Gullah word juke, which means "bawdy". Manufacturers of jukeboxes tried to avoid using the term, associated with unreputable places, for many years.
Wallboxes were an important, and profitable, part of any jukebox installation. Serving as a remote control, they enabled patrons to select tunes from their table or booth. One example is the Seeburg 3W1, introduced in 1949 as companion to the 100-selection Model M100A jukebox. Stereo sound became popular in the early 1960s, and wallboxes of the era were designed with built-in speakers to provide patrons a sample of this latest technology.
Jukeboxes were most popular from the 1940s through the mid-1960s, particularly during the 1950s. By the middle of the 1940s, three-quarters of the records produced in America went into jukeboxes. Billboard published a record chart measuring jukebox play during the 1950s, which briefly became a component of the Hot 100; by 1959, the jukebox's popularity had waned to the point where Billboard ceased publishing the chart and stopped collecting jukebox play data.
As of 2016, at least two companies still manufacture classically styled jukeboxes: Rockola, based in California, and Sound Leisure, based in Leeds in the UK. Both companies manufacture jukeboxes based on a CD playing mechanism. However, in April 2016, Sound Leisure showed a prototype of a "Vinyl Rocket" at the UK Classic Car Show. It stated that it would start production of the 140 7" vinyl selector (70 records) in summer of the same year.
Since 2018, Orphéau, based in Brittany, France, has manufactured the retro-styled "Sunflower" jukebox, featuring the first 12" vinyl record selector (20 records, playable on both sides).
Notable models
1927 LINK – Valued at US$40,000 and extremely rare
1940 Gabel Kuro – 78 rpm, the manufacturer's last model. Four or five are known to exist; valued at US$125,000
1942 Rock-Ola President – Only one is known to exist; valued at least US$150,000
1942 Rock-Ola Premier – 15 known to exist; valued at US$20,000
1942 Wurlitzer 950 – 75–90 known to exist; valued at US$35,000
1946 Wurlitzer Model 1015 – Called the "1015 bubbler", it offered 24 selections. More than 56,000 were sold in less than two years. Considered a pop culture icon, it was designed by Paul Fuller.
1952 Seeburg M100C – The jukebox exterior used in the credit sequences for Happy Days in seasons 1–10. It played up to fifty 45-RPM records, making it a 100-play. It was very colorful, with chrome glass tubes on the front, mirrors in the display, and rotating animation in the pilasters.
1967 Rock-Ola 434 Concerto – The jukebox interior used in the credit sequence for the 11th and final season of Happy Days. Like the Seeburg M100C, it played up to fifty 45-RPM records, but unlike the M100C, had a horizontal playback mechanism.
2018 Orphéau Sunflower Serie – The first jukebox that played up to twenty 33-RPM records on both sides.
Decline
Traditional jukeboxes once were an important source of income for record publishers. Jukeboxes received the newest recordings first. They became an important market-testing device for new music, since they tallied the number of plays for each title. They let listeners control the music outside of their home, before audio technology became portable. They played music on demand without commercials. They also offered high fidelity listening before home high fidelity equipment became affordable.
In 1995, the United States Postal Service issued a 25-cent stamp commemorating the jukebox.
Modern derivatives
Jukebox digital music players
The term "jukebox" was used to describe high-capacity, hard disk based digital audio play due to their amount of digital space allowing a great number of music to be stored and played. The term was popularised following the introduction of the Creative NOMAD Jukebox in 2000, which could store as many as 150 CDs of music on its six gigabyte hard drive. In later years, the "classic" iPod would become the most popular product in this category.
Digital jukebox and apps
While the number of traditional jukeboxes declined, digital jukeboxes, also called "social jukeboxes", have been introduced.
See also
BAL-AMi Jukeboxes
Boombox
Music box
Player piano
Rock-Ola
Seeburg 1000
Sound Leisure
Vending machine
Juke Box Jury
References
Further reading
External links
Musical culture
American inventions
Audio engineering
Commercial machines
Vending machines | Jukebox | Physics,Technology,Engineering | 1,566 |
2,336,235 | https://en.wikipedia.org/wiki/Hilbert%27s%20twentieth%20problem | Hilbert's twentieth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It asks whether all boundary value problems can be solved (that is, do variational problems with certain boundary conditions have solutions).
Introduction
Hilbert noted that there existed methods for solving partial differential equations where the function's values were given at the boundary, but the problem asked for methods for solving partial differential equations with more complicated conditions on the boundary (e.g., involving derivatives of the function), or for solving calculus of variation problems in more than one dimension (for example, minimal surface problems or minimal curvature problems).
Problem statement
The original problem statement in its entirety is as follows:
An important problem closely connected with the foregoing [referring to Hilbert's nineteenth problem] is the question concerning the existence of solutions of partial differential equations when the values on the boundary of the region are prescribed. This problem is solved in the main by the keen methods of H. A. Schwarz, C. Neumann, and Poincaré for the differential equation of the potential. These methods, however, seem to be generally not capable of direct extension to the case where along the boundary there are prescribed either the differential coefficients or any relations between these and the values of the function. Nor can they be extended immediately to the case where the inquiry is not for potential surfaces but, say, for surfaces of least area, or surfaces of constant positive gaussian curvature, which are to pass through a prescribed twisted curve or to stretch over a given ring surface. It is my conviction that it will be possible to prove these existence theorems by means of a general principle whose nature is indicated by Dirichlet's principle. This general principle will then perhaps enable us to approach the question: Has not every regular variation problem a solution, provided certain assumptions regarding the given boundary conditions are satisfied (say that the functions concerned in these boundary conditions are continuous and have in sections one or more derivatives), and provided also if need be that the notion of a solution shall be suitably extended?
Boundary value problems
In the field of differential equations, a boundary value problem is a differential equation together with a set of additional constraints, called the boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions.
To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed.
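A standard textbook illustration (not one of Hilbert's own examples) is the one-dimensional Poisson problem: find u on [0, 1] such that

u''(x) = f(x) for x ∈ (0, 1), subject to u(0) = a, u(1) = b.

For continuous f this problem has a unique solution, and that solution depends continuously on the data f, a and b, so the problem is well posed.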
References
Calculus of variations | Hilbert's twentieth problem | Mathematics | 551 |
40,186,920 | https://en.wikipedia.org/wiki/Public%20Interest%20Research%20Group%20in%20Michigan | Public Interest Research Group in Michigan (PIRGIM) is a non-profit organization that is part of the state PIRG organizations.
PIRGIM has a history of working on a variety of issues, such as cleaning Michigan's waterways, toy safety, and chemical safety.
History
The PIRGs emerged in the early 1970s on U.S. college campuses. The PIRG model was proposed in the book Action for a Change by Ralph Nader and Donald Ross.
Among other early accomplishments, the PIRGs were responsible for much of the container deposit legislation in the United States, also known as "bottle bills."
Notable members and alumni
Phil Radford
Affiliate organizations
The Fund for Public Interest Research
Environment Michigan
References
External links
U.S. Public Interest Research Group (U.S. PIRG)
The Student PIRGs
The Public Interest Network
Non-profit organizations based in Michigan
Public Interest Research Groups
Renewable energy commercialization
Environmental ethics
Consumer rights organizations | Public Interest Research Group in Michigan | Environmental_science | 195 |
10,919,454 | https://en.wikipedia.org/wiki/XXTEA | In cryptography, Corrected Block TEA (often referred to as XXTEA) is a block cipher designed to correct weaknesses in the original Block TEA.
XXTEA is vulnerable to a chosen-plaintext attack requiring 2^59 queries and negligible work. See cryptanalysis below.
The cipher's designers were Roger Needham and David Wheeler of the Cambridge Computer Laboratory, and the algorithm was presented in an unpublished technical report in October 1998 (Wheeler and Needham, 1998). It is not subject to any patents.
Formally speaking, XXTEA is a consistent incomplete source-heavy heterogeneous UFN (unbalanced Feistel network) block cipher. XXTEA operates on variable-length blocks that are some arbitrary multiple of 32 bits in size (minimum 64 bits). The number of full cycles depends on the block size, but there are at least six (rising to 32 for small block sizes). The original Block TEA applies the XTEA round function to each word in the block and combines it additively with its leftmost neighbour. The slow diffusion rate of the decryption process was immediately exploited to break the cipher. Corrected Block TEA uses a more involved round function which makes use of both immediate neighbours in processing each word in the block.
XXTEA is likely to be more efficient than XTEA for longer messages.
Needham & Wheeler make the following comments on the use of Block TEA:
For ease of use and general security the large block version is to be preferred when applicable for the following reasons.
A single bit change will change about one half of the bits of the entire block, leaving no place where the changes start.
There is no choice of mode involved.
Even if the correct usage of always changing the data sent (possibly by a message number) is employed, only identical messages give the same result and the information leakage is minimal.
The message number should always be checked as this redundancy is the check against a random message being accepted.
Cut and join attacks do not appear to be possible.
If it is not acceptable to have very long messages, they can be broken into chunks say of 60 words and chained analogously to the methods used for DES.
However, due to the incomplete nature of the round function, two large ciphertexts of 53 or more 32-bit words identical in all but 12 words can be found by a simple brute-force collision search requiring 2^(96−N) memory, 2^N time and 2^N + 2^(96−N) chosen plaintexts, in other words with a total time×memory complexity of 2^96, which is actually 2^(wordsize × fullcycles / 2) for any such cipher. It is currently unknown if such partial collisions pose any threat to the security of the cipher. Eight full cycles would raise the bar for such collision search above the complexity of parallel brute-force attacks.
The unusually small size of the XXTEA algorithm would make it a viable option in situations where there are extreme constraints e.g. legacy hardware systems (perhaps embedded) where the amount of available RAM is minimal, or alternatively single-board computers such as the Raspberry Pi, Banana Pi or Arduino.
Cryptanalysis
An attack published in 2010 by E. Yarrkov presents a chosen-plaintext attack against full-round XXTEA with wide block, requiring 2^59 queries for a block size of 212 bytes or more, and negligible work. It is based on differential cryptanalysis.
To encipher 212 bytes or more, the algorithm performs just six full cycles, and carefully chosen bit patterns allow the avalanche effect to be detected and analyzed.
Reference code
The original formulation of the Corrected Block TEA algorithm, published by David Wheeler and Roger Needham, is as follows:
#define MX ((z>>5^y<<2) + (y>>3^z<<4) ^ (sum^y) + (k[p&3^e]^z))
long btea(long* v, long n, long* k) {
unsigned long z=v[n-1], y=v[0], sum=0, e, DELTA=0x9e3779b9;
long p, q ;
if (n > 1) { /* Coding Part */
q = 6 + 52/n;
while (q-- > 0) {
sum += DELTA;
e = (sum >> 2) & 3;
for (p=0; p<n-1; p++) y = v[p+1], z = v[p] += MX;
y = v[0];
z = v[n-1] += MX;
}
return 0 ;
} else if (n < -1) { /* Decoding Part */
n = -n;
q = 6 + 52/n;
sum = q*DELTA ;
while (sum != 0) {
e = (sum >> 2) & 3;
for (p=n-1; p>0; p--) z = v[p-1], y = v[p] -= MX;
z = v[n-1];
y = v[0] -= MX;
sum -= DELTA;
}
return 0;
}
return 1;
}
According to Needham and Wheeler:

BTEA will encode or decode n words as a single block where n > 1
 v is the n word data vector
 k is the 4 word key
 n is negative for decoding
 if n is zero result is 1 and no coding or decoding takes place, otherwise the result is zero
 assumes 32 bit 'long' and same endian coding and decoding

Note that the initialization of z is undefined behavior for n < 1, which may cause a segmentation fault or other unwanted behavior – it would be better placed inside the 'Coding Part' block. Also, in the definition of MX some programmers would prefer to use bracketing to clarify operator precedence.
A clarified version including those improvements is as follows:
#include <stdint.h>
#define DELTA 0x9e3779b9
#define MX (((z>>5^y<<2) + (y>>3^z<<4)) ^ ((sum^y) + (key[(p&3)^e] ^ z)))
void btea(uint32_t *v, int n, uint32_t const key[4]) {
uint32_t y, z, sum;
unsigned p, rounds, e;
if (n > 1) { /* Coding Part */
rounds = 6 + 52/n;
sum = 0;
z = v[n-1];
do {
sum += DELTA;
e = (sum >> 2) & 3;
for (p=0; p<n-1; p++) {
y = v[p+1];
z = v[p] += MX;
}
y = v[0];
z = v[n-1] += MX;
} while (--rounds);
} else if (n < -1) { /* Decoding Part */
n = -n;
rounds = 6 + 52/n;
sum = rounds*DELTA;
y = v[0];
do {
e = (sum >> 2) & 3;
for (p=n-1; p>0; p--) {
z = v[p-1];
y = v[p] -= MX;
}
z = v[n-1];
y = v[0] -= MX;
sum -= DELTA;
} while (--rounds);
}
}
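As a usage sketch for the clarified routine (the key and data below are arbitrary values chosen for illustration, not official test vectors):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* btea() as defined above */

int main(void) {
    uint32_t key[4]  = {0x01234567, 0x89abcdef, 0xfedcba98, 0x76543210};
    uint32_t data[4] = {1, 2, 3, 4};
    uint32_t orig[4];
    memcpy(orig, data, sizeof data);

    btea(data, 4, key);   /* n > 1: encode four 32-bit words in place */
    btea(data, -4, key);  /* n < -1: decode the same buffer in place */

    printf("round trip %s\n", memcmp(orig, data, sizeof data) ? "FAILED" : "ok");
    return 0;
}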
See also
RC4: A stream cipher that, just like XXTEA, is designed to be very simple to implement.
XTEA: Block TEA's precursor.
TEA: XTEA's precursor.
References
External links
a JavaScript implementation
a PHP implementation
a CL implementation
Broken block ciphers
Computer security in the United Kingdom
Feistel ciphers
History of computing in the United Kingdom
Science and technology in Cambridgeshire
Articles with example C code | XXTEA | Technology | 1,614 |
1,573,690 | https://en.wikipedia.org/wiki/Sling%20%28implant%29 | In surgery, a sling is an implant that is intended to provide additional support to a particular tissue. It usually consists of a synthetic mesh material in the shape of a narrow ribbon but sometimes a biomaterial (bovine or porcine) or the patient’s own tissue. The ends are usually attached to a fixed body part such as the skeleton.
In stress incontinence
In stress incontinence, a sling is a potential method of treatment, and is placed under the urethra through one vaginal incision and two small abdominal incisions. The idea is to replace the deficient pelvic floor muscles and provide a backboard of support under the urethra.
For this purpose, a Pelvicol implant (a porcine dermal sling) had a patient-determined success rate comparable with that of the tension-free vaginal tape (TVT).
In female genital prolapse
Slings can also be used in the surgical management of female genital prolapse.
Chin sling
A chin sling is a synthetic lining used in chin augmentation to lift the tissues under the chin and neck. The sling is surgically implanted under the skin of the chin and hooked behind the ears, giving a more youthful appearance, and reversing the effects of aging such as accumulated fat, lost skin elasticity and stretched muscle lining, all of which cause the neck to droop and sag.
References
Implants (medicine)
Oral and maxillofacial surgery
Otorhinolaryngology
Plastic surgery
Medical_devices | Sling (implant) | Biology | 308 |
43,269,516 | https://en.wikipedia.org/wiki/Sample%20complexity | The sample complexity of a machine learning algorithm represents the number of training-samples that it needs in order to successfully learn a target function.
More precisely, the sample complexity is the number of training-samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1.
There are two variants of sample complexity:
The weak variant fixes a particular input-output distribution;
The strong variant takes the worst-case sample complexity over all input-output distributions.
The No free lunch theorem, discussed below, proves that, in general, the strong sample complexity is infinite, i.e. that there is no algorithm that can learn the globally-optimal target function using a finite number of training samples.
However, if we are only interested in a particular class of target functions (e.g., only linear functions) then the sample complexity is finite, and it depends linearly on the VC dimension of the class of target functions.
Definition
Let X be a space which we call the input space, and Y be a space which we call the output space, and let Z denote the product X × Y. For example, in the setting of binary classification, X is typically a finite-dimensional vector space and Y is the set {−1, 1}.
Fix a hypothesis space H of functions h : X → Y. A learning algorithm over H is a computable map from Z^*, the set of finite sequences of samples, to H. In other words, it is an algorithm that takes as input a finite sequence of training samples and outputs a function from X to Y. Typical learning algorithms include empirical risk minimization, without or with Tikhonov regularization.
Fix a loss function L : Y × Y → R≥0, for example, the square loss L(y, y′) = (y − y′)², where y′ = h(x). For a given distribution ρ on X × Y, the expected risk of a hypothesis h (a function) is

E(h) = E_ρ[L(h(x), y)] = ∫ L(h(x), y) dρ(x, y).
In our setting, we have h = A(S_n), where A is a learning algorithm and S_n = ((x_1, y_1), …, (x_n, y_n)) is a sequence of vectors which are all drawn independently from ρ. Define the optimal risk E*_H = inf_{h ∈ H} E(h). Set h_n = A(S_n) for each sample size n. h_n is a random variable and depends on the random variable S_n, which is drawn from the distribution ρ^n. The algorithm A is called consistent if E(h_n) probabilistically converges to E*_H. In other words, for all ε, δ > 0, there exists a positive integer N, such that, for all sample sizes n ≥ N, we have

Pr[E(h_n) − E*_H ≥ ε] < δ.
The sample complexity of A is then the minimum N for which this holds, as a function of ρ, ε, and δ. We write the sample complexity as N(ρ, ε, δ) to emphasize that this value of N depends on ρ, ε, and δ. If A is not consistent, then we set N(ρ, ε, δ) = ∞. If there exists an algorithm for which N(ρ, ε, δ) is finite, then we say that the hypothesis space H is learnable.
In other words, the sample complexity N(ρ, ε, δ) defines the rate of consistency of the algorithm: given a desired accuracy ε and confidence δ, one needs to sample N(ρ, ε, δ) data points to guarantee that the risk of the output function is within ε of the best possible, with probability at least 1 − δ.
In probably approximately correct (PAC) learning, one is concerned with whether the sample complexity is polynomial, that is, whether N(ρ, ε, δ) is bounded by a polynomial in 1/ε and 1/δ. If N(ρ, ε, δ) is polynomial for some learning algorithm, then one says that the hypothesis space H is PAC-learnable. This is a stronger notion than being learnable.
Unrestricted hypothesis space: infinite sample complexity
One can ask whether there exists a learning algorithm so that the sample complexity is finite in the strong sense, that is, there is a bound on the number of samples needed so that the algorithm can learn any distribution over the input-output space with a specified target error. More formally, one asks whether there exists a learning algorithm A, such that, for all ε, δ > 0, there exists a positive integer N such that for all n ≥ N, we have

sup_ρ Pr[E(h_n) − E*_H ≥ ε] < δ,

where h_n = A(S_n), with S_n drawn from ρ^n as above. The No Free Lunch Theorem says that without restrictions on the hypothesis space H, this is not the case, i.e., there always exist "bad" distributions for which the sample complexity is arbitrarily large.
Thus, in order to make statements about the rate of convergence of the quantity

sup_ρ Pr[E(h_n) − E*_H ≥ ε],
one must either
constrain the space of probability distributions ρ, e.g. via a parametric approach, or
constrain the space of hypotheses H, as in distribution-free approaches.
Restricted hypothesis space: finite sample-complexity
The latter approach leads to concepts such as VC dimension and Rademacher complexity, which control the complexity of the space H. A smaller hypothesis space introduces more bias into the inference process, meaning that E*_H may be greater than the best possible risk in a larger space. However, by restricting the complexity of the hypothesis space it becomes possible for an algorithm to produce more uniformly consistent functions. This trade-off leads to the concept of regularization.
It is a theorem from VC theory that the following three statements are equivalent for a hypothesis space H:
H is PAC-learnable.
The VC dimension of H is finite.
H is a uniform Glivenko–Cantelli class.
This gives a way to prove that certain hypothesis spaces are PAC learnable, and by extension, learnable.
An example of a PAC-learnable hypothesis space
Let X = R², and let H be the space of affine functions on X, that is, functions of the form x ↦ ⟨w, x⟩ + b for some w ∈ R², b ∈ R. This is the linear classification with offset learning problem. Now, four coplanar points in a square cannot be shattered by any affine function, since no affine function can be positive on two diagonally opposite vertices and negative on the remaining two. Thus, the VC dimension of H is 3, so it is finite. It follows by the above characterization of PAC-learnable classes that H is PAC-learnable, and by extension, learnable.
Sample-complexity bounds
Suppose H is a class of binary functions (functions to {0, 1}). Then, H is (ε, δ)-PAC-learnable with a sample of size:

N = O((D + log(1/δ)) / ε),

where D is the VC dimension of H. Moreover, any (ε, δ)-PAC-learning algorithm for H must have sample-complexity:

N = Ω((D + log(1/δ)) / ε).

Thus, the sample-complexity is a linear function of the VC dimension of the hypothesis space.
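To make the binary-case bound concrete, a small helper can compute a suggested sample size; the constant factor below is arbitrary, since the theorem specifies the bound only up to O(·).

#include <math.h>
#include <stdio.h>

/* Sample size suggested by N = O((D + log(1/delta)) / epsilon).
   C is a hypothetical constant; the theorem hides it inside O(.).
   Link with -lm. */
long pac_sample_size(int vc_dim, double epsilon, double delta) {
    const double C = 4.0;
    return (long)ceil(C * (vc_dim + log(1.0 / delta)) / epsilon);
}

int main(void) {
    /* Affine classifiers in the plane (VC dimension 3), 5% error, 95% confidence. */
    printf("%ld samples\n", pac_sample_size(3, 0.05, 0.05));
    return 0;
}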
Suppose H is a class of real-valued functions with range in [0, T]. Then, H is (ε, δ)-PAC-learnable with a sample of size:

N = O((T²/ε²)(P log(T/ε) + log(1/δ))),

where P is Pollard's pseudo-dimension of H.
Other settings
In addition to the supervised learning setting, sample complexity is relevant to semi-supervised learning problems including active learning, where the algorithm can ask for labels to specifically chosen inputs in order to reduce the cost of obtaining many labels. The concept of sample complexity also shows up in reinforcement learning, online learning, and unsupervised algorithms, e.g. for dictionary learning.
Efficiency in robotics
A high sample complexity means that many calculations are needed for running a Monte Carlo tree search. It is equivalent to a model-free brute force search in the state space. In contrast, a high-efficiency algorithm has a low sample complexity. Possible techniques for reducing the sample complexity are metric learning and model-based reinforcement learning.
See also
Active learning (machine learning)
References
Machine learning | Sample complexity | Engineering | 1,379 |
1,948,276 | https://en.wikipedia.org/wiki/Tetramethylsilane | Tetramethylsilane (abbreviated as TMS) is the organosilicon compound with the formula Si(CH3)4. It is the simplest tetraorganosilane. Like all silanes, the TMS framework is tetrahedral. TMS is a building block in organometallic chemistry but also finds use in diverse niche applications.
Synthesis and reactions
TMS is a by-product of the production of methyl chlorosilanes, SiClx(CH3)4−x, via the direct process of reacting methyl chloride with silicon. The more useful products of this reaction are those for x = 1 (trimethylsilyl chloride), 2 (dimethyldichlorosilane), and 3 (methyltrichlorosilane).
TMS undergoes deprotonation upon treatment with butyllithium to give (H3C)3SiCH2Li. The latter, trimethylsilylmethyl lithium, is a relatively common alkylating agent.
In chemical vapor deposition, TMS is the precursor to silicon dioxide or silicon carbide, depending on the deposition conditions. In the formation of silicon carbide, carbosilanes, such as 1,3,5,7-tetramethyl-1,3,5,7-tetrasilaadamantane, are observed as intermediates.
Uses in NMR spectroscopy
Tetramethylsilane is the accepted internal standard for calibrating chemical shift for 1H, 13C and 29Si NMR spectroscopy in organic solvents (where TMS is soluble). In water, where it is not soluble, sodium salts of DSS, 2,2-dimethyl-2-silapentane-5-sulfonate, are used instead. Because of its high volatility, TMS can easily be evaporated, which is convenient for recovery of samples analyzed by NMR spectroscopy.
Because all twelve hydrogen atoms in a tetramethylsilane molecule are equivalent, its 1H NMR spectrum consists of a singlet.
The chemical shift of this singlet is assigned as δ 0, and all other chemical shifts are determined relative to it. The majority of compounds studied by 1H NMR spectroscopy absorb downfield of the TMS signal, thus there is usually no interference between the standard and the sample. Similarly, all four carbon atoms in a tetramethylsilane molecule are equivalent.
In a fully decoupled 13C NMR spectrum, the carbon in the tetramethylsilane appears as a singlet, allowing for easy identification. The chemical shift of this singlet is also set to be δ 0 in the 13C spectrum, and all other chemical shifts are determined relative to it.
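For reference, the chemical shift δ on this scale is defined, in parts per million, relative to the reference (TMS) resonance frequency:

δ = 10^6 × (ν_sample − ν_TMS) / ν_TMS

so TMS sits at δ = 0 by construction, and a positive δ means the nucleus resonates at a higher frequency ("downfield") than TMS.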
References
Carbosilanes
Trimethylsilyl compounds | Tetramethylsilane | Chemistry | 586 |
2,984,476 | https://en.wikipedia.org/wiki/Kaup%E2%80%93Kupershmidt%20equation | The Kaup–Kupershmidt equation (named after David J. Kaup and Boris Abram Kupershmidt) is the nonlinear fifth-order partial differential equation
It is the first equation in a hierarchy of integrable equations with the Lax operator

L = ∂_x³ + 2u ∂_x + u_x.
It has properties similar (but not identical) to those of the better-known KdV hierarchy in which the Lax operator has order 2.
References
External links
Partial differential equations
Integrable systems | Kaup–Kupershmidt equation | Physics,Mathematics | 94 |
38,012,550 | https://en.wikipedia.org/wiki/Beta%20Octantis | Beta Octantis, Latinized from β Octantis, is a probable astrometric binary star system in the southern circumpolar constellation of Octans. It is faintly visible to the naked eye with an apparent visual magnitude of 4.13. Based upon an annual parallax shift of 21.85 mas as seen from Earth, it is located about 149 light years from the Sun. It is moving away from the Sun with a radial velocity of +19 km/s.
Based upon a stellar classification of A9IV-V, the visible component is an evolving, white-hued A-type star with a spectrum that shows mixed traits of a main sequence and a subgiant star. It has an estimated 2.27 times the mass of the Sun and 3.2 times the Sun's radius. The star is around 500 million years old with a projected rotational velocity of 49 km/s. It is radiating 42 times the Sun's luminosity from its photosphere at an effective temperature of 8,006 K.
References
A-type subgiants
A-type main-sequence stars
Octans
Octantis, Beta
Durchmusterung objects
214846
112405
8630
Astrometric binaries | Beta Octantis | Astronomy | 255 |
2,109,782 | https://en.wikipedia.org/wiki/Requirements%20management | Requirements management is the process of documenting, analyzing, tracing, prioritizing and agreeing on requirements and then controlling change and communicating to relevant stakeholders. It is a continuous process throughout a project. A requirement is a capability to which a project outcome (product or service) should conform.
Overview
The purpose of requirements management is to ensure that an organization documents, verifies, and meets the needs and expectations of its customers and internal or external stakeholders. Requirements management begins with the analysis and elicitation of the objectives and constraints of the organization. Requirements management further includes supporting planning for requirements, integrating requirements and the organization for working with them (attributes for requirements), as well as relationships with other information delivering against requirements, and changes for these.
The traceability thus established is used in managing requirements to report back fulfilment of company and stakeholder interests in terms of compliance, completeness, coverage, and consistency. Traceabilities also support change management as part of requirements management in understanding the impacts of changes through requirements or other related elements (e.g., functional impacts through relations to functional architecture), and facilitating introducing these changes.
Requirements management involves communication between the project team members and stakeholders, and adjustment to requirements changes throughout the course of the project. To prevent one class of requirements from overriding another, constant communication among members of the development team is critical. For example, in software development for internal applications, the business has such strong needs that it may ignore user requirements, or believe that in creating use cases, the user requirements are being taken care of.
Traceability
Requirements traceability is concerned with documenting the life of a requirement. It should be possible to trace back to the origin of each requirement and every change made to the requirement should therefore be documented in order to achieve traceability. Even the use of the requirement after the implemented features have been deployed and used should be traceable.
Requirements come from different sources, like the business person ordering the product, the marketing manager and the actual user. These people all have different requirements for the product. Using requirements traceability, an implemented feature can be traced back to the person or group that wanted it during the requirements elicitation. This can, for example, be used during the development process to prioritize the requirement, determining how valuable the requirement is to a specific user. It can also be used after the deployment when user studies show that a feature is not used, to see why it was required in the first place.
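A minimal sketch of what such a traceability record might hold is shown below; the field names are illustrative, not those of any particular tool.

#include <stdio.h>

struct Requirement {
    int id;
    const char *title;
    const char *origin;       /* stakeholder or document it came from */
    int parent_id;            /* parent requirement, or -1 if top-level */
    const char *verified_by;  /* test case identifier, or NULL if none yet */
};

int main(void) {
    struct Requirement reqs[] = {
        {1, "System shall export reports", "Marketing brief",      -1, NULL},
        {2, "Export shall support CSV",    "User interview notes",  1, "TC-017"},
    };
    for (int i = 0; i < 2; i++)
        printf("REQ-%d \"%s\" <- %s (parent %d, verified by %s)\n",
               reqs[i].id, reqs[i].title, reqs[i].origin, reqs[i].parent_id,
               reqs[i].verified_by ? reqs[i].verified_by : "none");
    return 0;
}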
Requirements activities
At each stage in a development process, there are key requirements management activities and methods. To illustrate, consider a standard five-phase development process with Investigation, Feasibility, Design, Construction and Test, and Release stages.
Investigation
In Investigation, the first three classes of requirements are gathered from the users, from the business and from the development team. In each area, similar questions are asked; what are the goals, what are the constraints, what are the current tools or processes in place, and so on. Only when these requirements are well understood can functional requirements be developed.
In the common case, requirements cannot be fully defined at the beginning of the project. Some requirements will change, either because they simply weren’t extracted, or because internal or external forces at work affect the project in mid-cycle.
The deliverable from the Investigation stage is a requirements document that has been approved by all members of the team. Later, in the thick of development, this document will be critical in preventing scope creep or unnecessary changes. As the system develops, each new feature opens a world of new possibilities, so the requirements specification anchors the team to the original vision and permits a controlled discussion of scope change.
While many organizations still use only documents to manage requirements, others manage their requirements baselines using software tools. These tools allow requirements to be managed in a database, and usually have functions to automate traceability (e.g., by allowing electronic links to be created between parent and child requirements, or between test cases and requirements), electronic baseline creation, version control, and change management. Usually such tools contain an export function that allows a specification document to be created by exporting the requirements data into a standard document application.
Feasibility
In the Feasibility stage, costs of the requirements are determined. For user requirements, the current cost of work is compared to the future projected costs once the new system is in place. Questions such as these are asked: “What are data entry errors costing us now?” Or “What is the cost of scrap due to operator error with the current interface?” Actually, the need for the new tool is often recognized as these questions come to the attention of financial people in the organization.
Business costs would include, “What department has the budget for this?” “What is the expected rate of return on the new product in the marketplace?” “What’s the internal rate of return in reducing costs of training and support if we make a new, easier-to-use system?”
Technical costs are related to software development costs and hardware costs. “Do we have the right people to create the tool?” “Do we need new equipment to support expanded software roles?” This last question is an important type. The team must inquire into whether the newest automated tools will add sufficient processing power to shift some of the burden from the user to the system in order to save people time.
The question also points out a fundamental point about requirements management. A human and a tool form a system, and this realization is especially important if the tool is a computer or a new application on a computer. The human mind excels in parallel processing and interpretation of trends with insufficient data. The CPU excels in serial processing and accurate mathematical computation. The overarching goal of the requirements management effort for a software project would thus be to make sure the work being automated gets assigned to the proper processor. For instance, “Don’t make the human remember where she is in the interface. Make the interface report the human’s location in the system at all times.” Or “Don’t make the human enter the same data in two screens. Make the system store the data and fill in the second screen as needed.”
The deliverable from the Feasibility stage is the budget and schedule for the project.
Design
Assuming that costs are accurately determined and benefits to be gained are sufficiently large, the project can proceed to the Design stage. In Design, the main requirements management activity is comparing the results of the design against the requirements document to make sure that work is staying in scope.
Again, flexibility is paramount to success. Here's a classic story of scope change in mid-stream that actually worked well. Ford auto designers in the early 1980s were expecting gasoline prices to hit $3.18 per gallon by the end of the decade. Midway through the design of the Ford Taurus, prices had settled at around $1.50 a gallon. The design team decided they could build a larger, more comfortable, and more powerful car if the gas prices stayed low, so they redesigned the car. The Taurus launch set nationwide sales records when the new car came out, primarily because it was so roomy and comfortable to drive.
In most cases, however, departing from the original requirements to that degree does not work. So the requirements document becomes a critical tool that helps the team make decisions about design changes.
Construction and test
In the construction and testing stage, the main activity of requirements management is to make sure that work and cost stay within schedule and budget, and that the emerging tool does in fact meet the requirements set. A main tool used in this stage is prototype construction and iterative testing. For a software application, the user interface can be created on paper and tested with potential users, while the framework of the software is being built. Results of these tests are recorded in a user interface design guide and handed off to the design team when they are ready to develop the interface.
An important aspect of this stage is verification. This effort verifies that the requirement has been implemented correctly. There are four methods of verification: analysis, inspection, testing, and demonstration. Numerical software execution results or throughput on a network test, for example, provide analytical evidence that the requirement has been met. Inspection of vendor documentation or spec sheets also verifies requirements. Testing or demonstrating the software in a lab environment also verifies the requirements; a test type of verification will occur when test equipment not normally part of the lab (or system under test) is used. Comprehensive test procedures outline the steps and their expected results, clearly identifying what is to be seen as a result of performing each step. After the step or set of steps is completed, the last step's expected result states what has been seen and identifies which requirement or requirements have been verified (by number). The requirement number, title, and verbiage are tied together in another location in the test document.
Requirements change management
Hardly any software development project is completed without some changes being asked of it. Changes can stem from changes in the environment in which the finished product is envisaged to be used, business changes, regulation changes, errors in the original definition of requirements, limitations in technology, changes in the security environment, and so on. The activities of requirements change management include receiving change requests from the stakeholders, recording the received change requests, analyzing and determining the desirability and process of implementation, implementing the change request, quality assurance for the implementation, and closing the change request. The change request data are then compiled and analyzed, and appropriate metrics are derived and dovetailed into the organizational knowledge repository.
Release
Requirements management does not end with product release. From that point on, the data coming in about the application’s acceptability is gathered and fed into the Investigation phase of the next generation or release. Thus the process begins again.
Tooling
Acquiring a tool to support requirements management is no trivial matter and it needs to be undertaken as part of a broader process improvement initiative. It has long been a perception that a tool, once acquired and installed on a project, can address all of its requirements management-related needs. However, the purchase or development of a tool to support requirements management can be a costly decision. Organizations may get burdened with expensive support contracts, disproportionate effort can get misdirected towards learning to use the tool and configuring it to address particular needs, and inappropriate use that can lead to erroneous decisions. Organizations should follow an incremental process to make decisions about tools to support their particular needs from within the wider context of their development process and tooling. The tools are presented in Requirements traceability.
See also
Requirement
Requirements engineering
Requirements analysis
Requirements traceability
Requirements Engineering Specialist Group
Process area (CMMI):
Requirements Development (RD)
Requirements Management (REQM)
Product requirements document
Software quality
References
Further reading
Colin Hood, Simon Wiedemann, Stefan Fichtinger, Urte Pautz Requirements Management: Interface Between Requirements Development and All Other Engineering Processes Springer, Berlin 2007,
Requirements Management - A Practice Guide, PMI
External links
U.K. Office of Government Commerce (OGC) - Requirements management (archive; OGC website ceased activity on 1 October 2011)
CDC Unified Process Practices Guide - Requirements Management
International Requirements Engineering Board (IREB)
What is Requirements Management?
Product lifecycle management
Systems engineering
Software requirements
Systems Modeling Language
Management cybernetics | Requirements management | Engineering | 2,326 |
65,673,469 | https://en.wikipedia.org/wiki/Short-haul%20flight%20ban | A short-haul flight ban is a prohibition imposed by governments on airlines to establish and maintain a flight connection over a certain distance, or by organisations or companies on their employees for business travel using existing flight connections over a certain distance, in order to mitigate the environmental impact of aviation (most notably to reduce anthropogenic greenhouse gas emissions which is the leading cause of climate change). In the 21st century, several governments, organisations and companies have imposed restrictions and even prohibitions on short-haul flights, stimulating or pressuring travellers to opt for more environmentally friendly means of transportation, especially trains.
A portion of air travellers on short-haul routes connect to other flights at their destination. A blanket ban would have a significant impact on these travellers, as inadequate rail connectivity between airports and cities' main railway hubs generally results in longer overall travel times and greater disruption.
Definition
There is no consensus on what constitutes a 'short-haul flight'. In public discourse such as debates and surveys, the term is often not explicitly defined. The International Air Transport Association (IATA) defines a short-haul flight as "a flight with duration of 6 hours or fewer", and a long-haul flight takes longer than 6 hours. In practice, governments and organisations have set different standards, either according to the absolute distance between cities as the crow flies in hundreds of kilometres, or in terms of how many hours it would take a train to cover the same distance. As one example, the University of Groningen set limits according to both standards, namely prohibiting its personnel from flying distances shorter than 500 kilometres, or shorter than can be travelled by train in 6 hours. There was some confusion on how to calculate and reconcile both limits: as the crow flies, the distance between Groningen and Berlin is 465 km, but the road connection 577 km; moreover, the train travel time varies from 5.40 hours to 6.30 hours.
Overview
Governments
Governments generally impose short-haul flight bans on all citizens and businesses operating within their territory. Some exceptions for emergency situations are granted.
Austria: As part of its COVID-19 crisis support programme for Austrian Airlines in June 2020, the conservative–green coalition government introduced a special tax of 30 euros on airline tickets for flights spanning less than 350 kilometres (an unprecedented environmental measure within the EU). The Lufthansa Group further agreed to drop domestic connections that could be travelled within three hours by train. As an example, as of November 2020, the train travel time between Vienna and Graz was still too long (3 hours and 1 minute) to replace flying (35 minutes, not counting security check and waiting times), but many other short-distance flights were replaced by train connections. Since Austrian Airlines was the only carrier to offer these short-haul connections at the time of the agreement, this constitutes a de facto flight ban, even though no law has been enacted to ban such flights for airlines in general.
France: On 3 June 2019, French MPs proposed to prohibit airline connections covering distances that could be travelled within 2.5 hours by train. French Finance Minister Bruno Le Maire stated in April 2020 and repeated in May 2020 that negotiations between the government and Air France–KLM on such a 2.5 hour short-distance ban were underway. On 9 June 2020, as part of its COVID-19 crisis support programme for France's aviation sector, Le Maire confirmed that 2.5 hour short-distance flights would be prohibited, while Air France–KLM's domestic flights would be reduced by 40%.
Netherlands: In June 2013, Dutch MP Liesbeth van Tongeren (GreenLeft, previously Greenpeace Netherlands director) proposed to prohibit domestic flights in the Netherlands with the argument that they are needlessly inefficient, polluting and expensive, but Environment Secretary Wilma Mansveld (Labour Party) said such a ban would violate EU regulations that allow airlines to fly domestically. In March 2019, the House of Representatives of the Netherlands voted to prohibit commercial flights between Amsterdam Airport Schiphol and Brussels Airport (Zaventem). This distance of about 150 kilometres was covered by five return flights a day, most of them feeder flights: passengers from Brussels go to Amsterdam to embark on a long-distance flight from there, or vice versa. However, Infrastructure Minister Cora van Nieuwenhuizen (VVD) stated that such a ban was contrary to the European Commission's free market regulations, and it was thus not implemented.
Belgium (Wallonia): In 2006, Walloon Minister of Transport André Antoine prohibited airline Jet4you from making a stopover in Liège during a Charleroi–Casablanca flight, arguing that short-haul flights of fewer than 100 kilometres caused too much environmental damage. In December 2006, the European Commission confirmed that the ban did not violate any aviation agreements with Morocco, with Commissioner Jacques Barrot stating: 'The national authorities are allowed to take such measures, especially for environmental reasons.' Jet4you sued the Walloon Government, but in November 2008 the Court of First Instance in Namur confirmed the legality of the short-haul ban, rejecting Jet4you's damages claim and ordering the airline to pay 15,000 euros for court proceedings. Minister Antoine marked this as a victory and again urged the Federal Government of Belgium to introduce a countrywide prohibition on short-haul flights (which had been considered by the previous Federal Transport Minister, Renaat Landuyt).
Sweden: There is no ban against short-haul flights, but several have disappeared due to lack of subsidy or competition from train and road travel. Several domestic air routes are subsidised by the government in order to have reasonable travel times between the capital and remote parts of the country. A principle has been established for when to subsidise air routes: only when no other way of travelling, e.g. an unsubsidised air route or any train route, allows a travel time of four hours between Stockholm Central Station and the municipality's centre. The idea is that same-day business travel should be reasonably possible. Some air routes, and therefore airports, have been closed down as a result. Storuman Airport was closed because Vilhelmina Airport could be used for Storuman. Mora Airport was closed because train travel time dropped below four hours. Other air routes remained unsubsidised, operating on a commercial basis with municipal support, but closed during the period 2000–2020, such as those from Stockholm to Borlänge, Jönköping, Karlstad, Linköping and Örebro; they faced competition from improved railways and roads, or were hit by the COVID-19 pandemic. An air tax has also been introduced. Two commercially operated air routes, from Stockholm to Gothenburg and Växjö, have (as of 2023) competing train travel times below 3 hours 30 minutes.
Organisations and businesses
Organisations, including government organisations and NGOs, as well as commercial companies, sometimes impose short-haul restrictions on their own employees for work-related travel, usually recommending or ordering personnel to take the train instead. Some exceptions may be granted for emergencies or destinations that are difficult to reach by train. If an employee's flight does not comply with the rules set by their employer, the travel costs will not be reimbursed.
Greater London Authority: On 12 March 2008, Mayor of London Ken Livingstone banned short-haul flights for all 20,000 employees of the Greater London Authority (alias City Hall), Transport for London and London Development Agency. A City Hall report published that day stated that all travel within the UK and most continental European cities should be undertaken by rail, unless such a journey would take longer than 6 hours. A 2010 Transport for London report noted: "As train travel is less carbon-intensive than travelling by airplane many organisations now implement a ban on all short-haul flights where an equivalent journey by train of less than six hours is available".
BBC Worldwide (now BBC Studios): The British Broadcasting Corporation decided in October 2009 that all BBC Worldwide staff members were no longer allowed to fly domestically or take short-haul flights at the company's expense, unless travelling by train added more than three hours to their journeys. Additionally, they had to formally explain why a meeting could not be held using one of the BBC's five videoconferencing suites before they were cleared to book a long-haul flight. The measures were taken to reduce environmental impact and cut costs.
Environment Agency: The UK government's Bristol-based Environment Agency banned its staff from making short-haul flights in June 2010, covering all of England and Wales and several destinations in continental Europe including Paris and Brussels, mandating them to travel by train instead; Edinburgh and Glasgow would still be allowed by airplane "in exceptional circumstances". The Agency had already reduced its business car mileage by 24% in 2006–2010 and wanted to set the right example in aviation, too, in part addressing public criticism over the Department of Energy and Climate Change's many avoidable domestic flights.
Catholic Private University Linz: Since 2010, the KU Linz has reimbursed staff flights "only if the most convenient train connection exceeds a travel time of 8 hours and if, in addition, emissions have been compensated via atmosfair".
Klarna Bank AB: After the flight shame movement emerged in Sweden in 2017, the bank Klarna decided to prohibit all its employees from flying within Europe and to discourage long-haul flights.
Tilburg University: The 'TiU employees business travel compensation' scheme as adopted on 1 January 2018 states that, "due to sustainability considerations", trips to destinations abroad up to 500 kilometres are "in principle" made by public transport (meaning bus or train) or one's own mode of transport (mostly cars); beyond 500 kilometres, airplanes may be used. If the rules are not obeyed, the TiU will not reimburse the travel costs. A February 2019 inquiry showed that, amongst the employees' top 10 destinations within Europe in 2018, only one (London, at no. 7) was within the 500 kilometre limit, apparently demonstrating the policy's success, although central oversight of compliance appeared to be lacking.
Ghent University: In June 2018, Ghent University introduced a sustainable travel policy to cut down its personnel's 5,300 annual flights (causing almost 15% of its emissions), most of which had destinations within Europe. Going forward, business flights were forbidden to 'green cities', meaning those reachable by bus or train within 6 hours, or "if the travel time by train is no longer than the travel time by plane (duration of the flight + 2 hours, being the standard duration of travel time to the airport + duration of check-in + duration of transfer)". For flights to 'orange cities', which are reachable by train within 8 hours, staff would be recommended but not required to take the bus or train as an alternative. Exceptions to these rules due to unusual circumstances might be granted after a formal request. All future business flights' carbon emissions had to be offset as well.
University of Groningen: In May 2019, the university announced that henceforth it would prohibit its personnel from flying distances shorter than 500 kilometres, or shorter than can be travelled by train in 6 hours. The ban sought to slash the approximately 5,500 annual flights taken by university staff to attend congresses and symposia abroad, which had caused 15 million kilograms of CO2 emissions in the previous 3 years.
University of Geneva: In September 2019, it was announced that the approximately 4,000 annual flights taken by university staff to attend conferences and meetings would be drastically reduced in order to contribute to emission cuts. Amongst other measures, more video conferencing would replace real-life events, flights over distances travelable by train in 4 hours and business class flights within Europe and the MENA region would be prohibited, and emissions created by unavoidable airplane travel would be compensated.
Eberswalde University for Sustainable Development: On 19 September 2019, Eberswalde became the first university in Germany to mandate its staff to avoid flying distances under 1000 kilometres, unless the train trip took longer than 10 hours, or permission was granted for exceptional circumstances. As a university focused on sustainability, it concluded it should take a leading role in more sustainable transport, including eliminating the annual short-haul flight emissions, which accounted for 10% of all of its emissions in 2018.
HTW Berlin: In late September 2019, the Berlin-based Hochschule für Technik und Wirtschaft announced it would scrap all staff short-haul flights travelable by train in 6 hours from 1 January 2020. The institution's annual aviation emissions reportedly amounted to 263 tonnes; half of its business flights covered fewer than 750 kilometres.
Flemish Government: Since 1 October 2019, civil servants of the Flemish Government are no longer allowed to travel by airplane to destinations closer than 500 kilometres, or travelable by land within 6 hours. Exceptions were only permissible if "serious reasons" could be demonstrated.
SFB 1287 of the University of Potsdam: The SFB 1287 'Limits of Variability in Language' research group of the University of Potsdam no longer reimburses business flights shorter than 1000 kilometres or 12 hours of train travel, effective 1 January 2020.
Institut für Energietechnik of the Hochschule für Technik Rapperswil: 88% of Institut für Energietechnik members voted in favour (with 6% abstaining) of introducing a short-haul flight ban for personnel by the end of January 2020, covering flights under 1000 kilometres or routes travelable by alternative means of transport within 12 hours.
Wageningen University and Research: The WUR board announced a new sustainable travel policy in February 2020, mandating its staff (who flew 10,000 times in 2017, causing 200 tons of CO2 emissions) to travel by train for trips of 6 hours or less, with the train also being 'preferred' for trips taking 6 to 8 hours. Flying shorter distances would be allowed only "when there are 'exceptionally good reasons' and with the boss's approval"; these reasons would be evaluated after a year.
Radboud University Nijmegen: In March 2020, on the Radboud Green Office's recommendation, the board announced employees were no longer allowed to take business flights travelable by train in 7 hours, beginning in September 2020. It also planned to set up a partnership with an external travel agency to regulate its employees' travelling behaviour without violating their privacy, and invest in better video conferencing technology to make travel unnecessary. According to research by two HAN students, the plan would save Radboud University about 10% of all its emissions.
Canton of Basel-Stadt: In June 2020, all government employees were prohibited from taking flights to destinations closer than 1000 kilometres to the city of Basel for environmental reasons.
Public debate
European Union
During a televised debate ahead of the 2019 European Parliament election in May 2019, European Commission presidential candidate Frans Timmermans proposed banning all short-haul flights in the European Union, with his opponent Manfred Weber partially agreeing that they should be reduced. Analysts pointed out that there was no agreed definition of the term 'short-haul flights', and that it could pose far-reaching implications for smaller regional airports that primarily serve domestic flights. In a September–October 2019 poll conducted by the European Investment Bank (EIB) amongst 28,088 EU citizens from the then 28 member states, 62% said they were in favour of banning 'short-haul flights'; the survey did not define the term.
Flanders
In August 2010, activist group Wiloo (Werkgroep rondom de Impact van de Luchthaven van Oostende op de Omgeving) demanded a short-haul flight ban and a domestic kerosene tax in Flanders, similar to those imposed in Wallonia in 2006 and the Netherlands in 2005 respectively, due to the rapid increase in polluting domestic flights. A spokesperson said 700 flights (20%) in or out of Ostend covered only 300 kilometres or less, adding that it was 12 times more expensive to transport passengers from Ostend to Brussels by airplane than by bus.
On 9 June 2020, during a lull in the COVID-19 pandemic in Belgium, Flemish Transport Minister Lydia Peeters participated in a short-haul flight of ASL Group from Brussels via Knokke to Antwerp, claiming she wanted to promote regional airports such as Antwerp, Ostend and Kortrijk during the aviation crisis, because she was "convinced that regional airports have a future in Flanders because of their economic importance." For several days, her action was fiercely criticised by citizens and environmental organisations, who argued regional airports were "not economically essential at all, but a source of damaging and perfectly avoidable emissions". Groen politician Imade Annouri remarked: "This is utterly sending the wrong signal. Several countries around us are abolishing short-haul flights and investing in high-speed rail instead. (...) Businessmen can perfectly take the train to European destinations." In light of the climate crisis, the Minister's decision was alleged to be "irresponsible to society". Peeters felt the need to apologise on three different occasions, first explaining she had accepted the proposal "to take part in a press flight because business flights are an essential pillar of our regional airports", eventually expressing regret and declaring she should not have embarked on the flight.
Germany
Timmermans' proposal triggered a fierce debate in Germany about banning short-haul flights (meaning shorter than 1,500 kilometres), with some politicians agreeing with him, others saying it went too far, and others supporting measures they deemed more appropriate. In mid-October 2019, the German Finance Ministry announced that it would not restrict short-distance flights, but would almost double the short-haul air passenger taxes instead, from 7.50 to 13.03 euros; medium-haul taxes would increase from 23.43 to 33.01 and long-haul taxes from 42.18 to 59.43 euros. Meanwhile, train tickets would become 10% cheaper.
By July 2019, most political parties in Germany, including the Left Party, the Social Democrats, the Green Party and the Christian Democrats, started to agree to move all governmental institutions remaining in Bonn (the former capital of West Germany) to Berlin (the official capital since German Reunification in 1990), because ministers and civil servants were flying between the two cities about 230,000 times a year, which was considered too impractical, expensive and environmentally damaging. The distance of 500 kilometres between Bonn and Berlin could only be travelled by train in 5.5 hours, so either the train connections required upgrading, or Bonn had to be abolished as the secondary capital.
Netherlands
Although in March 2019 almost all Dutch parliamentary parties agreed that train travel should replace short-distance aviation, there were also some practical problems to be solved before trains could become a viable alternative, such as the difficulty of buying a combined train/plane ticket, the lack of a direct Thalys connection from Amsterdam Central and Paris-North to Brussels Airport (forcing passengers to switch trains at Brussels-South), and the fact that the Benelux train (which does directly connect Schiphol and Zaventem) takes over 2 hours (mostly due to the lack of a high-speed rail line between Antwerp and Brussels). In November 2019, a Qatar Airways Boeing 777 cargo flight from Doha to Mexico City with stopovers in Maastricht and Liège sparked controversy as "the most bizarre flight ever": the leg between the latter two covers only 38 kilometres and takes just 9 minutes, and was flown merely because a single Dutch customer requested their weekly package be delivered in Maastricht rather than Liège. In response, two of the four Dutch government parties suggested prohibiting all flights shorter than 100 kilometres.
In September–October 2022, research by RTL Nieuws revealed that Dutch ministers, state secretaries and the royal couple Willem-Alexander and Máxima were increasingly making short business flights on the Dutch government aircraft PH-GOV (a Boeing 737-700), private aircraft or commercial airliners (a 38% increase compared to 2019), even though this was contrary to the January 2022 coalition agreement to discourage short flights. Moreover, many aircraft flew back and forth empty in inefficient and environmentally polluting ways, and most distances could in principle have been covered perfectly well by train or, if necessary, scheduled flights. This was evident in part because some ministers, such as Dijkgraaf (education) and Harbers (infrastructure), travelled by official car or train from The Hague to Luxembourg or Paris in the first eight months of 2022, whereas Prime Minister Rutte and Minister Hoekstra (foreign affairs) together made 8 out of 12 trips to Luxembourg or Paris by air, mostly on government aircraft PH-GOV. Aviation experts were critical of the needlessly polluting and expensive travel behaviour of ministers who were supposed to set a good example, and private aviation was also unhappy with the many short flights because of the high costs. In response to RTL's findings, coalition parties D66 and ChristenUnie reacted critically to the cabinet, and coalition party CDA also raised parliamentary questions about short and environmentally polluting empty flights, for instance between Amsterdam and Rotterdam, for a limited gain of time for a minister. Opposition party GroenLinks wanted to table a motion to force the cabinet and the king to travel by train for trips shorter than 700 kilometres. The Ministry of Infrastructure confirmed that the climate impact of aviation needed to be reduced, although ministers also needed to be able to do their jobs efficiently. The Interior Ministry also said that short flights were often unnecessary: "The time savings with flying are very limited, flying has more logistical challenges and is less flexible in terms of time than a train connection." However, according to the State Information Service and the Ministry of Foreign Affairs, alternative transport was not possible for all trips by commercial private jets, "because the government plane was not available and other means of transport did not fit agendas."
Universities
Movements
In response to the 8 October 2018 IPCC report, more than 650 Danish academics from various disciplines published an open letter on 19 November 2018, calling on the managements of (Danish) universities to lead by example in combating climate change. Item number one on their five-point priority list was "drastically reducing flights and supporting climate-friendly alternatives". On 4 February 2019, 55 Dutch scientists, referring to the Danish initiative, published a similar "Climate Letter", including as item no. 2: "Drastically reducing flights, with insightful targets, including through exercising critical consideration before travelling, using alternative modes of transport, and investing in climate-friendly alternatives and behavioural change to enable remote participation at academic consultations, conferences and exchanges." By 7 March 2019, all 14 Dutch universities (united in the VSNU) had expressed their support for the Climate Letter, which had been signed by almost 1,300 members of staff at that point. VSNU President Pieter Duisenberg stated: "The academic community can and must play a leading role in addressing climate change. This not only involves knowledge, but also whatever we as universities can do ourselves." Many Dutch universities were inspired by Ghent University's sustainable travel policy. In July 2019, Technische Universität Berlin professor Martina Schäfer similarly initiated a 'Commitment to renounce short-haul (business) flights' (described as those "travelable without flying in below 12 hours", or 1,000 kilometres), which was signed by over 1,700 German academics by 20 September 2019. The day before, Eberswalde University for Sustainable Development had become the first German university to make the voluntary commitment to avoid flying distances shorter than 1000 kilometres or 10 hours of train travel mandatory for all employees.
Discussions
Aside from advocating for more sustainable short-distance travel and arguing that the scientific community should lead by example, some academics have questioned the necessity of, and thereby the justification for, many international flights taken to attend scientific conferences or researchers' meetings. Liesbeth Enneking (Erasmus University Rotterdam) stated that congresses have little added value, as researchers can already access their colleagues around the world through the online publication of their papers, and meeting peers in real life and speaking to them face to face is rarely important for their work. "Attending congresses is sometimes mostly just stimulating your ego, and a nice trip, (...) but for the planet's sake, this is a privilege that we can no longer afford on this scale", Enneking stated; she stopped flying in 2017. Cody Hochstenbach (University of Amsterdam) narrated how many short (for example, two-day) international research meetings are "great to catch up with each other and to discover a new city, but seldom are they actually productive. I was therefore enormously surprised that a Japanese professor had flown all the way to attend this meeting [in Le Havre]. Moreover, he had a heavy jet lag and regularly fell asleep during the sessions. It's obviously an expensive affair to have someone flown in across half the planet for just two days. I find it even more insane that universities facilitate and even encourage this behaviour." Referring to arguments made by other academics, he added that this behaviour was a form of socioeconomic injustice towards many people with lower education and income who could never even afford such long flights. Individual scientists should take their responsibility and fulfil the burden of proof to demonstrate that their flights to such conferences are really useful, and cannot be replaced by trains. Climate lawyer Laura Burgers said: "Some scientific conferences abroad are no doubt useful, but we should be honest: often it's just fun to make a trip. Such advantages do not outweigh the environmental damage, however," recounting her experience of a conference where scientists discussed research that had already been published, which was thus "a waste of time and flight emissions".
While acknowledging that the current intensity should be reduced, other academics partially disagree, saying that, especially for young researchers, getting and staying in touch with their international colleagues in real life can really help to establish their network and advance their career, and make interactions easier and more complete than via video. Astrophysicist Ralph Wijers pointed out that his research projects, including trips he needed to make for them, were funded by several different organisations who required him to travel with the fewest expenses possible, often forcing him to take generally cheap plane tickets rather than relatively costly train tickets: 'We should address this on a larger scale: the more pollutive for the environment, the more expensive I think it should be.'
Alternate approaches
Some universities have consciously decided not to impose a formal ban on short-haul business flights, but instead encourage their employees to consider alternative modes of transportation, or to fully offset their carbon emissions, or to consider videoconferencing instead of flying to conferences and meetings, judging that such an alternate approach would still be sufficient to meet set environmental goals. For example, the University of Copenhagen's prorector stated in February 2020: "We're very keen to limit climate changes and we intend to reduce our total footprint even more. (...) The University's new travel policy does not impose a ban on air travelling, but sets out recommendations and suggestions for how to change travel habits. It is a matter of choice of transportation and providing alternatives to air travel. For example meetings and video conferences via digital platforms like Skype."
Leiden University has not introduced short-distance restrictions on flights, but has set train travel as the norm for personnel journeys shorter than 6 hours or 500 kilometres since 2017. The university aimed to keep thus-defined short-haul flights below 10% of all flights; since the share was 5.7% in 2017 and further decreased to 4.5% in 2019, the policy was hailed as a success. In 2018, 90% of flight emissions were compensated by payments to, for example, the Fair Climate Fund.
In November 2019, Utrecht University chose not to impose a flight ban, but instead to use various other measures, such as providing employees with information about alternatives, investing in better video conferencing facilities, a train zone map that calculates travel times, and compensation for train ticket purchases, in order to halve its number of flight kilometres by 2030. A flight carbon offset requirement had already been imposed in 2018.
See also
Aviation taxation and subsidies
EU aviation fuel taxation
Mobility transition
Night flying restrictions (including night flight bans)
Single European Sky
References
Aviation and the environment
Aviation law
Environmental mitigation
Transport policy | Short-haul flight ban | Physics,Chemistry,Engineering | 5,820 |
44,597,982 | https://en.wikipedia.org/wiki/Rhizoclosmatium | Rhizoclosmatium is a genus of fungi classified in the family Chytriomycetaceae. It was circumscribed by Danish mycologist Henning Eiler Petersen in 1903. The genus contains four species.
References
External links
Chytridiomycota genera | Rhizoclosmatium | Biology | 65 |
9,107,270 | https://en.wikipedia.org/wiki/UTOPIA%20%28bioinformatics%20tools%29 | UTOPIA (User-friendly Tools for Operating Informatics Applications) is a suite of free tools for visualising and analysing bioinformatics data. Based on an ontology-driven data model, it contains applications for viewing and aligning protein sequences, rendering complex molecular structures in 3D, and for finding and using resources such as web services and data objects. There are two major components, the protein analysis suite and UTOPIA documents.
Utopia Protein Analysis suite
The Utopia Protein Analysis suite is a collection of interactive tools for analysing protein sequence and protein structure. User-facing visualisation applications are backed by a shared data model that allows them to work together and hides much of the tedious work of dealing with file formats and web services.
Utopia Documents
Utopia Documents is a reader for the scientific literature that combines the convenience and reliability of the Portable Document Format (PDF) with the flexibility and interactivity of the web.
History
Between 2003 and 2005 work on UTOPIA was funded via The e-Science North West Centre based at The University of Manchester by the Engineering and Physical Sciences Research Council, UK Department of Trade And Industry, and the European Molecular Biology Network (EMBnet). Since 2005 work continues under the EMBRACE European Network of Excellence.
UTOPIA's CINEMA (Colour INteractive Editor for Multiple Alignments), a tool for Sequence Alignment, is the latest incarnation of software originally developed at The University of Leeds to aid the analysis of G protein-coupled receptors (GPCRs). SOMAP, a Screen Oriented Multiple Alignment Procedure was developed in the late 1980s on the VMS computer operating system, used a monochrome text-based VT100 video terminal, and featured context-sensitive help and pulldown menus some time before these were standard operating system features.
SOMAP was followed by a Unix tool called VISTAS (VIsualizing STructures And Sequences) which included the ability to render 3D molecular structure and generate plots and statistical representations of sequence properties.
The first tool under the CINEMA banner developed at The University of Manchester was a Java-based applet launched via web pages, which is still available but is no longer maintained. A standalone Java version, called CINEMA-MX, was also released but is no longer readily available.
A C++ version of CINEMA, called CINEMA5 was developed early on as part of the UTOPIA project, and was released as a stand-alone sequence alignment application. It has now been replaced by a version of the tool integrated with UTOPIA's other visualisation applications, and its name has reverted simply to CINEMA.
References
Bioinformatics software
Computational science
Engineering and Physical Sciences Research Council
Department of Computer Science, University of Manchester
Science and technology in Greater Manchester | UTOPIA (bioinformatics tools) | Mathematics,Biology | 547 |
74,366 | https://en.wikipedia.org/wiki/Square%20metre | The square metre (international spelling as used by the International Bureau of Weights and Measures) or square meter (American spelling) is the unit of area in the International System of Units (SI) with symbol m2. It is the area of a square with sides one metre in length.
Adding SI prefixes creates multiples and submultiples; however, because the unit is squared, the scaling factor is the prefix's power of 10 raised to the unit's exponent. For example, 1 kilometre is 103 (one thousand) times the length of 1 metre, but 1 square kilometre is (103)2 (106, one million) times the area of 1 square metre, and 1 cubic kilometre is (103)3 (109, one billion) cubic metres.
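A minimal sketch of this scaling rule (the prefix exponents are standard SI; the code itself is purely illustrative):

```python
# For a prefix that scales length by 10**n, area scales by 10**(2*n)
# and volume by 10**(3*n).
PREFIXES = {"kilo": 3, "centi": -2, "milli": -3}

def scale_factor(prefix: str, dimension: int) -> float:
    """Factor relating the prefixed unit to the base unit for a given
    dimension (1 = length, 2 = area, 3 = volume)."""
    return 10.0 ** (PREFIXES[prefix] * dimension)

print(scale_factor("kilo", 2))   # 1 km2 = 1,000,000.0 m2
print(scale_factor("centi", 2))  # 1 cm2 = 0.0001 m2
```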
SI prefixes applied
The square metre may be used with all SI prefixes used with the metre.
Unicode characters
Unicode has several characters used to represent metric area units, but these exist only for compatibility with East Asian character encodings and are not meant to be used in new documents.
Instead, the Unicode superscript can be used, as in m².
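For illustration, a small sketch contrasting the recommended superscript with the legacy compatibility character (code points as defined by Unicode):

```python
# U+00B2 is SUPERSCRIPT TWO; U+33A1 is the CJK compatibility character
# SQUARE M SQUARED, kept only for round-tripping East Asian encodings.
recommended = "m\u00B2"
legacy = "\u33A1"
print(recommended, legacy)  # m² ㎡
```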
Conversions
One square metre is equal to:
0.000001 square kilometres (km2)
10,000 square centimetres (cm2)
0.0001 hectares (ha)
0.001 decares (daa)
0.01 ares (a)
0.1 deciares (da)
1 centiare (ca)
≈ 0.000247 acres
≈ 0.0247 cents
≈ 1.1960 square yards
≈ 10.7639 square feet
≈ 1,550.0031 square inches
See also
Conversion of units § Area
Orders of magnitude (area)
SI
SI prefix
Notes
External links
BIPM (SI maintenance agency) (home page)
BIPM brochure (SI reference)
Units of area
SI derived units | Square metre | Mathematics | 324 |
3,560,079 | https://en.wikipedia.org/wiki/Acamprosate | Acamprosate, sold under the brand name Campral, is a medication which reduces alcoholism cravings. It is thought to stabilize chemical signaling in the brain that would otherwise be disrupted by alcohol withdrawal. When used alone, acamprosate is not an effective therapy for alcohol use disorder in most individuals, as it only addresses withdrawal symptoms and not psychological dependence. It facilitates a reduction in alcohol consumption as well as full abstinence when used in combination with psychosocial support or other drugs that address the addictive behavior.
Serious side effects include allergic reactions, abnormal heart rhythms, and low or high blood pressure, while less serious side effects include headaches, insomnia, and impotence. Diarrhea is the most common side-effect. It is unclear if use is safe during pregnancy.
It is on the World Health Organization's List of Essential Medicines.
Medical uses
Acamprosate is useful when used along with counseling in the treatment of alcohol use disorder. Over three to twelve months it increases the number of people who do not drink at all and the number of days without alcohol. It appears to work as well as naltrexone for maintenance of abstinence from alcohol; however, naltrexone works slightly better for reducing alcohol cravings and heavy drinking, and acamprosate tends to work more poorly outside of Europe, where treatment services are less robust.
Contraindications
Acamprosate is primarily removed by the kidneys. A dose reduction is suggested in those with moderately impaired kidneys (creatinine clearance between 30 mL/min and 50 mL/min). It is also contraindicated in those who have a strong allergic reaction to acamprosate calcium or any of its components.
Adverse effects
The US label carries warnings about increases in suicidal behavior, major depressive disorder, and kidney failure.
Adverse effects that caused people to stop taking the drug in clinical trials included diarrhea, nausea, depression, and anxiety.
Potential adverse effects include headache, stomach pain, back pain, muscle pain, joint pain, chest pain, infections, flu-like symptoms, chills, heart palpitations, high blood pressure, fainting, vomiting, upset stomach, constipation, increased appetite, weight gain, edema, sleepiness, decreased sex drive, impotence, forgetfulness, abnormal thinking, abnormal vision, distorted sense of taste, tremors, runny nose, coughing, difficulty breathing, sore throat, bronchitis, and rashes.
Pharmacology
Pharmacodynamics
The pharmacodynamics of acamprosate are complex and not fully understood; however, it is believed to act as an NMDA receptor antagonist and a positive allosteric modulator of GABAA receptors.
Its activity on those receptors is indirect, unlike that of most other agents used in this context. An inhibition of the GABA-B system is believed to cause indirect enhancement of GABAA receptors. The effects on the NMDA complex are dose-dependent; the product appears to enhance receptor activation at low concentrations, while inhibiting it when consumed in higher amounts, which counters the excessive activation of NMDA receptors in the context of alcohol withdrawal.
The product also increases the endogenous production of taurine.
Ethanol and benzodiazepines act on the central nervous system by binding to the GABAA receptor, increasing the effects of the inhibitory neurotransmitter GABA (i.e., they act as positive allosteric modulators at these receptors). In alcohol use disorder, one of the main mechanisms of tolerance is attributed to GABAA receptors becoming downregulated (i.e. these receptors become less sensitive to GABA). When alcohol is no longer consumed, these down-regulated GABAA receptor complexes are so insensitive to GABA that the typical amount of GABA produced has little effect, leading to physical withdrawal symptoms; since GABA normally inhibits neural firing, GABAA receptor desensitization results in unopposed excitatory neurotransmission (i.e., fewer inhibitory postsynaptic potentials occur through GABAA receptors), leading to neuronal over-excitation (i.e., more action potentials in the postsynaptic neuron). One of acamprosate's mechanisms of action is the enhancement of GABA signaling at GABAA receptors via positive allosteric receptor modulation. It has been purported to open the chloride ion channel in a novel way, as it does not require GABA as a cofactor, making it less liable to produce dependence than benzodiazepines. Acamprosate has been successfully used to control tinnitus, hyperacusis, ear pain, and inner ear pressure during alcohol use due to spasms of the tensor tympani muscle.
In addition, alcohol also inhibits the activity of N-methyl-D-aspartate receptors (NMDARs). Chronic alcohol consumption leads to the overproduction (upregulation) of these receptors. Thereafter, sudden alcohol abstinence causes the excessive numbers of NMDARs to be more active than normal and to contribute to the symptoms of delirium tremens and excitotoxic neuronal death. Withdrawal from alcohol induces a surge in release of excitatory neurotransmitters like glutamate, which activates NMDARs. Acamprosate reduces this glutamate surge. The drug also protects cultured cells from excitotoxicity induced by ethanol withdrawal and from glutamate exposure combined with ethanol withdrawal.
The substance also helps re-establish a standard sleep architecture by normalizing stage 3 and REM sleep phases, which is believed to be an important aspect of its pharmacological activity.
Pharmacokinetics
Acamprosate is not metabolized by the human body. Acamprosate's absolute bioavailability from oral administration is approximately 11%, and its bioavailability is decreased when taken with food. Following administration and absorption of acamprosate, it is excreted unchanged (i.e., as acamprosate) via the kidneys.
Its absorption and elimination are very slow, with a tmax of 6 hours and an elimination half-life of over 30 hours.
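As an illustration of what a 30-hour half-life implies, a minimal sketch assuming simple first-order elimination (an assumption of this example, not a claim taken from the sources above):

```python
def fraction_remaining(t_hours: float, half_life_hours: float = 30.0) -> float:
    """Fraction of the absorbed dose remaining after t_hours,
    assuming first-order (exponential) elimination."""
    return 0.5 ** (t_hours / half_life_hours)

print(fraction_remaining(30))   # 0.5 after one half-life
print(fraction_remaining(120))  # 0.0625 after five days (four half-lives)
```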
History
Acamprosate was developed by Lipha, a subsidiary of Merck KGaA, and was approved for marketing in Europe in 1989.
In October 2001 Forest Laboratories acquired the rights to market the drug in the US.
It was approved by the US Food and Drug Administration (FDA) in July 2004.
The first generic versions of acamprosate were launched in the US in 2013.
As of 2015, acamprosate was in development by Confluence Pharmaceuticals as a potential treatment for fragile X syndrome. The drug was granted orphan drug designation for this use by the FDA in 2013, and by the European Medicines Agency (EMA) in 2014.
Society and culture
Names
Acamprosate is the International Nonproprietary Name (INN) and the British Approved Name (BAN). Acamprosate calcium is the United States Adopted Name (USAN) and the Japanese Accepted Name (JAN). It is also technically known as N-acetylhomotaurine or as calcium acetylhomotaurinate.
It is sold under the brand name Campral.
Research
In addition to its apparent ability to help people refrain from drinking, some evidence suggests that acamprosate is neuroprotective (that is, it protects neurons from damage and death caused by the effects of alcohol withdrawal, and possibly other causes of neurotoxicity).
References
Acetamides
Addiction psychiatry
Drug rehabilitation
Drugs with unknown mechanisms of action
Drugs developed by AbbVie
Drugs developed by Merck
Neuroprotective agents
Sulfonic acids
Substance-related disorders
World Health Organization essential medicines | Acamprosate | Chemistry | 1,634 |
3,387,739 | https://en.wikipedia.org/wiki/San%20Francisco%20Sex%20Information | San Francisco Sex Information (SFSI) is an organization that provides free sex information via the World Wide Web, e-mail, telephone, and online social networking. SFSI also offers a bi-annual sex educator training program and various continuing education lectures, all located in San Francisco, California.
Overview
Founded in 1973 by Maggi Rubenstein, Margo Rila, and Tony Ayers as a telephone service, SFSI describes its mission as providing "free, confidential, accurate, non-judgmental information about sex and reproductive health." Graduates of its training program include Isadora Alman, Joani Blank, Dossie Easton, Susie Bright, Patrick Califia, Sybil Holiday, Andrea Nemerson, Carol Queen, David Lourea, Veronica Monet, Midori and Violet Blue.
The organization answers about 3,000 phone calls and about twice as many emails every year. Since its inception, SFSI's Basic Sex Educator training program has graduated over 1,900 trainees.
References
External links
Non-profit organizations based in San Francisco
Sex education in the United States
501(c)(3) organizations | San Francisco Sex Information | Biology | 229 |
63,209,416 | https://en.wikipedia.org/wiki/N-Nitrosoglyphosate | N-Nitrosoglyphosate is the nitrosamine degradation product and synthetic impurity of glyphosate herbicide.
The US EPA limits N-nitrosoglyphosate impurity to a maximum of 1 ppm in glyphosate formulated products. N-Nitrosoglyphosate can also form from the reaction of nitrates and glyphosate. Formation of N-nitrosoglyphosate has been observed in soils treated with sodium nitrite and glyphosate at elevated levels, though formation is not expected under typical field conditions.
References
Herbicides
Nitrosamines
Acetic acids
Phosphonic acids | N-Nitrosoglyphosate | Biology | 147 |
35,076,621 | https://en.wikipedia.org/wiki/Novikov%E2%80%93Shubin%20invariant | In mathematics, a Novikov–Shubin invariant, introduced by , is an invariant of a compact Riemannian manifold related to the spectrum of the Laplace operator acting on square-integrable differential forms on its universal cover.
The Novikov–Shubin invariant gives a measure of the density of eigenvalues around zero. It can be computed from a triangulation of the manifold, and it is a homotopy invariant. In particular, it does not depend on the chosen Riemannian metric on the manifold.
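A common formulation from the L²-invariants literature (the notation below is one conventional choice, supplied here for illustration rather than taken from this article): for the Laplacian $\Delta_p$ on $L^2$ $p$-forms on the universal cover, with spectral projections $E_\lambda$, one sets

```latex
% Spectral density function, via the von Neumann trace over the deck group \Gamma:
F_p(\lambda) \;=\; \operatorname{tr}_{\mathcal{N}(\Gamma)} E_\lambda(\Delta_p),
\qquad \lambda \ge 0.

% The p-th Novikov--Shubin invariant measures how fast F_p grows just above 0:
\alpha_p \;=\; \liminf_{\lambda \to 0^{+}}
  \frac{\ln\bigl(F_p(\lambda) - F_p(0)\bigr)}{\ln \lambda} \;\in\; [0, \infty]
```

with $\alpha_p$ declared infinite when $F_p(\lambda) = F_p(0)$ near $0$; larger values of $\alpha_p$ correspond to a sparser spectrum near zero.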
Notes
References
Differential geometry
Algebraic topology | Novikov–Shubin invariant | Mathematics | 116 |
845,722 | https://en.wikipedia.org/wiki/Tunnel%20magnetoresistance | Tunnel magnetoresistance (TMR) is a magnetoresistive effect that occurs in a magnetic tunnel junction (MTJ), which is a component consisting of two ferromagnets separated by a thin insulator. If the insulating layer is thin enough (typically a few nanometres), electrons can tunnel from one ferromagnet into the other. Since this process is forbidden in classical physics, the tunnel magnetoresistance is a strictly quantum mechanical phenomenon, and lies in the study of spintronics.
Magnetic tunnel junctions are manufactured in thin film technology. On an industrial scale the film deposition is done by magnetron sputter deposition; on a laboratory scale molecular beam epitaxy, pulsed laser deposition and electron beam physical vapor deposition are also utilized. The junctions are prepared by photolithography.
Phenomenological description
The direction of the two magnetizations of the ferromagnetic films can be switched individually by an external magnetic field. If the magnetizations are in a parallel orientation it is more likely that electrons will tunnel through the insulating film than if they are in the oppositional (antiparallel) orientation. Consequently, such a junction can be switched between two states of electrical resistance, one with low and one with very high resistance.
History
The effect was originally discovered in 1975 by Michel Jullière (University of Rennes, France) in Fe/Ge-O/Co junctions at 4.2 K. The relative change of resistance was around 14%, and did not attract much attention. In 1991 Terunobu Miyazaki (Tohoku University, Japan) found a change of 2.7% at room temperature. Later, in 1994, Miyazaki found 18% in junctions of iron separated by an amorphous aluminum oxide insulator, and Jagadeesh Moodera found 11.8% in junctions with electrodes of CoFe and Co. The highest effects observed at this time with aluminum oxide insulators were around 70% at room temperature.
Since the year 2000, tunnel barriers of crystalline magnesium oxide (MgO) have been under development. In 2001 Butler and Mathon independently made the theoretical prediction that using iron as the ferromagnet and MgO as the insulator, the tunnel magnetoresistance can reach several thousand percent. The same year, Bowen et al. were the first to report experiments showing a significant TMR in a MgO based magnetic tunnel junction [Fe/MgO/FeCo(001)].
In 2004, Parkin and Yuasa were able to make Fe/MgO/Fe junctions that reach over 200% TMR at room temperature. In 2008, effects of up to 604% at room temperature and more than 1100% at 4.2 K were observed in junctions of CoFeB/MgO/CoFeB by S. Ikeda, H. Ohno group of Tohoku University in Japan.
Applications
The read-heads of modern hard disk drives work on the basis of magnetic tunnel junctions. TMR, or more specifically the magnetic tunnel junction, is also the basis of MRAM, a new type of non-volatile memory. The 1st generation technologies relied on creating cross-point magnetic fields on each bit to write the data on it, although this approach has a scaling limit at around 90–130 nm. There are two 2nd generation techniques currently being developed: Thermal Assisted Switching (TAS) and Spin-transfer torque.
Magnetic tunnel junctions are also used for sensing applications. Today they are commonly used for position sensors and current sensors in various automotive, industrial and consumer applications. These higher performance sensors are replacing Hall sensors in many applications due to their improved performance.
Physical explanation
The relative resistance change, or effect amplitude, is defined as

$$\mathrm{TMR} := \frac{R_{\mathrm{ap}} - R_{\mathrm{p}}}{R_{\mathrm{p}}}$$

where $R_{\mathrm{ap}}$ is the electrical resistance in the anti-parallel state, whereas $R_{\mathrm{p}}$ is the resistance in the parallel state.
The TMR effect was explained by Jullière with the spin polarizations of the ferromagnetic electrodes. The spin polarization P is calculated from the spin-dependent density of states (DOS) at the Fermi energy:

$$P = \frac{D_{\uparrow}(E_{\mathrm{F}}) - D_{\downarrow}(E_{\mathrm{F}})}{D_{\uparrow}(E_{\mathrm{F}}) + D_{\downarrow}(E_{\mathrm{F}})}$$
The spin-up electrons are those with spin orientation parallel to the external magnetic field, whereas the spin-down electrons have anti-parallel alignment with the external field. The relative resistance change is now given by the spin polarizations of the two ferromagnets, P1 and P2:

$$\mathrm{TMR} = \frac{2 P_1 P_2}{1 - P_1 P_2}$$
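A minimal sketch of this expression in code (the polarization value for iron-like electrodes is a commonly quoted figure, used here purely as an example):

```python
def julliere_tmr(p1: float, p2: float) -> float:
    """Julliere's model: TMR = 2*P1*P2 / (1 - P1*P2).
    Diverges as P1*P2 -> 1, the half-metallic limit."""
    return 2.0 * p1 * p2 / (1.0 - p1 * p2)

print(f"{julliere_tmr(0.44, 0.44):.1%}")  # ~48.0% for P ~ 0.44 electrodes
```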
If no voltage is applied to the junction, electrons tunnel in both directions with equal rates. With a bias voltage U, electrons tunnel preferentially to the positive electrode. With the assumption that spin is conserved during tunneling, the current can be described in a two-current model. The total current is split in two partial currents, one for the spin-up electrons and another for the spin-down electrons. These vary depending on the magnetic state of the junctions.
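The two-current model can likewise be sketched directly: in each magnetic state the conductance is the sum of a spin-up and a spin-down channel, each taken (in this toy example, with all proportionality constants set to one) as the product of the source and drain DOS for that spin. The resulting TMR reproduces the Jullière value for the same polarizations:

```python
def conductances(d1_up, d1_dn, d2_up, d2_dn):
    """Toy two-current model: channel conductance ~ product of electrode DOS."""
    g_parallel = d1_up * d2_up + d1_dn * d2_dn      # up->up and down->down
    g_antiparallel = d1_up * d2_dn + d1_dn * d2_up  # channels are crossed
    return g_parallel, g_antiparallel

# DOS values corresponding to polarization P = (0.8 - 0.2) / (0.8 + 0.2) = 0.6:
g_p, g_ap = conductances(0.8, 0.2, 0.8, 0.2)
tmr = g_p / g_ap - 1.0  # equals (R_ap - R_p) / R_p
print(f"{tmr:.1%}")     # 112.5%, matching 2*P*P / (1 - P*P) for P = 0.6
```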
There are two possibilities to obtain a defined anti-parallel state. First, one can use ferromagnets with different coercivities (by using different materials or different film thicknesses). And second, one of the ferromagnets can be coupled with an antiferromagnet (exchange bias). In this case the magnetization of the uncoupled electrode remains "free".
The TMR becomes infinite if P1 and P2 equal 1, i.e. if both electrodes have 100% spin polarization. In this case the magnetic tunnel junction becomes a switch, that switches magnetically between low resistance and infinite resistance. Materials that come into consideration for this are called ferromagnetic half-metals. Their conduction electrons are fully spin-polarized. This property is theoretically predicted for a number of materials (e.g. CrO2, various Heusler alloys) but its experimental confirmation has been the subject of subtle debate. Nevertheless, if one considers only those electrons that enter into transport, measurements by Bowen et al. of up to 99.6% spin polarization at the interface between La0.7Sr0.3MnO3 and SrTiO3 pragmatically amount to experimental proof of this property.
The TMR decreases with both increasing temperature and increasing bias voltage. Both can be understood in principle by magnon excitations and interactions with magnons, as well as due to tunnelling with respect to localized states induced by oxygen vacancies (see Symmetry Filtering section hereafter).
Symmetry-filtering in tunnel barriers
Prior to the introduction of epitaxial magnesium oxide (MgO), amorphous aluminum oxide was used as the tunnel barrier of the MTJ, and typical room temperature TMR was in the range of tens of percent. MgO barriers increased TMR to hundreds of percent. This large increase reflects a synergetic combination of electrode and barrier electronic structures, which in turn reflects the achievement of structurally ordered junctions. Indeed, MgO filters the tunneling transmission of electrons with a particular symmetry that are fully spin-polarized within the current flowing across body-centered cubic Fe-based electrodes. Thus, in the MTJ's parallel (P) state of electrode magnetization, electrons of this symmetry dominate the junction current. In contrast, in the MTJ's antiparallel (AP) state, this channel is blocked, such that electrons with the next most favorable symmetry to transmit dominate the junction current. Since those electrons tunnel with respect to a larger barrier height, this results in the sizeable TMR.
Beyond these large values of TMR across MgO-based MTJs, this impact of the barrier's electronic structure on tunnelling spintronics has been indirectly confirmed by engineering the junction's potential landscape for electrons of a given symmetry. This was first achieved by examining how the electrons of a lanthanum strontium manganite half-metallic electrode with both full spin (P = +1) and symmetry polarization tunnel across an electrically biased SrTiO3 tunnel barrier. The conceptually simpler experiment of inserting an appropriate metal spacer at the junction interface during sample growth was also later demonstrated.
While theory, first formulated in 2001, predicts large TMR values associated with a 4 eV barrier height in the MTJ's P state and 12 eV in the MTJ's AP state, experiments reveal barrier heights as low as 0.4 eV. This contradiction is lifted if one takes into account the localized states of oxygen vacancies in the MgO tunnel barrier. Extensive solid-state tunnelling spectroscopy experiments across MgO MTJs revealed in 2014 that the electronic retention on the ground and excited states of an oxygen vacancy, which is temperature-dependent, determines the tunnelling barrier height for electrons of a given symmetry, and thus crafts the effective TMR ratio and its temperature dependence. This low barrier height in turn enables the high current densities required for spin-transfer torque, discussed hereafter.
Spin-transfer torque in magnetic tunnel junctions (MTJs)
The effect of spin-transfer torque has been studied and applied widely in MTJs, where there is a tunnelling barrier sandwiched between a set of two ferromagnetic electrodes such that there is (free) magnetization of the right electrode, while assuming that the left electrode (with fixed magnetization) acts as spin-polarizer. This may then be pinned to some selecting transistor in a magnetoresistive random-access memory device, or connected to a preamplifier in a hard disk drive application.
The spin-transfer torque vector, driven by the linear-response voltage, can be computed from the expectation value of the torque operator:

$$\mathbf{T} = \operatorname{Tr}\!\bigl[\hat{\rho}_{\mathrm{neq}}\,\hat{\mathbf{T}}\bigr]$$

where $\hat{\rho}_{\mathrm{neq}}$ is the gauge-invariant nonequilibrium density matrix for the steady-state transport, in the zero-temperature limit, in the linear-response regime, and the torque operator $\hat{\mathbf{T}}$ is obtained from the time derivative of the spin operator $\hat{\mathbf{S}} = \tfrac{\hbar}{2}\hat{\boldsymbol{\sigma}}$:

$$\hat{\mathbf{T}} = \frac{d\hat{\mathbf{S}}}{dt} = \frac{i}{\hbar}\bigl[\hat{H},\hat{\mathbf{S}}\bigr]$$
Using the general form of a 1D tight-binding Hamiltonian with a Zeeman (exchange) term,

$$\hat{H} = -\gamma \sum_{i} \bigl(\hat{c}_{i}^{\dagger}\hat{c}_{i+1} + \mathrm{h.c.}\bigr) \;-\; \frac{\Delta}{2} \sum_{i} \hat{c}_{i}^{\dagger}\,(\hat{\boldsymbol{\sigma}}\cdot\mathbf{m})\,\hat{c}_{i},$$

where the total magnetization (as a macrospin) is along the unit vector $\mathbf{m}$, and using the property of the Pauli matrices for arbitrary classical vectors $\mathbf{a}$ and $\mathbf{b}$, given by

$$(\hat{\boldsymbol{\sigma}}\cdot\mathbf{a})(\hat{\boldsymbol{\sigma}}\cdot\mathbf{b}) = (\mathbf{a}\cdot\mathbf{b})\,\hat{\mathbb{1}} + i\,\hat{\boldsymbol{\sigma}}\cdot(\mathbf{a}\times\mathbf{b}),$$

it is then possible to first obtain an analytical expression for the torque operator (which can be expressed in compact form using $\mathbf{m}$ and the vector of Pauli spin matrices $\hat{\boldsymbol{\sigma}}$).
The spin-transfer torque vector in general MTJs has two components, one parallel and one perpendicular to the plane spanned by the two magnetizations:

a parallel (in-plane, Slonczewski-type) component,

$$\mathbf{T}_{\parallel} = \tau_{\parallel}\;\mathbf{m}\times(\mathbf{m}\times\mathbf{m}_{\mathrm{fixed}}),$$

and a perpendicular (field-like) component,

$$\mathbf{T}_{\perp} = \tau_{\perp}\;\mathbf{m}\times\mathbf{m}_{\mathrm{fixed}}.$$
In symmetric MTJs (made of electrodes with the same geometry and exchange splitting), the spin-transfer torque vector has only one active component, as the perpendicular component disappears:

$$\mathbf{T} = \mathbf{T}_{\parallel}, \qquad \mathbf{T}_{\perp} = \mathbf{0}.$$
Therefore, only the parallel component $\mathbf{T}_{\parallel}$, as a function of the relative angle between the two magnetizations, needs to be plotted at the site of the right electrode to characterise tunnelling in symmetric MTJs, making them appealing for production and characterisation at an industrial scale.
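A minimal numerical sketch of this decomposition (the unit vectors and the prefactors tau_par and tau_perp are placeholders; in a real calculation they would come from the transport formalism above):

```python
import numpy as np

def torque(m_free, m_fixed, tau_par=1.0, tau_perp=0.0):
    """Decompose the spin-transfer torque on the free layer into the
    in-plane (Slonczewski-like) and perpendicular (field-like) parts."""
    m_free = np.asarray(m_free, dtype=float)
    m_fixed = np.asarray(m_fixed, dtype=float)
    in_plane = np.cross(m_free, np.cross(m_free, m_fixed))  # m x (m x m_fixed)
    field_like = np.cross(m_free, m_fixed)                  # m x m_fixed
    return tau_par * in_plane + tau_perp * field_like

# Symmetric MTJ in linear response: tau_perp = 0 (the default above).
print(torque([0.0, 0.0, 1.0], [1.0, 0.0, 0.0]))  # -> [-1.  0.  0.]
```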
Note:
In these calculations the active region (for which it is necessary to calculate the retarded Green's function) should consist of the tunnel barrier + the right ferromagnetic layer of finite thickness (as in realistic devices). The active region is attached to the left ferromagnetic electrode (modeled as semi-infinite tight-binding chain with non-zero Zeeman splitting) and the right N electrode (semi-infinite tight-binding chain without any Zeeman splitting), as encoded by the corresponding self-energy terms.
Discrepancy between theory and experiment
Theoretical tunnelling magneto-resistance ratios of 10000% have been predicted. However, the largest that have been observed are only 604%. One suggestion is that grain boundaries could be affecting the insulating properties of the MgO barrier; however, the structure of films in buried stack structures is difficult to determine. The grain boundaries may act as short circuit conduction paths through the material, reducing the resistance of the device. Recently, using new scanning transmission electron microscopy techniques, the grain boundaries within FeCoB/MgO/FeCoB MTJs have been atomically resolved. This has allowed first principles density functional theory calculations to be performed on structural units that are present in real films. Such calculations have shown that the band gap can be reduced by as much as 45%.
In addition to grain boundaries, point defects such as boron interstitials and oxygen vacancies could significantly alter the tunnelling magneto-resistance. Recent theoretical calculations have revealed that boron interstitials introduce defect states in the band gap, potentially reducing the TMR further.
These theoretical calculations have also been backed up by experimental evidence showing the nature of boron within the MgO layer between two different systems and how the TMR is different.
See also
Quantum tunneling
Magnetoresistance
Giant Magnetoresistance (GMR)
Spin-transfer torque
References
Electric and magnetic fields in matter
Spintronics
Magnetoresistance | Tunnel magnetoresistance | Physics,Chemistry,Materials_science,Engineering | 2,614 |
34,912,228 | https://en.wikipedia.org/wiki/Ian%20T.%20Baldwin | Ian Thomas Baldwin (born 1958) is an American ecologist.
Scientific career
Baldwin studied biology and chemistry at Dartmouth College in Hanover, New Hampshire, and graduated 1981 with an AB. In 1989 he graduated with a PhD in chemical ecology from Cornell University, Ithaca, New York, Section of Neurobiology and Behavior. He was an Assistant (1989), Associate (1993) and Full Professor (1996) in the Department of Biology at SUNY Buffalo. In 1996 he became the Founding Director of the Max Planck Institute for Chemical Ecology where he heads the Department of Molecular Ecology. In 1999 he was appointed Honorary Professor at Friedrich Schiller University in Jena, Germany. In 2002 he founded the International Max Planck Research School at the Max Planck Institute in Jena.
Baldwin's scientific work is devoted to understanding the traits that allow plants to survive in the real world. To achieve this, he has developed a molecular toolbox for the native tobacco, Nicotiana attenuata (coyote tobacco), and a graduate program that trains "genome-enabled field biologists" to combine genomic and molecular genetic tools with field work to understand the genes that matter for plant-herbivore, -pollinator, -plant, -microbial interactions in nature. He has been a driver behind the Open Access publication efforts of the Max Planck Society and is one of the senior editors of the open access journal eLife.
Since November 2020, the Department of Molecular Ecology is led by Acting Director Sarah O’Connor. The former Director Ian Baldwin now serves as Leader of the Research Group of a Scientific Member of the Max Planck Society (FG WiMi, Forschungsgruppe Wissenschaftliches Mitglied) and he continues his research at the Institute in this role.
Awards and honors
Presidential Young Investigator Award 1991
Silverstein-Simeone Award of the International Society of Chemical Ecology 1998
Extraordinary member of the Berlin-Brandenburg Academy of Sciences and Humanities (since 2001)
Tansley Lecture, British Ecological Society, 2009
European Research Council (ERC) Advanced Grant 2011
Elected Member of the National Academy of Sciences 2013
Elected Member of the German Academy of Sciences Leopoldina 2013
Elected Member of the European Molecular Biology Organization EMBO
International Award of the Jean-Marie Delwart Foundation 2014
Elected Fellow of the American Association for the Advancement of Science 2016
Selected publications
Schultz, J. C., Baldwin, I. T. (1982): Oak leaf quality declines in response to defoliation by Gypsy moth larvae. Science, 217, 149–151.
Karban, R., Baldwin, I. T. (1997): Induced responses to herbivory. Chicago: Univ. of Chicago Press.
Kessler, A., Baldwin, I. T. (2001): Defensive function of herbivore-induced plant volatile emissions in nature. Science, 291(5511), 2141–2144.
Kessler, A., Halitschke, R., Baldwin, I. T. (2004): Silencing the jasmonate cascade: Induced plant defenses and insect populations. Science, 305(5684), 665–668.
Baldwin, I. T., Halitschke, R., Paschold, A., von Dahl, C. C., Preston, C. A. (2006): Volatile signaling in plant-plant interactions: "Talking trees" in the genomics era. Science, 311(5762), 812–815.
Kessler, D., Gase, K., Baldwin, I. T. (2008): Field experiments with transformed plants reveal the sense of floral scents. Science, 321(5893), 1200–1202.
Kessler, D., Diezel, C., Baldwin, I. T. (2010): Changing pollinators as a means of escaping herbivores. Current Biology, 20, 237–242.
Allmann, S., Baldwin, I. T. (2010): Insects betray themselves in nature to predators by rapid isomerization of green leaf volatiles. Science, 329, 1075–1078.
Weinhold, A., Baldwin I.T. (2011): Trichome-derived O-acyl sugars are a first meal for caterpillars that tags them for predation. Proceedings of the National Academy of Sciences of the United States of America, 108(19), 7855–7859.
Kumar, P., Pandit, S. S., Steppuhn, A., Baldwin, I. T. (2014). A natural history driven, plant mediated RNAi based study reveals CYP6B46’s role in a nicotine-mediated anti-predator herbivore defense. Proceedings of the National Academy of Sciences of the United States of America, 111(4), 1245–1252.
References
External links
Webpage of the Department of Molecular Ecology at the Max Planck Institute for Chemical Ecology
Video
Video on Ian T. Baldwin's research (Latest Thinking)
1958 births
Living people
American ecologists
Cornell University alumni
Chemical ecologists
Members of the German National Academy of Sciences Leopoldina
Max Planck Institute directors
Dartmouth College alumni | Ian T. Baldwin | Chemistry | 1,071 |
2,349,829 | https://en.wikipedia.org/wiki/Leecher%20%28computing%29 | In computing and specifically in Internet slang, a leech is one who benefits, usually deliberately, from others' information or effort but does not offer anything in return, or makes only token offerings in an attempt to avoid being called a leech. In economics, this type of behavior is called "free riding" and is associated with the free rider problem. The term originated in the bulletin board system era, when it referred to users that would download files and upload nothing in return.
Depending on context, leeching does not necessarily refer to illegal use of computer resources, but often instead to greedy use according to etiquette: to wit, using too much of what is freely given without contributing a reasonable amount back to the community that provides it. The word is also used without any pejorative connotations, simply meaning to download large sets of information: for example the Usenet newsreader NewsLeecher.
The name derives from the leech, an animal that sucks blood and then tries to leave unnoticed. Other terms are used, such as "freeloader", "mooch" and "sponge", but leech is the most commonly used.
Examples
Wi-Fi leeches attach to open wireless networks without the owner's knowledge in order to access the Internet. One example of this is someone who connects to a café's free wireless service from their car in the parking lot in order to download large amounts of data. Piggybacking is a term used to describe this phenomenon.
Direct linking (or hot-linking) is a form of bandwidth leeching that occurs when placing an unauthorized linked object, often an image, from one site in a web page belonging to a second site (the leech).
In most P2P networks, leeching can be defined as downloading more data over time than one uploads to other clients, thus draining speed from the network. The term is used in a similar way for shared FTP directories. In essence, leeching is taking without giving.
Claiming credit for, or offering for sale, freely available content created and uploaded by others to the Internet (Plagiarism/Copyfraud)
Gaming
In games (whether a traditional tabletop RPG, LARPing, or even an MMORPG), the term "leech" is given to someone who avoids confrontation and sits out while another player fights, with the experience gained going to the "leecher" who avoided the fight.
In online multi-player games, "to leech" generally means that a player is present and qualifies for a reward of some sort without contributing to the team effort needed to earn that reward. Although in the past the term "leeching" was applied to a player gaining any benefit due solely to the efforts of others, it is now most often limited to players who gain experience without a meaningful contribution. However, while this usually carries negative connotations, this is not always the case; for instance, in MMOs where power leveling is possible, a higher-level player may deliberately consent to a lower-leveled player gaining experience without assisting, usually due to the danger to the lower-leveled player. In situations where the amount of assistance the lower-level player can provide is negligible (as is often the case when being power leveled, due to the disparity between target mobs and the player), a higher-level player may deliberately encourage the lower-level player to "leech" to avoid wasted time spent protecting the lower-level player that could otherwise be directed towards more quickly accomplishing the goal of earning experience points. However, this is usually an arrangement between friends or guild-mates, and unsolicited requests to "leech" experience off a higher-level player are often considered extremely rude.
In popular MMORPGs, an alternative term for a "leecher" (especially one "leeching" items, as opposed to experience) is "ninja" (referring to the notional lightning reflexes needed to claim the reward between its appearance and its retrieval by the more deserving player). This can be used as either a noun ("Player One is a loot ninja!") or a verb ("Player One keeps ninja'ing all the drops!"). However, a "ninja" may not necessarily be "leeching", in that they do sometimes contribute to the goal; "ninja" applies more to a player claiming an item unfairly, either because the group agreed to pass the item to a particular player in advance, or by taking the item before the group can determine which player gets the item fairly through whichever process the group agreed upon (usually rolling for it, which is simulated in many MMOs through a "/random" or "/roll" command).
This is different from "kill stealing" where a player uses an attack at the right moment to kill an enemy and take the experience benefit (XP), which may benefit the player or not. In some games, XP is granted to the person who deals a final shot to an enemy, or to who deals a significant amount of damage regardless of who hit it last. Kill stealing can in this fashion be applied as a reverse leeching: it prevents another player from gaining XP at all regardless of their effort.
Prevention
Since the BBS era of the 1980s and early 1990s, many systems have implemented a ratio policy, which requires the upload of a certain amount for every amount downloaded. This continues to be found in the internet era, in systems such as BitTorrent.
Wi-Fi networks can implement various authentication and access control technologies in order to prevent leeching. The most common are client MAC address authorization tables (deprecated due to insecurity), Wired Equivalent Privacy (deprecated due to insecurity), and Wi-Fi Protected Access.
Bandwidth leeching can be prevented by running an anti-leeching script on the website's server. It can automatically ban IPs that leech, or can redirect them to faulty files.
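As an illustration only — the article does not prescribe an implementation — here is a minimal referer-check sketch in Python using Flask that redirects hot-linked image requests to a placeholder file; the framework choice, host names, paths, and placeholder image are all hypothetical:

```python
# Minimal hot-link protection sketch (illustrative, not a complete solution).
from urllib.parse import urlparse

from flask import Flask, redirect, request, send_from_directory

app = Flask(__name__)
ALLOWED_HOSTS = {"example.org", "www.example.org"}  # hypothetical site hosts

@app.route("/images/<path:filename>")
def serve_image(filename):
    referer = request.headers.get("Referer", "")
    host = urlparse(referer).hostname
    # An absent Referer is usually allowed (direct visits, privacy tools).
    if host is None or host in ALLOWED_HOSTS:
        return send_from_directory("images", filename)
    # Foreign referer: serve a stand-in for the "faulty file" mentioned above.
    return redirect("/static/hotlink-blocked.png")
```

Real deployments more often express the same rule in web-server configuration (e.g., referer rules), but the logic is the same.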
P2P networks
Amongst users of the BitTorrent file distribution protocol and common P2P networks, such as the eDonkey network or Gnutella2, a leech is a user who disconnects as soon as they have a complete copy of a particular file, while minimizing or completely suppressing data upload.
However, on most BitTorrent tracker sites, the term leecher is used for all users who are not seeders (which means they do not have the complete file yet). As BitTorrent clients usually begin to upload files almost as soon as they have started to download them, such users are usually not freeloaders (people who don't upload data at all to the swarm). Therefore, this kind of leeching is considered to be a legitimate practice. Reaching an upload/download ratio of 1:1 (meaning that the user has uploaded as much as they downloaded) in a BitTorrent client is considered a minimum in the etiquette of that network. In the terminology of these BitTorrent sites, a leech becomes a seeder (a provider of the file) when they have finished downloading and continue to run the client. They will remain a seeder until the file is removed or destroyed (settings enable the torrent to stop seeding at a certain share ratio, or after X hours have passed seeding).
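A minimal sketch of how a tracker-style ratio check might look in Python; the byte counts and the 1.0 threshold mirror the etiquette described above, and the function names are invented for illustration:

```python
def share_ratio(uploaded_bytes, downloaded_bytes):
    """Upload/download ratio as tracked by many private trackers; a value
    of at least 1.0 means the user gave back as much as they took."""
    if downloaded_bytes == 0:
        return float("inf")          # nothing downloaded yet: pure seeder
    return uploaded_bytes / downloaded_bytes

def status(uploaded_bytes, downloaded_bytes, has_complete_file):
    """BitTorrent-site terminology: anyone without the complete file is a
    'leecher'; once the file is complete and still shared, a 'seeder'."""
    if has_complete_file:
        return "seeder"
    ratio = share_ratio(uploaded_bytes, downloaded_bytes)
    return "leecher (ratio %.2f)" % ratio

print(status(500 * 10**6, 2 * 10**9, has_complete_file=False))
# -> leecher (ratio 0.25)
```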
The so-called bad leechers are those running specially modified clients which avoid uploading data. This has led to the development of a multitude of technologies to ban such misbehaving clients. For example, on BitTorrent, most private trackers do keep track of the amount of data a client uploads or downloads to avoid leeching, while on real P2P networks systems like DLP (Dynamic Leecher Protection) (eMule Xtreme Mod, eDonkey network) or uploader rewarding (Gnutella2) have been brought in place. Note that BitTorrent is not a P2P network, it is only a P2P file distribution system.
Leeching is often seen as a threat to peer-to-peer sharing and as the direct opposite of the practice of seeding. Even as downloads rise, uploads continue to be supplied, although a small share of contributors accounts for most of the service the system provides.
See also
Free-rider problem
LeechModem
Lurker
Parasitism (social offense)
Tragedy of the commons
References
Computer jargon
Video game terminology
Tragedy of the commons
Leech - Meaning | Leecher (computing) | Mathematics,Technology | 1,765 |
74,122,318 | https://en.wikipedia.org/wiki/Ecology%20block | An ecology block, also known as an eco-block or ecoblock, is a type of recycled concrete block used to make retaining walls. Ecology blocks are manufactured using concrete left over from other construction processes. A cross-section of an eco-block typically measures square, with block lengths ranging from to . One block weighs between and .
Uses
Ecology blocks are marketed for construction of retaining walls; they have grooves on the top and bottom to facilitate vertical stacking. They are used for storage of bulk materials and other modular construction projects where permanent structures are not needed. They have also been used as a temporary fix for a critical road in Skagit County, Washington, that had been damaged by floods. Following the September 11 attacks, at the Hanford Site's Plutonium Finishing Plant, eco-blocks and Jersey barriers were used to create a barrier against vehicular attack. The Seattle Police Department used eco-blocks to construct walls around their East Precinct building while the Capitol Hill Organized Protest was established nearby.
Hostile architecture
In response to homelessness in Seattle, several Seattle businesses and residents have deployed eco-blocks as hostile architecture in residential areas and outside of businesses, with the intention of discouraging homeless encampments and recreational vehicle (RV) parking by homeless persons who live in RVs. City law prohibits the use of eco-blocks on city streets, but as of August 1, 2022, compared with hundreds of eco-blocks deployed in the city, only 25 property and business owners have received warnings, and none have been fined. Eco-blocks are particularly popular in industrial areas of Seattle, the only areas where RVs have been allowed to park legally for up to 72 hours at a time. The Seattle Times reported in July 2022 that "a significant portion of public parking in Georgetown has been blocked" by ecology blocks. The blocks have also impeded delivery trucks, which cannot park between them to unload goods.
Seattle eco-block purchasers were attracted to eco-blocks' low cost, about US$20 per block, and the need for special equipment to remove them. In April 2023, the city removed some eco-blocks that abutted a city park that had been popular with homeless campers, but did not remove the blocks in public streets adjacent to a nearby Fremont Brewing facility owned by Sara Nelson, a member of the Seattle City Council, and her husband Matt Lincecum.
Safety concerns
In February 2012, a 56-year-old heavy equipment operator in Washington state died after being crushed by an eco-block that was part of a wall used to subdivide a tank being used for fertilizer storage. The Washington State Department of Labor and Industries identified "Lack of training regarding the dangers of working around bulk material and ecology blocks" and "Possible destabilization of block wall due to granular material leaking between the blocks" as contributing factors to the incident, and noted that the employer created a training program about eco-block safety after the incident.
When constructing a retaining wall from eco-blocks, a stable foundation is still required. In July 2015, a 70-year-old man in Washington died after an ecology block wall under construction on a sand foundation collapsed, and his legs were crushed by a block.
See also
Jersey barrier, a similarly sized concrete block used for vehicle traffic control
References
External links
Concrete
Homelessness in the United States | Ecology block | Engineering | 684 |
36,915,115 | https://en.wikipedia.org/wiki/Norwegian%20Academy%20of%20Technological%20Sciences | The Norwegian Academy of Technological Sciences (, NTVA) is a learned society based in Trondheim, Norway.
Founded in 1955, the academy has about 500 members. It is a member of the International Council of Academies of Engineering and Technological Sciences (CAETS) and of the European Council of Applied Sciences and Engineering (Euro-CASE).
References
External links
Official site
1955 establishments in Norway
National academies of engineering
Organisations based in Trondheim
Scientific organizations established in 1955
Learned societies of Norway | Norwegian Academy of Technological Sciences | Engineering | 95 |
14,877,359 | https://en.wikipedia.org/wiki/60S%20ribosomal%20protein%20L13a | 60S ribosomal protein L13a is a protein that in humans is encoded by the RPL13A gene.
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L13P family of ribosomal proteins. It is located in the cytoplasm. Transcript variants utilizing alternative polyA signals have been observed. This gene is co-transcribed with the small nucleolar RNA genes U32, U33, U34, and U35, which are located in its second, fourth, fifth, and sixth introns, respectively. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
References
Further reading
Ribosomal proteins | 60S ribosomal protein L13a | Chemistry | 197 |
681 | https://en.wikipedia.org/wiki/Aardwolf | The aardwolf (Proteles cristatus) is an insectivorous hyaenid species, native to East and Southern Africa. Its name means "earth-wolf" in Afrikaans and Dutch. It is also called the maanhaar-jackal (Afrikaans for "mane-jackal"), termite-eating hyena and civet hyena, based on its habit of secreting substances from its anal gland, a characteristic shared with the African civet.
Unlike many of its relatives in the order Carnivora, the aardwolf does not hunt large animals. It eats insects and their larvae, mainly termites; one aardwolf can lap up as many as 300,000 termites during a single night using its long, sticky tongue. The aardwolf's tongue has adapted to be tough enough to withstand the strong bite of termites.
The aardwolf lives in the shrublands of eastern and southern Africa – open lands covered with stunted trees and shrubs. It is nocturnal, resting in burrows during the day and emerging at night to seek food.
Taxonomy
The aardwolf is generally classified as part of the hyena family Hyaenidae. However, it was formerly placed in its own family Protelidae. Early on, scientists felt that it was merely mimicking the striped hyena, which subsequently led to the creation of Protelidae. Recent studies have suggested that the aardwolf probably diverged from other hyaenids early on; how early is still unclear, as the fossil record and genetic studies disagree by 10 million years.
The aardwolf is the only surviving species in the subfamily Protelinae. There is disagreement as to whether the species is monotypic, or can be divided into subspecies. A 2021 study found the genetic differences in eastern and southern aardwolves may be pronounced enough to categorize them as species.
A 2006 molecular analysis indicates it is phylogenetically the most basal of the four extant hyaenidae species.
Etymology
The generic name proteles comes from two words both of Greek origin, protos and teleos which combined means "complete in front" based on the fact that they have five toes on their front feet and four on the rear. The specific name, cristatus, comes from Latin and means "provided with a comb", relating to their mane.
Description
The aardwolf resembles a much smaller and thinner striped hyena, with a more slender muzzle, black vertical stripes on a coat of yellowish fur, and a long, distinct mane down the midline of the neck and back. It also has one or two diagonal stripes down the fore and hindquarters and several stripes on its legs. The mane is raised during confrontations to make the aardwolf appear larger. It is missing the throat spot that others in the family have. Its lower leg (from the knee down) is all black, and its tail is bushy with a black tip.
The aardwolf is about long, excluding its bushy tail, which is about long, and stands about tall at the shoulders. An adult aardwolf weighs approximately , sometimes reaching . The aardwolves in the south of the continent tend to be smaller (about ) than the eastern version (around ). This makes the aardwolf the smallest extant member of the Hyaenidae family. The front feet have five toes each, unlike the four-toed hyena. The skull is similar in shape to those of other hyenas, though much smaller, and its cheek teeth are specialised for eating insects. It still has canines, but unlike other hyenas, these teeth are used primarily for fighting and defense. Its ears, which are large, are very similar to those of the striped hyena.
As an aardwolf ages, it will typically lose some of its teeth, though this has little impact on its feeding habits due to the softness of the insects that it eats.
Distribution and habitat
Aardwolves live in open, dry plains and bushland, avoiding mountainous areas. Due to their specific food requirements, they are found only in regions where termites of the family Hodotermitidae occur. Termites of this family depend on dead and withered grass and are most populous in heavily grazed grasslands and savannahs, including farmland. For most of the year, aardwolves spend time in shared territories consisting of up to a dozen dens, which are occupied for six weeks at a time.
There are two distinct populations: one in Southern Africa, and another in East and Northeast Africa. The species does not occur in the intermediary miombo forests.
An adult pair, along with their most-recent offspring, occupies a territory of .
Behavior and ecology
Aardwolves are shy and nocturnal, sleeping in burrows by day. They will, on occasion during the winter, become diurnal feeders. This happens during the coldest periods as they then stay in at night to conserve heat.
They are primarily solitary animals, though during mating season they form monogamous pairs which occupy a territory with their young. If their territory is infringed upon by another aardwolf, they will chase the intruder away for up to or to the border. If the intruder is caught, which rarely happens, a fight will occur, which is accompanied by soft clucking, hoarse barking, and a type of roar. The majority of incursions occur during mating season, when they can occur once or twice per week. When food is scarce, the stringent territorial system may be abandoned and as many as three pairs may occupy a single territory.
The territory is marked by both sexes, as they both have developed anal glands from which they extrude a black substance that is smeared on rocks or grass stalks in -long streaks. Aardwolves also have scent glands on the forefoot and penile pad. They often mark near termite mounds within their territory every 20 minutes or so. If they are patrolling their territorial boundaries, the marking frequency increases drastically, to once every . At this rate, an individual may mark 60 marks per hour, and upwards of 200 per night.
An aardwolf pair's territory may have up to 10 dens, and numerous middens where they dig small holes and bury their feces with sand. Their dens are usually abandoned aardvark, springhare, or porcupine dens, or on occasion they are crevices in rocks. They will also dig their own dens, or enlarge dens started by springhares. They typically will only use one or two dens at a time, rotating through all of their dens every six months. During the summer, they may rest outside their den during the night and sleep underground during the heat of the day.
Aardwolves are not fast runners nor are they particularly adept at fighting off predators. Therefore, when threatened, the aardwolf may attempt to mislead its foe by doubling back on its tracks. If confronted, it may raise its mane in an attempt to appear more menacing. It also emits a foul-smelling liquid from its anal glands.
Feeding
The aardwolf feeds primarily on termites and more specifically on Trinervitermes. This genus of termites has different species throughout the aardwolf's range. In East Africa, they eat Trinervitermes bettonianus, in central Africa, they eat Trinervitermes rhodesiensis, and in southern Africa, they eat T. trinervoides. Their technique consists of licking them off the ground as opposed to the aardvark, which digs into the mound. They locate their food by sound and also from the scent secreted by the soldier termites. An aardwolf may consume up to 250,000 termites per night using its long, broad, sticky tongue.
They do not destroy the termite mound or consume the entire colony, thus ensuring that the termites can rebuild and provide a continuous supply of food. They often memorize the location of such nests and return to them every few months. During certain seasonal events, such as the onset of the rainy season and the cold of midwinter, the primary termites become scarce, so the need for other foods becomes pronounced. During these times, the southern aardwolf will seek out Hodotermes mossambicus, a type of harvester termite active in the afternoon, which explains some of their diurnal behavior in the winter. The eastern aardwolf, during the rainy season, subsists on termites from the genera Odontotermes and Macrotermes. They are also known to feed on other insects and larvae, and, some sources mention, occasionally eggs, small mammals and birds, but these constitute a very small percentage of their total diet. They use their wide tongues to lap surface foraging termites off of the ground and consume large quantities of sand in the process, which aids in digestion in the absence of teeth to break down their food.
Unlike other hyenas, aardwolves do not scavenge or kill larger animals. Contrary to popular myths, aardwolves do not eat carrion, and if they are seen eating while hunched over a dead carcass, they are actually eating larvae and beetles. Also, contrary to some sources, they do not like meat, unless it is finely ground or cooked for them. The adult aardwolf was formerly assumed to forage in small groups, but more recent research has shown that they are primarily solitary foragers, necessary because of the scarcity of their insect prey. Their primary source, Trinervitermes, forages in small but dense patches of . While foraging, the aardwolf can cover about per hour, which translates to per summer night and per winter night.
Breeding
The breeding season varies depending on location, but normally takes place during autumn or spring. In South Africa, breeding occurs in early July. During the breeding season, unpaired male aardwolves search their own territory, as well as others, for a female to mate with. Dominant males also mate opportunistically with the females of less dominant neighboring aardwolves, which can result in conflict between rival males. Dominant males go a step further: as the breeding season approaches, they make increasingly frequent incursions onto weaker males' territories. As the female comes into oestrus, they also scent-mark ("paste") inside the other territories, sometimes doing so more in rivals' territories than in their own. Females will also, when given the opportunity, mate with the dominant male, which increases the chances of the dominant male guarding "his" cubs with her. Copulation lasts between 1 and 4.5 hours.
Gestation lasts between 89 and 92 days, producing two to five cubs (most often two or three) during the rainy season (October–December), when termites are more active. They are born with their eyes open, but initially are helpless, and weigh around . The first six to eight weeks are spent in the den with their parents. The male may spend up to six hours a night watching over the cubs while the mother is out looking for food. After three months, they begin supervised foraging, and by four months are normally independent, though they often share a den with their mother until the next breeding season. By the time the next set of cubs is born, the older cubs have moved on. Aardwolves generally achieve sexual maturity at one and a half to two years of age.
Conservation
The aardwolf has not seen decreasing numbers and is relatively widespread throughout eastern Africa. They are not common throughout their range, as they maintain a density of no more than 1 per square kilometer, if food is abundant. Because of these factors, the IUCN has rated the aardwolf as least concern. In some areas, they are persecuted because of the mistaken belief that they prey on livestock; however, they are actually beneficial to the farmers because they eat termites that are detrimental. In other areas, the farmers have recognized this, but they are still killed, on occasion, for their fur. Dogs and insecticides are also common killers of the aardwolf.
In captivity
Frankfurt Zoo in Germany was home to the oldest recorded aardwolf in captivity at 18 years and 11 months.
Notes
References
Sources
Further reading
External links
Animal Diversity Web
IUCN Hyaenidae Specialist Group Aardwolf pages on hyaenidae.org
Cam footage from the Namib desert https://m.youtube.com/watch?v=lRevqS6Pxgg
Mammals described in 1783
Carnivorans of Africa
Hyenas
Mammals of Southern Africa
Fauna of East Africa
Myrmecophagous mammals
Taxa named by Anders Sparrman
Nocturnal animals | Aardwolf | Biology | 2,670 |
2,736,939 | https://en.wikipedia.org/wiki/Random%20number%20generation | Random number generation is a process by which, often by means of a random number generator (RNG), a sequence of numbers or symbols is generated that cannot be reasonably predicted better than by random chance. This means that the particular outcome sequence will contain some patterns detectable in hindsight but impossible to foresee. True random number generators can be hardware random-number generators (HRNGs), wherein each generation is a function of the current value of a physical environment's attribute that is constantly changing in a manner that is practically impossible to model. This would be in contrast to so-called "random number generations" done by pseudorandom number generators (PRNGs), which generate numbers that only look random but are in fact predetermined—these generations can be reproduced simply by knowing the state of the PRNG.
Various applications of randomness have led to the development of different methods for generating random data. Some of these have existed since ancient times, including well-known examples like the rolling of dice, coin flipping, the shuffling of playing cards, the use of yarrow stalks (for divination) in the I Ching, as well as countless other techniques. Because of the mechanical nature of these techniques, generating large quantities of sufficiently random numbers (important in statistics) required much work and time. Thus, results would sometimes be collected and distributed as random number tables.
Several computational methods for pseudorandom number generation exist. All fall short of the goal of true randomness, although they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). This generally makes them unusable for applications such as cryptography. However, carefully designed cryptographically secure pseudorandom number generators (CSPRNGS) also exist, with special features specifically designed for use in cryptography.
Practical applications and uses
Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable. Generally, in applications having unpredictability as the paramount feature, such as in security applications, hardware generators are generally preferred over pseudorandom algorithms, where feasible.
Pseudorandom number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same random seed. They are also used in cryptography – so long as the seed is secret. The sender and receiver can generate the same set of numbers automatically to use as keys.
The generation of pseudorandom numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of apparent randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a "random quote of the day", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of randomness are used in hash algorithms and in creating amortized searching and sorting algorithms.
Some applications that appear at first sight to be suitable for randomization are in fact not quite so simple. For instance, a system that "randomly" selects music tracks for a background music system must only appear random, and may even have ways to control the selection of music: a truly random system would have no restriction on the same item appearing two or three times in succession.
True vs. pseudo-random numbers
There are two principal methods used to generate random numbers. The first method measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. Example sources include measuring atmospheric noise, thermal noise, and other external electromagnetic and quantum phenomena. For example, cosmic background radiation or radioactive decay as measured over short timescales represent sources of natural entropy (as a measure of unpredictability or surprise of the number generation process).
The speed at which entropy can be obtained from natural sources is dependent on the underlying physical phenomena being measured. Thus, sources of naturally occurring true entropy are said to be blocking – they are rate-limited until enough entropy is harvested to meet the demand. On some Unix-like systems, including most Linux distributions, the pseudo device file /dev/random will block until sufficient entropy is harvested from the environment. Due to this blocking behavior, large bulk reads from /dev/random, such as filling a hard disk drive with random bits, can often be slow on systems that use this type of entropy source.
The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed value or key. As a result, the entire seemingly random sequence can be reproduced if the seed value is known. This type of random number generator is often called a pseudorandom number generator. This type of generator typically does not rely on sources of naturally occurring entropy, though it may be periodically seeded by natural sources. This generator type is non-blocking, so they are not rate-limited by an external event, making large bulk reads a possibility.
Some systems take a hybrid approach, providing randomness harvested from natural sources when available, and falling back to periodically re-seeded software-based cryptographically secure pseudorandom number generators (CSPRNGs). The fallback occurs when the desired read rate of randomness exceeds the ability of the natural harvesting approach to keep up with the demand. This approach avoids the rate-limited blocking behavior of random number generators based on slower and purely environmental methods.
While a pseudorandom number generator based solely on deterministic logic can never be regarded as a true random number source in the purest sense of the word, in practice they are generally sufficient even for demanding security-critical applications. Carefully designed and implemented pseudorandom number generators can be certified for security-critical cryptographic purposes, as is the case with the yarrow algorithm and fortuna. The former is the basis of the /dev/random source of entropy on FreeBSD, AIX, macOS, NetBSD, and others. OpenBSD uses a pseudorandom number algorithm known as arc4random.
Generation methods
Physical methods
The earliest methods for generating random numbers, such as dice, coin flipping and roulette wheels, are still used today, mainly in games and gambling as they tend to be too slow for most applications in statistics and cryptography.
A hardware random number generator can be based on an essentially random atomic or subatomic physical phenomenon whose unpredictability can be traced to the laws of quantum mechanics. Sources of entropy include radioactive decay, thermal noise, shot noise, avalanche noise in Zener diodes, clock drift, the timing of actual movements of a hard disk read-write head, and radio noise. However, physical phenomena and tools used to measure them generally feature asymmetries and systematic biases that make their outcomes not uniformly random. A randomness extractor, such as a cryptographic hash function, can be used to approach a uniform distribution of bits from a non-uniformly random source, though at a lower bit rate.
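As an illustration of such post-processing, here is a minimal Python sketch of von Neumann's classic debiasing extractor — a simpler alternative to the hash-based extractor mentioned above (the biased input bits are invented for the example):

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: read independent-but-biased input bits in
    pairs, emit the first bit of each unequal pair ('01' -> 0, '10' -> 1),
    and discard equal pairs. If the input bits are independent with a
    fixed bias, the output bits are exactly unbiased, at a reduced rate."""
    output = []
    for i in range(0, len(bits) - 1, 2):
        first, second = bits[i], bits[i + 1]
        if first != second:
            output.append(first)
    return output

biased_sample = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1]  # illustrative 70%-ones source
print(von_neumann_extract(biased_sample))        # -> [1, 1, 0]
```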
The appearance of wideband photonic entropy sources, such as optical chaos and amplified spontaneous emission noise, greatly aid the development of the physical random number generator. Among them, optical chaos has a high potential to physically produce high-speed random numbers due to its high bandwidth and large amplitude. A prototype of a high-speed, real-time physical random bit generator based on a chaotic laser was built in 2013.
Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measured radioactive decay with Geiger–Muller tubes, while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio.
Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators.
Computational methods
Most computer-generated random numbers use PRNGs, which are algorithms that can automatically create long runs of numbers with good random properties but eventually the sequence repeats (or the memory usage grows without bound). These random numbers are fine in many situations but are not as random as numbers generated from electromagnetic atmospheric noise used as a source of entropy. The series of values generated by such algorithms is generally determined by a fixed number called a seed. One of the most common PRNGs is the linear congruential generator, which uses the recurrence

$$X_{n+1} = (a X_n + c) \bmod m$$

to generate numbers, where a, c and m are large integers and $X_{n+1}$ is the next in a series of pseudorandom numbers. The maximum number of values the formula can produce is the modulus, m. The recurrence relation can be extended to matrices to have much longer periods and better statistical properties.

To avoid certain non-random properties of a single linear congruential generator, several such random number generators with slightly different values of the multiplier coefficient a can be used in parallel, with a master random number generator that selects from among the several different generators.
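As a concrete illustration, here is a minimal Python sketch of a linear congruential generator; the parameter values a = 16807, c = 0, m = 2³¹ − 1 are the well-known "minimal standard" choice, used here only as an example:

```python
class LinearCongruentialGenerator:
    """Minimal LCG sketch: X_{n+1} = (a * X_n + c) mod m."""

    def __init__(self, seed, a=16807, c=0, m=2**31 - 1):
        # The whole sequence is determined by the seed: the generator
        # is deterministic, hence "pseudorandom".
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next_int(self):
        # One step of the recurrence; the period can never exceed m.
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

rng = LinearCongruentialGenerator(seed=42)
print([rng.next_int() for _ in range(3)])
```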
A simple pen-and-paper method for generating random numbers is the so-called middle-square method suggested by John von Neumann. While simple to implement, its output is of poor quality. It has a very short period and severe weaknesses, such as the output sequence almost always converging to zero. A recent innovation is to combine the middle square with a Weyl sequence. This method produces high-quality output through a long period.
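A Python sketch of both variants — the classic middle square and the middle square combined with a Weyl sequence (the 64-bit Weyl increment below is the odd constant from Widynski's proposal; treat the details as illustrative rather than canonical):

```python
MASK64 = (1 << 64) - 1

def middle_square(seed, n, digits=4):
    """Von Neumann's classic middle-square: square the state and keep the
    middle digits. Output quality is poor; sequences often collapse to 0."""
    out, state = [], seed
    for _ in range(n):
        squared = str(state * state).zfill(2 * digits)
        mid = len(squared) // 2
        state = int(squared[mid - digits // 2 : mid + digits // 2])
        out.append(state)
    return out

def middle_square_weyl(seed, n, weyl_step=0xB5AD4ECEDA1CE2A9):
    """Middle square combined with a Weyl sequence (after Widynski): the
    ever-advancing Weyl counter w prevents the collapse to zero and the
    short cycles that plague the plain middle-square method."""
    out, x, w = [], seed, 0
    for _ in range(n):
        x = (x * x) & MASK64                    # square, keep 64 bits
        w = (w + weyl_step) & MASK64            # advance the Weyl sequence
        x = (x + w) & MASK64
        x = ((x >> 32) | (x << 32)) & MASK64    # swap halves: the "middle"
        out.append(x & 0xFFFFFFFF)              # emit 32 output bits
    return out

print(middle_square(1234, 5))
print(middle_square_weyl(1234, 5))
```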
Most computer programming languages include functions or library routines that provide random number generators. They are often designed to provide a random byte or word, or a floating point number uniformly distributed between 0 and 1.
The quality, i.e. the randomness, of such library functions varies widely, from completely predictable output to cryptographically secure. The default random number generator in many languages, including Python, Ruby, R, IDL and PHP, is based on the Mersenne Twister algorithm and is not sufficient for cryptography purposes, as is explicitly stated in the language documentation. Such library functions often have poor statistical properties and some will repeat patterns after only tens of thousands of trials. They are often initialized using a computer's real-time clock as the seed, since such a clock is 64 bit and measures in nanoseconds, far beyond a person's precision. These functions may provide enough randomness for certain tasks (for example, video games) but are unsuitable where high-quality randomness is required, such as in cryptography applications or statistics.
Much higher quality random number sources are available on most operating systems; for example /dev/random on various BSD flavors, Linux, Mac OS X, IRIX, and Solaris, or CryptGenRandom for Microsoft Windows. Most programming languages, including those mentioned above, provide a means to access these higher-quality sources.
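In Python, for example, the operating system's entropy source can be reached through `os.urandom` or the `secrets` module instead of the default Mersenne Twister in `random` (a minimal sketch):

```python
import os
import secrets

# 16 bytes straight from the OS CSPRNG (getrandom(), CryptGenRandom, etc.)
key_material = os.urandom(16)

# Convenience wrappers over the same source, suitable for tokens and keys
token = secrets.token_hex(16)       # 32 hexadecimal characters
roll = secrets.randbelow(6) + 1     # unbiased die roll, 1..6

print(key_material.hex(), token, roll)
```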
By humans
Random number generation may also be performed by humans, in the form of collecting various inputs from end users and using them as a randomization source. However, most studies find that human subjects have some degree of non-randomness when attempting to produce a random sequence of e.g. digits or letters. They may alternate too much between choices when compared to a good random generator; thus, this approach is not widely used. However, for the very reason that humans perform poorly in this task, human random number generation can be used as a tool to gain insights into brain functions otherwise not accessible.
Post-processing and statistical checks
Even given a source of plausible random numbers (perhaps from a quantum mechanically based hardware generator), obtaining numbers which are completely unbiased takes care. In addition, behavior of these generators often changes with temperature, power supply voltage, the age of the device, or other outside interference.
Generated random numbers are sometimes subjected to statistical tests before use to ensure that the underlying source is still working, and then post-processed to improve their statistical properties. An example would be the TRNG9803 hardware random number generator, which uses an entropy measurement as a hardware test, and then post-processes the random sequence with a shift register stream cipher. It is generally hard to use statistical tests to validate the generated random numbers. Wang and Nicol proposed a distance-based statistical testing technique that is used to identify the weaknesses of several random generators. Li and Wang proposed a method of testing random numbers based on laser chaotic entropy sources using Brownian motion properties.
Statistical tests are also used to give confidence that the post-processed final output from a random number generator is truly unbiased, with numerous randomness test suites being developed.
Other considerations
Reshaping the distribution
Uniform distributions
Most random number generators natively work with integers or individual bits, so an extra step is required to arrive at the canonical uniform distribution between 0 and 1. The implementation is not as trivial as dividing the integer by its maximum possible value. Specifically:
The integer used in the transformation must provide enough bits for the intended precision.
The nature of floating-point math itself means there exists more precision the closer the number is to zero. This extra precision is usually not used due to the sheer number of bits required.
Rounding error in division may bias the result. At worst, a supposedly excluded bound may be drawn contrary to expectations based on real-number math.
The mainstream algorithm, used by OpenJDK, Rust, and NumPy, is described in a proposal for C++'s STL. It does not use the extra precision and suffers from bias only in the last bit due to round-to-even. Other numeric concerns are warranted when shifting this canonical uniform distribution to a different range. A proposed method for the Swift programming language claims to use the full precision everywhere.
Uniformly distributed integers are commonly used in algorithms such as the Fisher–Yates shuffle. Again, a naive implementation may induce a modulo bias into the result, so more involved algorithms must be used. A method that nearly never performs division was described in 2018 by Daniel Lemire, with the current state-of-the-art being the arithmetic encoding-inspired 2021 "optimal algorithm" by Stephen Canon of Apple Inc.
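A Python sketch of two ways to draw a uniform integer in [0, n) without modulo bias — plain rejection sampling, and a simplified rendering of Lemire's multiply-and-shift method (Python's big integers stand in for the 32×32→64-bit multiply; details are illustrative):

```python
import random

def rand32():
    # Stand-in 32-bit source; any uniform 32-bit generator works here.
    return random.getrandbits(32)

def bounded_rejection(n):
    """Unbiased integer in [0, n): reject draws above the largest
    multiple of n that fits in 32 bits, then reduce."""
    limit = (2**32 // n) * n
    while True:
        x = rand32()
        if x < limit:
            return x % n

def bounded_lemire(n):
    """Lemire's nearly divisionless method: a 64-bit product maps the
    draw into [0, n); only the rare biased low fragment is rejected."""
    m = rand32() * n                 # 64-bit product
    low = m & 0xFFFFFFFF
    if low < n:                      # slow path, probability < n / 2**32
        threshold = (2**32 - n) % n  # equals 2**32 mod n
        while low < threshold:
            m = rand32() * n
            low = m & 0xFFFFFFFF
    return m >> 32

print([bounded_rejection(6) + 1 for _ in range(5)])  # unbiased dice rolls
print([bounded_lemire(6) + 1 for _ in range(5)])
```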
Most 0 to 1 RNGs include 0 but exclude 1, while others include or exclude both.
Other distributions
Given a source of uniform random numbers, there are a couple of methods to create a new random source that corresponds to a probability density function. One method called the inversion method, involves integrating up to an area greater than or equal to the random number (which should be generated between 0 and 1 for proper distributions). A second method called the acceptance-rejection method, involves choosing an x and y value and testing whether the function of x is greater than the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the algorithm tries again.
As an example for rejection sampling, to generate a pair of statistically independent standard normally distributed random numbers (x, y), one may first generate the polar coordinates (r, θ), where $r^2 \sim \chi^2_2$ and $\theta \sim \mathrm{Uniform}(0, 2\pi)$ (see Box–Muller transform).
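A Python sketch of both ideas: the inversion method applied to the exponential distribution (whose CDF has a closed-form inverse), and the basic Box–Muller construction of the normal pair described above:

```python
import math
import random

def exponential_inversion(rate):
    """Inversion method: solve F(x) = u for u uniform in [0, 1).
    For the exponential CDF F(x) = 1 - exp(-rate*x), the inverse is
    x = -ln(1 - u) / rate."""
    u = random.random()
    return -math.log(1.0 - u) / rate

def standard_normal_pair():
    """Box-Muller: turn two independent uniforms into two independent
    standard normal deviates via polar coordinates (r, theta)."""
    u1 = random.random()
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))  # r^2 ~ chi-squared, 2 d.o.f.
    theta = 2.0 * math.pi * u2                # theta ~ Uniform(0, 2*pi)
    return r * math.cos(theta), r * math.sin(theta)

print(exponential_inversion(1.5))
print(standard_normal_pair())
```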
Whitening
The outputs of multiple independent RNGs can be combined (for example, using a bit-wise XOR operation) to provide a combined RNG at least as good as the best RNG used. This is referred to as software whitening.
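A minimal Python sketch of such whitening, XOR-combining bytes from two independent sources; here the OS generator merely stands in for a hardware source, so the pairing is illustrative:

```python
import os
import random

def xor_combine(stream_a, stream_b):
    """XOR two equal-length byte strings from independent generators; the
    result is at least as unpredictable as the stronger input stream."""
    return bytes(a ^ b for a, b in zip(stream_a, stream_b))

hw_bytes = os.urandom(8)                                # hardware stand-in
prng_bytes = random.getrandbits(64).to_bytes(8, "big")  # software PRNG
print(xor_combine(hw_bytes, prng_bytes).hex())
```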
Computational and hardware random number generators are sometimes combined to reflect the benefits of both kinds. Computational random number generators can typically generate pseudorandom numbers much faster than physical generators, while physical generators can generate true randomness.
Low-discrepancy sequences as an alternative
Some computations making use of a random number generator can be summarized as the computation of a total or average value, such as the computation of integrals by the Monte Carlo method. For such problems, it may be possible to find a more accurate solution by the use of so-called low-discrepancy sequences, also called quasirandom numbers. Such sequences have a definite pattern that fills in gaps evenly, qualitatively speaking; a truly random sequence may, and usually does, leave larger gaps.
Activities and demonstrations
The following sites make available random number samples:
The SOCR resource pages contain a number of hands-on interactive activities and demonstrations of random number generation using Java applets.
The Quantum Optics Group at the ANU generates random numbers sourced from quantum vacuum. Samples of random numbers are available at their quantum random number generator research page.
Random.org makes available random numbers that are sourced from the randomness of atmospheric noise.
The Quantum Random Bit Generator Service at the Ruđer Bošković Institute harvests randomness from the quantum process of photonic emission in semiconductors. They supply a variety of ways of fetching the data, including libraries for several programming languages.
The Group at the Taiyuan University of Technology generates random numbers sourced from a chaotic laser. Samples of random numbers are available at their physical random number generator service.
Backdoors
Since much cryptography depends on a cryptographically secure random number generator for key and cryptographic nonce generation, if a random number generator can be made predictable, it can be used as a backdoor by an attacker to break the encryption.
The NSA is reported to have inserted a backdoor into the NIST certified cryptographically secure pseudorandom number generator Dual EC DRBG. If for example an SSL connection is created using this random number generator, then according to Matthew Green it would allow NSA to determine the state of the random number generator, and thereby eventually be able to read all data sent over the SSL connection. Even though it was apparent that Dual_EC_DRBG was a very poor and possibly backdoored pseudorandom number generator long before the NSA backdoor was confirmed in 2013, it had seen significant usage in practice until 2013, for example by the prominent security company RSA Security. There have subsequently been accusations that RSA Security knowingly inserted a NSA backdoor into its products, possibly as part of the Bullrun program. RSA has denied knowingly inserting a backdoor into its products.
It has also been theorized that hardware RNGs could be secretly modified to have less entropy than stated, which would make encryption using the hardware RNG susceptible to attack. One such method that has been published works by modifying the dopant mask of the chip, which would be undetectable to optical reverse-engineering. For example, for random number generation in Linux, it is seen as unacceptable to use Intel's RDRAND hardware RNG without mixing in the RDRAND output with other sources of entropy to counteract any backdoors in the hardware RNG, especially after the revelation of the NSA Bullrun program.
In 2010, a U.S. lottery draw was rigged by the information security director of the Multi-State Lottery Association (MUSL), who surreptitiously installed backdoor malware on the MUSL's secure RNG computer during routine maintenance. During the hacks the man won a total amount of $16,500,000 over multiple years.
See also
Flipism
League of entropy
List of random number generators
PP (complexity)
Procedural generation
Randomized algorithm
Random password generator
Random variable, contains a chance-dependent value
References
Further reading
NIST SP800-90A, B, C series on random number generation
External links
RANDOM.ORG True Random Number Service
Quantum random number generator at ANU
jRand a Java-based framework for the generation of simulation sequences, including pseudorandom sequences of numbers
Random number generators in NAG Fortran Library
Randomness Beacon at NIST, broadcasting full entropy bit-strings in blocks of 512 bits every 60 seconds. Designed to provide unpredictability, autonomy, and consistency.
A system call for random numbers: getrandom(), a LWN.net article describing a dedicated Linux system call
Statistical Properties of Pseudo Random Sequences and Experiments with PHP and Debian OpenSSL
Random Sequence Generator based on Avalanche Noise
Information theory | Random number generation | Mathematics,Technology,Engineering | 4,154 |
2,598,499 | https://en.wikipedia.org/wiki/Psychologism | Psychologism is a family of philosophical positions, according to which certain psychological facts, laws, or entities play a central role in grounding or explaining certain non-psychological facts, laws, or entities. The word was coined by Johann Eduard Erdmann as Psychologismus, being translated into English as psychologism.
Definition
The Oxford English Dictionary defines psychologism as: "The view or doctrine that a theory of psychology or ideas forms the basis of an account of metaphysics, epistemology, or meaning; (sometimes) spec. the explanation or derivation of mathematical or logical laws in terms of psychological facts." Psychologism in epistemology, the idea that its problems "can be solved satisfactorily by the psychological study of the development of mental processes", was argued in John Locke's An Essay Concerning Human Understanding (1690).
Other forms of psychologism are logical psychologism and mathematical psychologism. Logical psychologism is a position in logic (or the philosophy of logic) according to which logical laws and mathematical laws are grounded in, derived from, explained or exhausted by psychological facts or laws. Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts or laws.
Viewpoints
John Stuart Mill was accused by Edmund Husserl of being an advocate of a type of logical psychologism, although this may not have been the case. So were many nineteenth-century German philosophers such as Christoph von Sigwart, Benno Erdmann, Theodor Lipps, Gerardus Heymans, Wilhelm Jerusalem, and Theodor Elsenhans, as well as a number of psychologists, past and present (e.g., Wilhelm Wundt and Gustave Le Bon).
Psychologism was notably criticized by Gottlob Frege in his anti-psychologistic work The Foundations of Arithmetic, and many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. Frege's arguments were largely ignored, while Husserl's were widely discussed.
In "Psychologism and Behaviorism", Ned Block describes psychologism in the philosophy of mind as the view that "whether behavior is intelligent behavior depends on the character of the internal information processing that produces it." This is in contrast to a behavioral view which would state that intelligence can be ascribed to a being solely via observing its behavior. This latter type of behavioral view is strongly associated with the Turing test.
See also
Antipsychologism
Blockhead argument
Naturalized epistemology
References
External links
Husserl's Criticism of Psychologism. Link broken, page preserved most recently from October 22, 2009 at Internet Archive: Eprint. From Diwatao, (apparently former) online journal of the philosophy department of San Beda College, Manila, the Philippines.
Metatheory
Philosophy of mathematics
Theories of deduction | Psychologism | Mathematics | 638 |
3,983,826 | https://en.wikipedia.org/wiki/Omega%20language | In formal language theory within theoretical computer science, an infinite word is an infinite-length sequence (specifically, an ω-length sequence) of symbols, and an ω-language is a set of infinite words. Here, ω refers to the first infinite ordinal number, modeling a set of natural numbers.
Formal definition
Let Σ be a set of symbols (not necessarily finite). Following the standard definition from formal language theory, Σ* is the set of all finite words over Σ. Every finite word has a length, which is a natural number. Given a word w of length n, w can be viewed as a function from the set {0,1,...,n−1} to Σ, with the value at i giving the symbol at position i. The infinite words, or ω-words, can likewise be viewed as functions from ω (the natural numbers) to Σ. The set of all infinite words over Σ is denoted Σω. The set of all finite and infinite words over Σ is sometimes written Σ∞ or Σ≤ω.
Thus an ω-language L over Σ is a subset of Σω.
Operations
Some common operations defined on ω-languages are:
Intersection and union Given ω-languages L and M, both $L \cap M$ and $L \cup M$ are ω-languages.
Left concatenation Let L be an ω-language, and K be a language of finite words only. Then K can be concatenated on the left, and only on the left, to L to yield the new ω-language KL.
Omega (infinite iteration) As the notation hints, the operation is the infinite version of the Kleene star operator on finite-length languages. Given a formal language L, Lω is the ω-language of all infinite sequences of words from L; in the functional view, of all functions $\mathbb{N} \to L$.
Prefixes Let w be an ω-word. Then the formal language Pref(w) contains every finite prefix of w.
Limit Given a finite-length language L, an ω-word w is in the limit of L if and only if $\mathrm{Pref}(w) \cap L$ is an infinite set. In other words, for an arbitrarily large natural number n, it is always possible to choose some word in L, whose length is greater than n, and which is a prefix of w. The limit operation on L can be written Lδ.
Distance between ω-words
The set Σω can be made into a metric space by defining the metric $d : \Sigma^\omega \times \Sigma^\omega \to \mathbb{R}$ as:

$$d(u,v) = \inf \{\, 2^{-|x|} : x \in \Sigma^*,\ x \text{ is a prefix of both } u \text{ and } v \,\}$$

where |x| is interpreted as "the length of x" (number of symbols in x), and inf is the infimum over sets of real numbers. If $u = v$ then there is no longest common prefix x, and so $d(u,v) = 0$. Symmetry is clear. The triangle inequality follows from the fact that if w and v have a maximal shared prefix of length m, and v and u have a maximal shared prefix of length n, then the first $\min(m,n)$ characters of w and u must be the same, so $d(w,u) \le 2^{-\min(m,n)} = \max(d(w,v), d(v,u)) \le d(w,v) + d(v,u)$. Hence d is a metric.
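For example (the two words are chosen purely for illustration), if u = abaaa⋯ and v = abbaa⋯, their common prefixes are ε, a, and ab, so

```latex
d(u,v) = \inf\{2^{-0},\, 2^{-1},\, 2^{-2}\} = 2^{-2} = \tfrac{1}{4}.
```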
Important subclasses
The most widely used subclass of the ω-languages is the set of ω-regular languages, which enjoy the useful property of being recognizable by Büchi automata. Thus the decision problem of ω-regular language membership is decidable using a Büchi automaton, and fairly straightforward to compute.
If the language Σ is the power set of a set (called the "atomic propositions") then the ω-language is a linear time property, which are studied in model checking.
Bibliography
Perrin, D. and Pin, J.-E. "Infinite Words: Automata, Semigroups, Logic and Games". Pure and Applied Mathematics Vol 141, Elsevier, 2004.
Staiger, L. "ω-Languages". In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, Volume 3, pages 339-387. Springer-Verlag, Berlin, 1997.
Thomas, W. "Automata on Infinite Objects". In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics, pages 133-192. Elsevier Science Publishers, Amsterdam, 1990.
Theory of computation
Formal languages | Omega language | Mathematics | 845 |
43,730,804 | https://en.wikipedia.org/wiki/Sciaky%2C%20Inc. | Sciaky, Inc. is an American manufacturer of metal 3D printing systems and industrial welding systems, founded in 1939 and headquartered in Chicago, Illinois. It specializes in electron beam welding systems and services for aerospace manufacturers.
In 2009, Sciaky entered the 3D Printing field with its electron beam additive manufacturing (EBAM) process for large metal parts and applications. In 2011, this technology was selected to produce titanium components for the F-35 Fighter Jet and, later, satellite propellant tanks. Sciaky's EBAM systems became available for commercial purchase in September 2014. Sciaky is a subsidiary of manufacturing and repair company Phillips Service Industries, Inc.
History
1930s
Sciaky Brothers, Inc. is founded in 1939.
1940s
Sciaky is a key supplier of resistance welding systems used to make warplanes for the U.S. military during World War II.
1950s
Sciaky produces its first Electron Beam (EB) welding system in 1957.
1960s
Sciaky becomes a major supplier of EB welding systems used to make F-14 jets in 1969.
1970s
DEC PDP and Data General Nova mini-computer based weld control systems
1980s
DG Eclipse mini-computer based MarkVII weld control system.
Acquired by Allegheny International in 1982.
Dual VME M68000 based W2000 weld control system.
Acquired by Ferranti International in 1988.
1990s
Phillips Service Industries, Inc. acquires Sciaky in 1994.
2000s
Sciaky begins research on a new manufacturing process called Electron Beam Free Form Fabrication (EBFFF) in 2000.
Single VME x86 board W20x0 weld control system
In 2007, Sciaky earns a contract with the National Aeronautics and Space Administration's (NASA) Langley Research Center to create a new EB gun system in the U.S. incorporating the EBFFF process, to be tested on a microgravity research aircraft and in space. NASA engineers assisted by providing supporting hardware for the gun.
In 2009, Sciaky launches its Electron Beam Additive Manufacturing process as a service-only option.
2010s
In 2011, Sciaky was selected by the Department of Defense (DOD), for the Mentor-Protege Program by Lockheed Martin Aeronautics with the focus of this agreement being the additive manufacturing of titanium structural components for Lockheed Martin's F-35 aircraft program.
In 2012, Sciaky entered a partnership with Penn State University, via DARPA (Defense Advanced Research Projects Agency) funding, to advance Direct Digital Manufacturing technology (DDM) with the goal of advancing and deploying DDM technology for highly engineered and critical metallic systems to the Department of Defense (DOD) and U.S. industry.
In 2014, Sciaky begins selling its EBAM systems on the open market.
As of 2019, the company had four EBAM systems: EBAM 300, 300, 150, and 110.
2020s
In 2020, Sciaky deposited more than 12,000 lbs. of titanium with its EBAM systems.
Metal 3D printing system
The company's EBAM process relies on a wire-based directed energy deposition (DED) process. The systems can print parts from 8 inches to 19 feet long and can deposit up to 25 lbs. of metal per hour. The system can be used with titanium, tantalum, tungsten, Inconel, niobium, copper-nickel, aluminum, molybdenum, zirconium alloy, and stainless steel. Sciaky's EBAM systems use closed-loop, real-time adaptive controls that self-adjust the metal deposition.
See also
List of 3D printer manufacturers
References
Metal companies based in Illinois
Manufacturing companies based in Chicago
1939 establishments in Illinois
Manufacturing companies established in 1939
3D printer companies
Welding | Sciaky, Inc. | Engineering | 773 |
4,472,133 | https://en.wikipedia.org/wiki/Chamfered%20dodecahedron | In geometry, the chamfered dodecahedron is a convex polyhedron with 80 vertices, 120 edges, and 42 faces: 30 hexagons and 12 pentagons. It is constructed as a chamfer (edge-truncation) of a regular dodecahedron. The pentagons are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the pentakis icosidodecahedron.
It is also called a truncated rhombic triacontahedron, constructed as a truncation of the rhombic triacontahedron. It can more accurately be called an order-5 truncated rhombic triacontahedron because only the order-5 vertices are truncated.
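As a quick consistency check on these counts (a worked verification, not a claim from the source), the 12 shrunken pentagons plus one new hexagon per original dodecahedron edge give the face count, and Euler's polyhedron formula is satisfied:

```latex
F = 12 + 30 = 42, \qquad V - E + F = 80 - 120 + 42 = 2.
```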
Structure
These 12 order-5 vertices can be truncated such that all edges are equal length. The original 30 rhombic faces become non-regular hexagons, and the truncated vertices become regular pentagons.
The hexagon faces can be equilateral, but not regular, with twofold symmetry. The angles at the two vertices with vertex configuration 6.6.6 are $\arccos(-1/\sqrt{5}) \approx 116.565°$, and at the remaining four vertices, with vertex configuration 5.6.6, they are $\approx 121.717°$ each.
It is the Goldberg polyhedron GP(2,0), containing pentagonal and hexagonal faces.
It also represents the exterior envelope of a cell-centered orthogonal projection of the 120-cell, one of six convex regular 4-polytopes.
Chemistry
This is the shape of the fullerene C80; sometimes this shape is denoted C80(Ih) to describe its icosahedral symmetry and distinguish it from other less-symmetric 80-vertex fullerenes. It is one of only four fullerenes found to have a skeleton that can be isometrically embedded into an L1 space.
Related polyhedra
This polyhedron looks very similar to the uniform truncated icosahedron which has 12 pentagons, but only 20 hexagons.
The chamfered dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip chamfered dodecahedron makes a chamfered truncated icosahedron, and Goldberg (2,2).
Chamfered truncated icosahedron
In geometry, the chamfered truncated icosahedron is a convex polyhedron with 240 vertices, 360 edges, and 122 faces, 110 hexagons and 12 pentagons.
It is constructed by a chamfer operation to the truncated icosahedron, adding new hexagons in place of original edges. It can also be constructed as a zip (= dk = dual of kis of) operation from the chamfered dodecahedron. In other words, raising pentagonal and hexagonal pyramids on a chamfered dodecahedron (kis operation) will yield the (2,2) geodesic polyhedron. Taking the dual of that yields the (2,2) Goldberg polyhedron, which is the chamfered truncated icosahedron, and is also Fullerene C240.
Dual
Its dual, the hexapentakis chamfered dodecahedron, has 240 triangle faces (grouped as 60 and 60 around the twelve 5-fold symmetry vertices and 120 around the twenty 6-fold symmetry vertices), 360 edges, and 122 vertices.
Hexapentakis chamfered dodecahedron
References
Bibliography
External links
Vertex- and edge-truncation of the Platonic and Archimedean solids leading to vertex-transitive polyhedra Livio Zefiro
VRML polyhedral generator (Conway polyhedron notation)
Goldberg polyhedra
Polyhedra
Mathematical notation | Chamfered dodecahedron | Mathematics | 735 |
70,615,797 | https://en.wikipedia.org/wiki/SCTbio | SCTbio is a global contract development and manufacturing organization (CDMO) providing cGMP services for Advanced Therapy Medicinal Products (ATMPs). It operates in Europe and North America. The company has strong expertise in the development of autologous cell-based products, cell banking and all needle-to-needle GMP operations, including a validated network of apheresis collection sites, product manufacturing, QC, GMP storage, QA/QP release, and worldwide supply of drug products at clinical and commercial scale.
Founding
SCTbio was founded in 2021 and is a part of the larger PPF Biotech network. SCTbio was initially part of the SOTIO group, providing drug development and manufacturing capabilities. Its CEO, Luděk Sojka, has been part of the PPF Biotech network and SOTIO since 2011.
On July 1, 2021, SOTIO implemented an official split into two sister companies, SCTbio and SOTIO Biotech.
Operations
SCTbio conducts global operations in Europe and the USA. Its cGMP cell manufacturing facility is based in Prague, Czech Republic. The GMP facility features over 2,000 square meters of total space, including 420 square meters (4,520 sq ft) of clean room area. SCTbio offers the ability to manufacture genetically modified products that require the separation of viral and non-viral components; this physical segregation minimizes any risk of cross-contamination.
Services
CGMP Manufacturing and Quality Control: SCTbio provides full production services covering autologous or allogeneic cell and gene therapy products and, in collaboration with its sister facility in Cambridge, has recently expanded into viral vectors. SCTbio runs manufacturing processes from various starting materials, including apheresis products, whole blood, and tumor tissues. The company also conducts quality control testing based on a variety of cellular and molecular methods and provides rapid sterility testing.
Analytical Development: SCTbio conducts development, optimization and implementation of analytical methods, including expertise in cellular, flow cytometry, molecular and microbiology-based methods, provided with the development of standard operating procedures, as well as contributing to methods qualification and validation.
Process Development: SCTbio contributes to the design and development of customized manufacturing procedures in line with cGMP standards, creating new working instructions and standard operating procedures and developing and executing technology transfer plans.
Logistics Services & Apheresis Collection: SCTbio has contributed to a number of clinical trials and developed their logistical services while operating under SOTIO. They have a vast network of logistical services that include shipping and validation of apheresis products and provide technical expertise on the harvesting of peripheral blood mononuclear cells.
Procurement & Warehouse Management: SCTbio participates in the procurement of raw materials for product development and adheres to GxP practices. The company has its own storage facility in Prague providing controlled temperatures ranging from −190 °C (liquid nitrogen) to room temperature.
Quality and Regulatory Support: SCTbio provides support and services for quality system checks in the European Union, United Kingdom, and United States. It oversees the final drug products in each of these markets and maintains flexible quality systems aligned with the territories in which they are established, using electronic Quality Systems and Quality Document management systems with custom design and monitoring capabilities.
References
Biotechnology
Life sciences industry
Biotechnology companies of the Czech Republic
PPF Group
Biotechnology companies established in 2021 | SCTbio | Biology | 701 |
44,417,637 | https://en.wikipedia.org/wiki/Conditioned%20play%20audiometry | Conditioned play audiometry (CPA) is a type of audiometry performed in children of developmental ages 2 to 5 years. It is the test that directly follows visual reinforcement audiometry, once the child becomes able to focus on a task. It is one of many types of behavioral hearing test.
Conditioned play audiometry uses toys to direct the child's attention on the listening task and turns it into a game. Instead of raising one's hand in response to the sound, as an adult would, the child might drop a toy into a bucket every time he or she hears a sound. This keeps the child interested in the listening task for longer. Common games include dropping balls in buckets, placing rings on a stick, feeding coins in a play pig, among many others.
The first part of CPA involves conditioning the child. The audiologist presents a loud sound that the child can comfortably hear, while encouraging the child to "drop the ball in the bucket every time you hear the sound," or whichever game is being used. After a few trials to get the child comfortable with the task, the audiologist then drops to low levels in order to find the softest sound the child can hear. It is important to proceed quickly so that the child does not lose attention to the task.
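The article does not specify the exact bracketing rule, but a common clinical convention is the Hughson–Westlake procedure: drop 10 dB after each response, rise 5 dB after each miss, and take the lowest level heard on repeated ascending runs. A minimal Python sketch under that assumption, where responds() stands in for the child's play response:

# Illustrative down-10/up-5 staircase for threshold seeking. The rule
# and stopping criterion are common clinical conventions, assumed here
# for illustration rather than prescribed by this article.
def find_threshold(responds, start_db=40, floor_db=-10, ceiling_db=100):
    level = start_db
    heard_on_ascent = {}        # level -> times heard on ascending runs
    ascending = False
    while True:
        heard = responds(level)
        if heard and ascending:
            heard_on_ascent[level] = heard_on_ascent.get(level, 0) + 1
            if heard_on_ascent[level] >= 2:   # heard twice while ascending
                return level
        if heard:
            level -= 10                        # down 10 after a response
            ascending = False
            if level < floor_db:
                return floor_db
        else:
            level += 5                         # up 5 after a miss
            ascending = True
            if level > ceiling_db:
                return None                    # no response at the ceiling

# Simulated child who hears everything at or above 25 dB HL:
print(find_threshold(lambda db: db >= 25))     # 25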
There are precautions to take to ensure good reliability when performing conditioned play audiometry. It is important that the child react to the sounds themselves, not to the clinician's hand movements. To address this, false taps on the tablet are essential to ensure the child is abiding by the listening task rather than responding to visual cues. Should the child react to non-sound-producing (false) taps, re-conditioning may be warranted.
Just like typical audiometry, CPA is performed at multiple frequencies, from 250 to 8000 Hz, to get a full range of the child's hearing. This can be performed using typical headphones and with a bone oscillator, and all thresholds are plotted on an audiogram. Once the child has reached approximately five years old, conventional audiometry using a button or hand-raising can typically be performed.
References
Acoustics
Hearing
Ear procedures
Audiology | Conditioned play audiometry | Physics | 456 |
25,523,408 | https://en.wikipedia.org/wiki/Morelos%20railway%20accident | The Morelos railway accident occurred on 23 June 1881 near Cuautla, Morelos in Mexico when an entire train plunged into the San Antonio river, killing over 200 people.
On 18 June 1881, the narrow gauge Morelos Railroad from Mexico City to Cuautla first opened to the public. To honor the occasion, the President of Mexico and other high government officials visited Cuautla, accompanied by about 300 soldiers. Approximately 100 of the soldiers returned to Mexico City on 20 June, with the remainder set to leave on 23 June.
The 23 June train consisted of:
Two locomotives (one forward, one rear)
A passenger car for the Army officers
Five wooden boxcars for the soldiers and their wives
Two wooden boxcars carrying freight, including 80-100 barrels of brandy (reports differ on the number)
There had been heavy rains in the area, and in the dark, the engineer was unable to see that the bridge was now unsupported. When the train started over the bridge it immediately dropped into the ravine. On the way down, burning coals from the rear locomotive set the barrels of alcohol aflame. Between the fall and the fire, few survived.
An investigation was begun, and on 30 June, it was declared that "the actual and sole cause of the disaster was the very bad construction of the bridge." However, a report published on 14 July 1881 by The Toronto Mail set the blame squarely on the battalion's commanding officer, stating that he had forced the engineer at gunpoint to cross the bridge.
See also
Lists of rail accidents
References
Notes
Derailments in Mexico
Railway accidents in 1881
Morelos
1881 in Mexico
History of Morelos | Morelos railway accident | Technology | 340 |
48,595,119 | https://en.wikipedia.org/wiki/Insecticide%20Resistance%20Action%20Committee | The Insecticide Resistance Action Committee (IRAC) was formed in 1984 and works as a specialist technical group of the industry association CropLife to provide a coordinated industry response to prevent or delay the development of insecticide resistance in insect, mite and nematode pests. IRAC strives to facilitate communication and education on insecticide and trait resistance, and to promote the development and facilitate the implementation of insecticide resistance management strategies.
IRAC is recognised by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) of the United Nations as an advisory body on matters pertaining to insecticide resistance.
Pesticideresistance.org is a database financed by IRAC, US Department of Agriculture, and others.
Sponsors
IRAC's sponsors are: ADAMA, BASF, Bayer CropScience, Corteva, FMC, Mitsui Chemicals, Nihon Nohyaku, Sumitomo Chemical, Syngenta and UPL.
Mode of action classification
IRAC publishes an insecticide mode of action (MoA) classification that lists most common insecticides and acaricides and recommends that "successive generations of a pest should not be treated with compounds from the same MoA Group". IRAC assigns a mode of action to an insecticide based on sufficient scientific data, then updates the MoA classification accordingly. Several insecticides and classes of insecticide may act through the same mode of action.
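In software terms the recommendation reduces to a lookup table from product to MoA group plus a check that consecutive treatments differ in group. A toy Python sketch follows; the three example entries reflect the published IRAC group assignments, but any real use should consult the current IRAC list:

# Toy MoA lookup: map each insecticide to its IRAC MoA group and check
# whether a spray rotation avoids back-to-back use of one group.
MOA_GROUP = {
    "imidacloprid": "4A",   # neonicotinoids
    "spinosad": "5",        # spinosyns
    "abamectin": "6",       # avermectins
}

def rotation_ok(treatments):
    """True if no two successive treatments share an MoA group."""
    groups = [MOA_GROUP[t] for t in treatments]
    return all(a != b for a, b in zip(groups, groups[1:]))

print(rotation_ok(["imidacloprid", "spinosad", "abamectin"]))  # True
print(rotation_ok(["imidacloprid", "imidacloprid"]))           # False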
Classes of Insecticide
If an insecticide is successful, follow-on insecticides, based on the chemical structure of the first in class (prototype) insecticide, may be developed either by the original company or by competitors. Sought after are insecticides which have improved properties or which kill different orders or species of insect. The resulting classes of insecticides are named by IRAC after common usage has been established, although alternative names may be found in the scientific literature.
Table of modes of action and classes of insecticide
The table gives the number of insecticides listed in each class and an example of each class. The number of insecticides in the IRAC class listing is given in column Nr (A). The number in the Compendium of Pesticide Common Names (insecticide + acaricide) is given in column Nr (B), although the name given there to a class often differs, for historical reasons, from the IRAC class name.
See also
HRAC classification
List of insecticides
Further reading
References
External links
IRAC website home page
Biotechnology advocacy
Insecticides
Organizations established in 1984
Pesticide organizations | Insecticide Resistance Action Committee | Engineering,Biology | 530 |
41,591,786 | https://en.wikipedia.org/wiki/Cortinarius%20bovinatus | Cortinarius bovinatus is an agaric fungus in the family Cortinariaceae. Described as new to science in 2013, it is found in Europe.
See also
List of Cortinarius species
References
External links
bovinatus
Fungi of Europe
Fungi described in 2013
Fungus species | Cortinarius bovinatus | Biology | 61 |
25,786,160 | https://en.wikipedia.org/wiki/Constraint%20graph%20%28layout%29 | In some tasks of integrated circuit layout design, a necessity arises to optimize the placement of non-overlapping objects in the plane. In general this problem is extremely hard, and to tackle it with computer algorithms, certain assumptions are made about admissible placements and about operations allowed in placement modifications. Constraint graphs capture the restrictions on relative movements of the objects placed in the plane. These graphs, while sharing a common idea, have different definitions, depending on the particular design task or its model.
Floorplanning
In floorplanning, the model of a floorplan of an integrated circuit is a set of isothetic rectangles called "blocks" within a larger rectangle called "boundary" (e.g., "chip boundary", "cell boundary").
A possible definition of constraint graphs is as follows. The constraint graph for a given floorplan is a directed graph with vertex set being the set of floorplan blocks and there is an edge from block b1 to b2 (called horizontal constraint), if b1 is completely to the left of b2 and there is an edge from block b1 to b2 (called vertical constraint), if b1 is completely below b2.
If only horizontal constraints are considered, one obtains the horizontal constraint graph. If only vertical constraints are considered, one obtains the vertical constraint graph.
Under this definition, the constraint graph can have as many as O(n²) edges, where n is the number of blocks. Therefore, other, less dense constraint graphs are considered. The horizontal visibility graph is a horizontal constraint graph in which the horizontal constraint between two blocks exists only if there is a horizontal line segment which connects the two blocks and does not intersect any other blocks. In other words, one block is a potential "immediate obstacle" for moving another one horizontally. The vertical visibility graph is defined in a similar way.
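A direct Python rendering of the dense constraint-graph definition, with hypothetical block coordinates (not tied to any EDA tool), makes the O(n²) growth concrete, since every ordered pair of blocks is tested:

# Build the dense horizontal and vertical constraint graphs:
# edge b1 -> b2 if b1 is completely to the left of (resp. below) b2.
# Blocks are (name, x1, y1, x2, y2) axis-aligned rectangles.
def constraint_graphs(blocks):
    horiz, vert = [], []
    for n1, x1a, y1a, x1b, y1b in blocks:
        for n2, x2a, y2a, x2b, y2b in blocks:
            if n1 == n2:
                continue
            if x1b <= x2a:            # b1 entirely to the left of b2
                horiz.append((n1, n2))
            if y1b <= y2a:            # b1 entirely below b2
                vert.append((n1, n2))
    return horiz, vert

blocks = [("A", 0, 0, 2, 2), ("B", 3, 0, 5, 1), ("C", 0, 3, 4, 5)]
h, v = constraint_graphs(blocks)
print(h)  # [('A', 'B')]
print(v)  # [('A', 'C'), ('B', 'C')]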
Channel routing
Channel routing is the problem of routing a set of nets N which have fixed terminals on two opposite sides of a rectangle ("channel"). In this context, the horizontal constraint graph is the undirected graph with vertex set N in which two nets are connected by an edge if and only if their horizontal segments of the routing must overlap. In the given example, only nets 5 and 6 do not have a horizontal constraint between them. The vertical constraint graph is the directed graph with vertex set N in which two nets are connected by an edge if and only if there are two pins from different nets on the same vertical line, and the edge is directed from the net whose pin is on the upper edge of the channel. This direction means that this net must be routed on a horizontal track above the horizontal tracks of the second net. In the given example, only nets 1 and 3 have a vertical constraint.
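The vertical constraint graph can be read straight off the terminal rows. The sketch below uses hypothetical top and bottom pin rows (one net id per column, 0 meaning no pin), not the example nets referenced in the paragraph above:

# Vertical constraint graph for channel routing: if column c has net t
# on the top edge and net b on the bottom edge (t != b), then t must be
# routed above b, giving a directed edge t -> b.
def vertical_constraints(top, bottom):
    edges = set()
    for t, b in zip(top, bottom):
        if t and b and t != b:
            edges.add((t, b))
    return edges

top    = [1, 3, 2, 0, 3]
bottom = [3, 1, 0, 2, 2]
print(sorted(vertical_constraints(top, bottom)))
# [(1, 3), (3, 1), (3, 2)] -- note that 1 -> 3 and 3 -> 1 form a
# cycle, the classic situation that forces a dogleg in the routing.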
References
Application-specific graphs
Electronic design | Constraint graph (layout) | Engineering | 565 |
72,949,511 | https://en.wikipedia.org/wiki/C.S.%20Unnikrishnan | C. S. Unnikrishnan (born 25 July 1962) is an Indian physicist and professor known for his contributions in multiple areas of experimental and theoretical physics. He has been a professor at the Tata Institute of Fundamental Research Mumbai and is currently a professor in the School of Quantum Technology at the Defence Institute of Advanced Technology in Pune. He has made significant contributions to foundational issues in gravity and quantum physics and has published over 250 research papers and articles. Unnikrishnan is also a key member of the LIGO-India project and a member of the global LIGO Scientific Collaboration.
Education
Unnikrishnan received his M.Sc. degree from Indian Institute of Technology, Madras and his Ph.D. from the Tata Institute of Fundamental Research, University of Mumbai. He has also been a visiting researcher at the Kastler-Brossel Laboratory of the Ecole Normale Supérieure in Paris and at the University of Paris 13.
Research contributions
Unnikrishnan is a renowned researcher in the field of foundational issues in gravity and quantum physics, including quantum optics. His expertise lies in experimental physics, and he has been instrumental in setting up the laser-cooling laboratory at TIFR, Mumbai. He is well-versed in the use of torsion balances, interferometers, laser cooled atoms, and Bose-Einstein Condensates for his experiments.
Unnikrishnan's major theoretical contributions include the Theory of Cosmic Relativity and Universal Action Mechanics. These theories have provided new insights into our understanding of the interplay between gravity and quantum mechanics, and have opened up new avenues for further research. Cosmic Relativity replaces current theories of dynamics and relativity and argues that all relativistic phenomena and laws of dynamics are controlled by the gravitational potentials of matter and energy in the universe. It offers evidence bearing on, and solutions to, several major issues in fundamental physics.
The quantum Hall effect, in which the movement of electrons is restricted to a 2-D plane, is characterized by quantized plateaus in the Hall resistance. The integer quantum Hall effect has a simple theory, but there is still no proper understanding of the more spectacular fractional quantum Hall effect. The Cosmic Relativity theory offers an account of both the integer and fractional effects by modifying the quantum degeneracy due to a cosmic gravitomagnetic interaction.
Professional accomplishments
Awards
Breakthrough Prize in Physics (2016)
Gruber Prize in Cosmology (2016)
Unnikrishnan is a key member and proposer-scientist of the LIGO-India project and has been a member of the LIGO Scientific Collaboration (LSC). He has made a significant impact in the field of gravitational waves as he shared the Breakthrough Prize in Physics and the Gruber Prize in Cosmology with the LSC in 2016 for their groundbreaking discovery.
He has held academic positions at the Tata Institute of Fundamental Research (TIFR) Mumbai, India, School of Quantum Technology at the Defence Institute of Advanced Technology (DIAT) Pune, India and Indian Institute of Astrophysics (IIA), Bangalore, India.
Unnikrishnan has published over 250 research papers and articles, and is also the author of two major works: his first monograph "Gravity's Time" and a major treatise "New Relativity in the Gravitational Universe". The treatise, which presents a new and innovative perspective on the foundational basis of relativity, has had a major impact in the field. The latter book calls for a change in the foundational basis of relativity and provides a solution to outstanding questions and puzzles about dynamics and relativity.
Books authored
Gravity's Time
New Relativity in the Gravitational Universe
References
Indian scientists
Indian physicists
Tata Institute of Fundamental Research alumni
Academic staff of Tata Institute of Fundamental Research
Relativity theorists
Defence Research and Development Organisation
Indian quantum physicists
21st-century Indian scientists
1962 births
Living people | C.S. Unnikrishnan | Physics | 778 |
178,711 | https://en.wikipedia.org/wiki/Cable%20length | A cable length or length of cable is a nautical unit of measure equal to one tenth of a nautical mile or approximately 100 fathoms. Owing to anachronisms and varying techniques of measurement, a cable length can be anywhere from roughly 185 to 220 metres, depending on the standard used.
Etymology and origin
The modern word cable is directly descended from the Middle English cable, cabel or kabel, and also occurs in Middle Dutch and Middle German. Ultimately the word comes from a Romance root, probably a term for a cattle halter. A cable in this usage is a thick rope or, by transference, a chain cable. The OED gives quotations for the word from the Middle English period onwards. A cable's length (often "cable length" or just "cable") is simply the standard length in which cables came, which by 1555 had settled to around 100 fathoms (about 185 m).
Traditionally rope is made on long ropewalks, the length of which determines the maximum length of rope it is possible to make. As rope is "closed" (the final stage in manufacture) the length reduces; the ropewalk at Chatham Dockyard was therefore built long enough to produce standard coils.
Definition
The definition varies:
International: 185.2 m, equivalent to one tenth of a nautical mile
UK traditional: 608 ft (185.32 m), though the Admiralty used one tenth of a sea mile, 1 minute of latitude locally.
US customary (US Navy): 720 ft (219.46 m)
In 2008 the Royal Navy in a handbook defined it as one tenth of a nautical mile.
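Restating the competing standards numerically, the sketch below tabulates each definition in metres and fathoms; the 608 ft and 720 ft figures repeat the traditional UK and US Navy values given above and should be treated as commonly quoted rather than authoritative:

# Cable length under the competing standards, expressed in metres.
FOOT = 0.3048  # metres per foot

CABLE_M = {
    "international": 185.2,        # one tenth of a nautical mile
    "uk_traditional": 608 * FOOT,  # 608 ft ~= 185.32 m
    "us_navy": 720 * FOOT,         # 720 ft ~= 219.46 m
}

for name, metres in CABLE_M.items():
    print(f"{name}: {metres:.2f} m = {metres / 1.8288:.1f} fathoms")
# A fathom is 6 ft = 1.8288 m, so the international cable is about
# 101.3 fathoms -- the "approximately 100 fathoms" of the definition.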
References
Citations
Also "fathom", from the same work (pp. 88–89, retrieved 12 January 2017).
Various subpages within the ropery section.
Nautical terminology
Units of length
Customary units of measurement in the United States | Cable length | Mathematics | 331 |
31,295,586 | https://en.wikipedia.org/wiki/Cantharellus%20friesii | Cantharellus friesii, the orange or velvet chanterelle, is a fungus native to Asia and Europe. The cap, 2–4 cm wide, varies in color from deep yellow to reddish orange. It occurs in beech, fir and spruce forests. C. friesii is considered a good edible mushroom, but because of its rarity it deserves to be mindfully managed, with limited use of fungicides if discovered on residential or commercial property. Harvesting the fruit bodies allows further propagation of the species, as its spores are dispersed along the collector's travels. The specific epithet friesii honors the mycologist Elias Magnus Fries.
References
External links
friesii
Fungi described in 1869
Fungi of Asia
Edible fungi
Fungi of Europe
Fungus species | Cantharellus friesii | Biology | 154 |
24,150,292 | https://en.wikipedia.org/wiki/C22H30O3 | {{DISPLAYTITLE:C22H30O3}}
The molecular formula C22H30O3 (molar mass: 342.47 g/mol; a short computation checking this figure follows the list) may refer to:
Anacardic acid, a chemical compound found in the shell of the cashew nut
Endrisone, a steroid
Megestrol, a progesterone derivative with antineoplastic properties used in the treatment of advanced carcinoma of the breast and endometrium
Sargachromanol A
SC-5233, an antimineralocorticoid
Trimegestone, a steroid | C22H30O3 | Chemistry | 127 |
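As a cross-check of the molar mass quoted above for C22H30O3, a few lines of Python summing standard atomic weights (the 2005-era IUPAC values are assumed here) reproduce the figure:

# Verify the quoted molar mass of C22H30O3 from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "O": 15.9994}
FORMULA = {"C": 22, "H": 30, "O": 3}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(round(molar_mass, 2))  # 342.47, matching the quoted value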
54,514,552 | https://en.wikipedia.org/wiki/NGC%201573 | NGC 1573 is an elliptical galaxy in the constellation of Camelopardalis. It was discovered on 1 August 1883 by Wilhelm Tempel. It was described as "very faint, small" by John Louis Emil Dreyer, the compiler of the New General Catalogue. It is located about 190 million light-years (58 megaparsecs) away.
The galaxy PGC 16052 is not an NGC object, nor is it physically associated with NGC 1573, but it is often called NGC 1573A. It is an intermediate spiral galaxy with an apparent magnitude of about 14.0. In 2010, a supernova was discovered in PGC 16052 and designated SN 2010X.
References
Notes
Elliptical galaxies
1573
03077
015570
Camelopardalis | NGC 1573 | Astronomy | 160 |
25,681,069 | https://en.wikipedia.org/wiki/Project%20Milo | Project Milo (also referred to as Milo and Kate) was a project in development by Lionhead Studios for the Xbox 360 video game console. Formerly a secretive project under the early codename "Dimitri", Project Milo was unveiled at the 2009 Electronic Entertainment Expo (E3) in a demonstration for Kinect, as a "controller-free" entertainment initiative for the Xbox 360 based on depth-sensing and pattern recognition technologies. The project was a tech demo to showcase the capabilities of Kinect and was not released, despite conflicting reports that the project was an actual game.
Development
The project began as work on an "emotional AI (artificial intelligence)" after Lionhead had finished work on Black & White in 2001. The project was code named Dimitri, after the godson of Lionhead creative director Peter Molyneux. Details revealed about the project led some to speculate that "Dimitri" had become Fable II, but a 2006 interview with Molyneux confirmed that the projects were separate. For several years the development of Dimitri remained "experimental", resulting in scarce news updates during this phase of development. In later interviews, Molyneux began to refer to the project as "Project X".
During their press briefing at the Electronic Entertainment Expo in June 2009, Lionhead's parent company Microsoft unveiled Kinect, then known as Project Natal, during which it featured a presentation clip from Molyneux demonstrating a woman naturally interacting with a virtual character, referred to as "Milo." In an interview with Eurogamer after the press conference, Molyneux confirmed that the demonstration was of the previously-known "Dimitri," and would be a game developed around Kinect, titled Milo and Kate. In the game, players would interact with a 10-year-old child (Milo or Millie, selected at the start) and a dog named Kate, playing through a story. According to Molyneux, work on the Kinect-specific elements started in December 2008. The game would also feature an in-game store, for purchasing items to enhance gameplay.
Milo had an AI structure that responded to human interactions, such as spoken word, gestures, or predefined actions in dynamic situations. The game relied on a procedural generation system which was constantly updating a built-in "dictionary" that was capable of matching key words in conversations with inherent voice-acting clips to simulate lifelike conversations. Molyneux claimed that the technology for the game was developed while working on Fable and Black & White.
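At its simplest, the "dictionary" mechanic described above is keyword spotting over a table of voice clips. The Python sketch below is only a toy illustration with invented entries and file names; Lionhead's actual system was procedural and far richer:

# Toy keyword-to-clip matcher in the spirit of the described dictionary:
# scan the player's utterance for known keywords and pick a matching
# voice clip. All entries here are invented for illustration.
CLIPS = {
    "school": "milo_school_day.wav",
    "snail": "milo_snail_react.wav",
    "stone": "milo_skip_stones.wav",
}

def pick_clip(utterance, fallback="milo_generic_reply.wav"):
    words = utterance.lower().split()
    for keyword, clip in CLIPS.items():
        if keyword in words:
            return clip
    return fallback

print(pick_clip("How was school today?"))   # milo_school_day.wav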
However, the game was not present at Microsoft's E3 press briefing the following year. Further confusion arose later in the month with a statement by Microsoft's Aaron Greenberg stating that the game was not a product they were planning to bring to market, but was more of an internal tech demo. This was later refuted by Molyneux who stated that he would reveal a more advanced version of Milo during his TEDGlobal talk in Oxford in July 2010. Molyneux went on to hint at difficulties in getting Microsoft to see Milo as a full game. Molyneux said "The biggest challenge for us is convincing people (Microsoft) what we're doing is actually going to work, is going to reach a new audience, is going to be an idea that people love." At the TED conference in Oxford in July 2010, more footage was shown. Players could make crucial decisions in Milo's life, or smaller ones such as squashing a snail or not. During the conference it was shown that Milo could be taught how to skip stones. The demonstration also indicated that users were only able to talk to Milo when a red microphone image appeared on the screen.
In September 2010, Eurogamer ran a story, citing an unnamed source, stating that work on Milo had been halted, and that the Milo tech would be used in a "Fable themed Kinect game". This story was seemingly backed up by Microsoft's Alex Kipman in a November 2010 interview with Gamesindustry.biz, declaring that Project Milo "was never a product" and "was never announced as a game". However, an interview with the drama director of the game was released in March. It showed part of the creation process that he had to go through and some brief sections of gameplay. Completion of the project was also hinted in the interview.
At the 2011 Game Developers Conference, Lionhead lead programmer Ben Sugden showcased a new graphics technology used in Project Milo for upcoming Xbox 360 titles. At E3 2011, Fable: The Journey was announced, which includes elements from Milo, including voice and emotion recognition. In a May 2012 interview with Eurogamer, Lionhead creative director Gary Carr confirmed that a number of Kinect features from Project Milo had been implemented in Fable: The Journey.
References
External links
Molyneux video interview
Cancelled Xbox 360 games
Kinect games
Lionhead Studios games
Microsoft games
Game artificial intelligence
Technology demonstrations | Project Milo | Mathematics | 1,000 |
70,039,240 | https://en.wikipedia.org/wiki/Rogue%20black%20hole | A rogue black hole is a black hole that is not bound by any object's gravity, allowing it to float freely throughout the universe. Since black holes emit no light, the only ways to detect them are gravitational lensing or the X-ray bursts that occur when they destroy an object.
Intergalactic rogue black holes
These are objects without a host galactic group, created by collisions between two galaxies or when the merging of two black holes is disrupted. It has been estimated that there could be 12 rogue supermassive black holes on the edge of the Milky Way galaxy.
Interstellar rogue black holes
Examples
In January 2022, a team of astronomers reported OGLE-2011-BLG-0462/MOA-2011-BLG-191, the first unambiguous detection and mass measurement of an isolated stellar black hole, made using the Hubble Space Telescope together with the Microlensing Observations in Astrophysics (MOA) and Optical Gravitational Lensing Experiment (OGLE) surveys. This black hole is located 5,000 light-years away, has a mass 7.1 times that of the Sun, and moves at about 45 km/s. While there have been other candidates, they have been detected more indirectly.
See also
Rogue star
Rogue planet
Rogue extragalactic planets
Rogue comet
Tidally detached exomoon
Stellar black hole
Primordial black hole
References
Black holes
Extragalactic astronomy | Rogue black hole | Physics,Astronomy | 288 |
15,916,469 | https://en.wikipedia.org/wiki/L.%20James%20Sullivan | Leroy James Sullivan (June 27, 1933 – September 22, 2024) was an American firearms inventor. Going by Jim Sullivan, he designed several "scaled-down" versions of larger firearms.
Early life
Sullivan was born on June 27, 1933, in Nome, Alaska, and lived there until he was seven years old. Concerned that World War II would spread to Alaska, Sullivan's family moved to Seattle, Washington.
Education
Sullivan attended the public schools of Seattle, and later of Kennewick, Washington. He went on to study engineering for two years at the University of Washington in Seattle. Aware that he was about to be drafted to fight in the Korean War, Sullivan wanted to become an Army diver, so he left the University of Washington to attend the Sparling School of Deep Sea Diving in Long Beach, California.
Military service
Sullivan served in the US Army, from 1953 to 1955, although he was trained by the Army to be a telephone installer and repairman. Due to his civilian training he went overseas to Korea in 1954, where he was assigned by the Army to be a diver to repair oil pipelines and other facilities damaged during the US invasion of Inchon Harbor.
Small arms designer
Sullivan is largely responsible for the Ultimax 100 light machine gun and the SureFire MGX. He also contributed to the Ruger M77 rifle, M16, Stoner 63, and Ruger Mini-14 rifles (scaled from the AR-10, Stoner 62, and M14 rifle respectively).
Armwest LLC M4
In 2014, Sullivan provided a video interview regarding his contributions to the M16/M4 family of rifles while working for Armalite. A noted critic of the M4, he illustrates the deficiencies found in the rifle in its current configuration. In the video, he demonstrates his "Arm West LLC modified M4", with enhancements he believes necessary to rectify the issues with the weapon. Proprietary issues aside, the weapon is said to borrow features from his prior development, the Ultimax. Sullivan has stated (without exact details as to how) that the weapon can fire from the closed bolt in semi-automatic and switch to open bolt when firing in fully automatic, improving accuracy. The weight of the cyclic components of the gun has been doubled (while retaining the weapon's weight at less than eight pounds). Compared to the standard M4, which in automatic fires 750-950 rounds a minute, the rate of fire of the Arm West M4 is heavily reduced, both to save ammunition and to reduce barrel wear. The reduced rate also renders the weapon more controllable and accurate in automatic firing.
Death
Sullivan died on September 22, 2024, at the age of 91.
References
1933 births
2024 deaths
United States Army personnel of the Korean War
Inventors from Alaska
Weapon designers
Weapon design
Firearm designers
People from Nome, Alaska
People from Seattle
People associated with firearms
Place of death unknown | L. James Sullivan | Engineering | 599 |
5,625,092 | https://en.wikipedia.org/wiki/Arsenical%20bronze | Arsenical bronze is an alloy in which arsenic, as opposed to or in addition to tin or other constituent metals, is combined with copper to make bronze. The use of arsenic with copper, either as the secondary constituent or with another component such as tin, results in a stronger final product and better casting behavior.
Copper ore is often naturally contaminated with arsenic; hence, the term "arsenical bronze" when used in archaeology is typically only applied to alloys with an arsenic content higher than 1% by weight, in order to distinguish it from potentially accidental additions of arsenic.
Origins in pre-history
Although arsenical bronze occurs in the archaeological record across the globe, the earliest artifacts so far known, dating from the 5th millennium BC, have been found on the Iranian plateau. Arsenic is present in a number of copper-containing ores (Lechtman & Klein, 1999), and therefore some contamination of the copper with arsenic would be unavoidable. However, it is still not entirely clear to what extent arsenic was deliberately added to copper and to what extent its use arose simply from its presence in copper ores that were then treated by smelting to produce the metal.
Reconstructing a possible sequence of events in prehistory involves considering the structure of copper ore deposits, which are mostly sulfides. The surface minerals would contain some native copper and oxidized minerals, but much of the copper and other minerals would have been washed further into the ore body, forming a secondary enrichment zone. This includes many minerals such as tennantite, with their arsenic, copper and iron. Thus, the surface deposits would have been used first; with some work, deeper sulfidic ores would have been uncovered and worked, and it would have been discovered that the material from this level had better properties.
Using these various ores, there are four possible methods that may have been used to produce arsenical bronze alloys. These are:
The direct addition of arsenic-bearing metals or ores such as realgar to molten copper. This method, although possible, lacks evidence.
The reduction of antimony-bearing copper arsenates or fahlore to produce an alloy high in arsenic and antimony. This is entirely practicable.
The reduction of roasted copper sulfarsenides such as tennantite and enargite. This method would result in the production of toxic fumes of arsenous oxide and the loss of much of the arsenic present in the ores.
The co-smelting of oxidic and sulfidic ores such as malachite and arsenopyrite together. This method has been demonstrated to work well, with little in the way of dangerous fumes given off during it, because of the reactions between the minerals.
Greater sophistication of metal workers is suggested by Thornton et al. They suggest that iron arsenide was deliberately produced as part of the copper-smelting process, to be traded and used to make arsenical bronze elsewhere by addition to molten copper.
Artifacts made of arsenical bronze cover the complete spectrum of metal objects, from axes to ornaments. The method of manufacture involved heating the metal in crucibles, and casting it into moulds made of stone or clay. After solidifying, it would be polished or, in the case of axes and other tools, work-hardened by beating the working edge with a hammer, thinning out the metal and increasing its strength. Finished objects could also be engraved or decorated as appropriate.
Advantages of arsenical bronze
While arsenic was most likely originally mixed with copper as a result of the ores already containing it, its use probably continued for a number of reasons. First, it acts as a deoxidizer, reacting with oxygen in the hot metal to form arsenous oxides which vaporize from the liquid metal. If a great deal of oxygen is dissolved in liquid copper, when the metal cools the copper oxide separates out at grain boundaries, and greatly reduces the ductility of the resulting object. However, its use can lead to a greater risk of porous castings, owing to the solution of hydrogen in the molten metal and its subsequent loss as a bubble (although any bubbles could be forge-welded and still leave the mass of the metal ready to be work-hardened).
Second, the alloy is capable of greater work-hardening than is the case with pure copper, so that it performs better when used for cutting or chopping. An increase in work-hardening capability arises with an increasing percentage of arsenic, and the bronze can be work-hardened over a wide range of temperatures without fear of embrittlement. Its improved properties over pure copper can be seen with as little as 0.5 to 2 wt% As, giving a 10-to-30% improvement in hardness and tensile strength.
Third, in the correct percentages, it can contribute a silvery sheen to the article being manufactured. There is evidence of arsenical bronze daggers from the Caucasus and other artifacts from different locations having an arsenic-rich surface layer which may well have been produced deliberately by ancient craftsmen, and Mexican bells were made of copper with sufficient arsenic to color them silver.
Arsenical bronze, sites and civilisations
Arsenical bronze was used by many societies and cultures across the globe. The Iranian plateau, followed by the adjacent Mesopotamian area (together covering modern Iran, Iraq and Syria), has the earliest arsenical bronze metallurgy in the world, as previously mentioned. It was in use from the 4th millennium BC through to the mid 2nd millennium BC, a period of nearly 2,000 years. There was a great deal of variation in the arsenic content of artifacts throughout this period, making it impossible to say exactly how much was added deliberately and how much came about by accident.
These matters were clarified considerably by 2016. The two relevant ancient sites in eastern Turkey (Malatya Province) are Norşuntepe and Değirmentepe, where arsenical bronze production was taking place before 4000 BC. Hearths or natural draft furnaces, slag, ore, and pigment had been recovered throughout these sites. This was in the context of architectural complexes typical of southern Mesopotamian architecture.
According to Boscher (2016), at Değirmentepe, arsenical copper objects were clearly manufactured around 4200 BC, yet the technological aspects of this production remain unclear. This is because the primary smelting of ore seems to have been undertaken elsewhere, perhaps already at the sites of mining.
In contrast, the related Norşuntepe site provides a better context of production, and demonstrates that some form of arsenic alloying was indeed taking place by the 4th millennium BC. Since the slag identified at Norşuntepe contains no arsenic, this means that arsenic in some form was added separately.
Societies using arsenical bronze include the Akkadians, those of Ur, and the Amorites, all based around the Tigris and Euphrates rivers and centres of the trade networks which spread arsenical bronze across the Middle East during the Bronze Age.
The Chalcolithic-period Nahal Mishmar hoard in the Judean Desert west of the Dead Sea contains a number of arsenical bronze (4–12% arsenic) and perhaps arsenical copper artifacts made using the lost-wax process, the earliest known use of this complex technique. "Carbon-14 dating of the reed mat in which the objects were wrapped suggests that it dates to at least 3500 B.C. It was in this period that the use of copper became widespread throughout the Levant, attesting to considerable technological developments that parallel major social advances in the region."
In ancient Egypt, use of arsenical bronze/copper is confirmed since the second phase of Naqada culture, and then used widely until the beginning of the New Kingdom, i.e. in the Egyptian Chalcolithic, Early and Middle Bronze Age, and within the same eras also in ancient Nubia. In the Old Kingdom, era of the largest pyramids' builders, the arsenical copper was used for the production of tools at Giza. Arsenical copper was also processed in the workshop uncovered at Giza's Heit el-Ghurab, "lost city of pyramid builders" from the reign of Menkaure. Egyptian and Nubian objects made of arsenical copper were identified in the collections in Brussels, and in Leipzig. In the Middle Kingdom, use of tin bronze is increasing in ancient Egypt and Nubia. One of the largest studies of such material was the research of the Egyptian and Nubian axe blades in the British Museum, and it provided comparable results. Similar situation can be observed in Middle Bronze Age Kerma.
Sulfide deposits frequently are a mix of different metal sulfides, such as copper, zinc, silver, arsenic, mercury, iron and other metals. (Sphalerite (ZnS with more or less iron), for example, is not uncommon in copper sulfide deposits, and the metal smelted would be brass, which is both harder and more durable than copper.) The metals could theoretically be separated out, but the alloys resulting were typically much stronger than the metals individually.
The use of arsenical bronze spread along trade routes into northwestern China, to the Gansu–Qinghai region, with the Siba, Qijia and Tianshanbeilu cultures. However it is still unclear as to whether arsenical bronze artifacts were imported or made locally, although the latter is suspected as being more likely due to possible local exploitation of mineral resources. On the other hand, the artifacts show typological connections to the Eurasian steppe.
The Eneolithic period in Northern Italy, with the Remedello and Rinaldone cultures in 2800 to 2200 BC, saw the use of arsenical bronze. Indeed, it seems that arsenical bronze was the most common alloy in use in the Mediterranean basin at this time.
In South America, arsenical bronze was the predominant alloy in Ecuador and north and central Peru, because of the rich arsenic bearing ores present there. By contrast, the south and central Andes, southern Peru, Bolivia and parts of Argentina, were rich in the tin ore cassiterite and thus did not use arsenical bronze.
The Sican Culture of northwestern coastal Peru is famous for its use of arsenical bronze during the period 900 to 1350 AD. Arsenical bronze co-existed with tin bronze in the Andes, probably due to its greater ductility which meant it could be easily hammered into thin sheets which were valued in local society.
Arsenical bronze after the Bronze Age
The archaeological record in Egypt, Peru and the Caucasus suggests that arsenical bronze was produced for a time alongside tin bronze. At Tepe Yahya its use continued into the Iron Age for the manufacture of trinkets and decorative objects, thus demonstrating that there was not a simple succession of alloys over time, with superior new alloys replacing older ones. Tin bronze has few real metallurgical advantages, and early authors suggested that arsenical bronze was phased out due to its health effects. It is more likely that it was phased out in general use because alloying with tin gave castings of similar strength to arsenical bronze without requiring further work-hardening to achieve useful strength. It is also probable that more certain results could be achieved with the use of tin, because it could be added directly to the copper in specific amounts, whereas the precise amount of arsenic being added was much harder to gauge due to the manufacturing process.
Health effects of arsenical bronze use
Arsenic is an element that vaporizes (sublimes) at 615 °C, such that arsenous oxide will be lost from the melt before or during casting, and fumes from fire-setting for mining and ore processing have long been known to attack the nervous system, eyes, lungs, and skin.
Chronic arsenic poisoning leads to peripheral neuropathy, which can cause weakness in the legs and feet. It has been speculated that this lay behind the legend of lame smiths in many cultures and myths, such as the Greek god Hephaestus. As Hephaestus was an iron-age smith, not a bronze-age smith, the connection would be one from ancient folk memory.
A well-preserved mummy of a man who lived around 3,200 BC found in the Ötztal Alps, popularly known as Ötzi, showed high levels of both copper particles and arsenic in his hair. This, along with Ötzi's copper axe blade, which is 99.7% pure copper, has led scientists to speculate that he was involved in copper smelting.
Modern uses of arsenical bronze
Arsenical bronze has seen little use in the modern period. It appears that the closest equivalent goes by the name of arsenical copper, defined as copper with under 0.5% arsenic by mass, below the accepted percentage in archaeological artifacts. The presence of 0.5% arsenic in copper lowers the electrical conductivity to 34% of that of pure copper, and even as little as 0.05% decreases it by 15%.
See also
Arsenical copper
Arsenical brass
References
External links
Bronze Age
Copper alloys
History of metallurgy
Coinage metals and alloys
Arsenic | Arsenical bronze | Chemistry,Materials_science | 2,694 |
28,170,854 | https://en.wikipedia.org/wiki/Egan%20Report | The Egan Report, titled Rethinking Construction, was an influential report on the UK construction industry produced by an industry task force chaired by Sir John Egan, published in November 1998. Together with the Latham Report, Constructing the Team, produced four years earlier, it did much to drive efficiency improvements in UK construction industry practice during the early years of the 21st century.
Historical context
While the 1994 Latham Report had stimulated various industry initiatives, government action was deemed necessary to get the industry to make the necessary changes. In October 1997, the then Deputy Prime Minister John Prescott commissioned a Construction Task Force, chaired by Sir John Egan, a former chief executive of Jaguar Cars, to look at the construction industry from the clients' perspective. The Task Force was to advise on opportunities to improve the efficiency and quality of the UK construction industry's service and products, to reinforce the impetus for change, and to make the industry more responsive to the needs of its customers.
Report
Informed by experiences in other industries (notably manufacturing), the Task Force report endorsed much of the progressive thinking already under way, and sought to improve performance through eliminating waste or non-value-adding activities from the construction process. It identified five key drivers of change:
committed leadership
a focus on the customer
integrated processes and teams
a quality driven agenda, and
commitment to people.
Having put the client's needs at the very heart of the process, it advocated an integrated project process based around four key elements:
product development
project implementation
partnering the supply chain, and
production of components.
Legacy
Existing industry bodies such as the Construction Industry Board, Construction Best Practice Programme and the Design Build Foundation incorporated the Egan agenda into their activities, and were augmented by a new industry organisation, the Movement for Innovation. These national level organisations were tasked with application of the ideas of Rethinking Construction through ‘demonstration projects’, and regional ‘cluster groups’ or best practice clubs (these initiatives continue today under the auspices of Constructing Excellence).
In March 1999, the UK government's Achieving Excellence in Construction initiative was launched to improve the performance - as industry clients - of central government departments, executive agencies and non-departmental public bodies. The initiative set out a route map with performance targets under four headings: management, measurement, standardisation and integration. Targets included the use of partnering and development of long-term relationships. Against this background, other government departments began to recognise the impact partnering could make and to promote the approach (e.g.: CABE/HM Treasury 2000, National Audit Office 2001).
In July 2001, as successor to both the earlier Task Force and the CIB, the Strategic Forum for Construction was set up by ministers under the chairmanship of Sir John Egan. On 12 September 2002 it published Accelerating Change, a report on its first year of activity. This report also underlined the potential importance of information technology in achieving greater integration, and set the tone for future UK government initiatives, notably the drive from 2010 onwards under chief construction adviser Paul Morrell to implement building information modelling on all UK public sector construction projects.
The Egan Report was one of the influences that fed into the early syllabus of the Interdisciplinary Design for the Built Environment programme at the University of Cambridge.
References
Construction industry of the United Kingdom
Building
Reports of the United Kingdom government
1998 in the United Kingdom
1998 in British politics
November 1998 events in the United Kingdom | Egan Report | Engineering | 679 |
13,818,545 | https://en.wikipedia.org/wiki/Maneuvering%20speed | In aviation, the maneuvering speed of an aircraft is an airspeed limitation at which full deflection of the controls can be made without risking structural damage.
The maneuvering speed of an aircraft is shown on a cockpit placard and in the aircraft's flight manual but is not commonly shown on the aircraft's airspeed indicator.
In the context of air combat maneuvering (ACM), the maneuvering speed is also known as corner speed or cornering speed.
Implications
It has been widely misunderstood that flight below maneuvering speed will provide total protection from structural failure. In response to the destruction of American Airlines Flight 587, a CFR Final Rule was issued clarifying that "flying at or below the design maneuvering speed does not allow a pilot to make multiple large control inputs in one airplane axis or single full control inputs in more than one airplane axis at a time". Such actions "may result in structural failures at any speed, including below the maneuvering speed."
Design maneuvering speed VA
VA is the design maneuvering speed and is a calibrated airspeed. Maneuvering speed cannot be slower than Vs√n (where Vs is the stalling speed and n is the limit load factor) and need not be greater than Vc.
If VA is chosen by the manufacturer to be exactly Vs√n, the aircraft will stall in a nose-up pitching maneuver before the structure is subjected to its limiting aerodynamic load. However, if VA is selected to be greater than Vs√n, the structure will be subjected to loads which exceed the limiting load unless the pilot checks the maneuver.
The maneuvering speed or maximum operating maneuvering speed depicted on a cockpit placard is calculated for the maximum weight of the aircraft. Some Pilot's Operating Handbooks also present safe speeds for weights less than the maximum.
The formula used to calculate a safe speed for a lower weight is VA2 = VA √(W2 / W1), where VA is the maneuvering speed (at maximum weight), W2 is the actual weight, and W1 is the maximum weight.
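A small Python helper applying this square-root weight scaling; the placard numbers in the example are illustrative only, loosely typical of a light single-engine airplane rather than taken from any handbook:

# Scale maneuvering speed for weight below maximum gross weight:
#   VA(W2) = VA(W1) * sqrt(W2 / W1)
from math import sqrt

def va_at_weight(va_max_kt, w_actual, w_max):
    return va_max_kt * sqrt(w_actual / w_max)

# Example: placard VA of 105 kt at 2,550 lb gross; at 2,100 lb the
# adjusted maneuvering speed drops to about 95 kt.
print(round(va_at_weight(105, 2100, 2550), 1))  # 95.3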
Maximum operating maneuvering speed VO
Some aircraft have a maximum operating maneuvering speed VO. Note that this is a different concept from design maneuvering speed. The concept of maximum operating maneuvering speed was introduced to the US type-certification standards for light aircraft in 1993. The maximum operating maneuvering speed is selected by the aircraft designer and cannot be more than Vs√n, where Vs is the stalling speed of the aircraft, and n is the maximal allowed positive load factor.
See also
V speeds
American Airlines Flight 587
References
External links
"Design airspeeds"
Airspeed
Aerodynamics | Maneuvering speed | Physics,Chemistry,Engineering | 497 |
39,339,697 | https://en.wikipedia.org/wiki/Iribarren%20number | In fluid dynamics, the Iribarren number or Iribarren parameter – also known as the surf similarity parameter and breaker parameter – is a dimensionless parameter used to model several effects of (breaking) surface gravity waves on beaches and coastal structures. The parameter is named after the Spanish engineer Ramón Iribarren Cavanilles (1900–1967), who introduced it to describe the occurrence of wave breaking on sloping beaches. The parameter is used to describe breaking wave types on beaches, as well as wave run-up on, and reflection by, beaches, breakwaters and dikes.
Iribarren's work was further developed by Jurjen Battjes in 1974, who named the parameter after Iribarren.
Definition
The Iribarren number – which is often denoted as Ir or ξ – is defined as:
ξ = tan α / √(H / L0),
with
L0 = g T² / (2π),
where ξ is the Iribarren number, α is the angle of the seaward slope of a structure, H is the wave height, L0 is the deep-water wavelength, T is the wave period and g is the gravitational acceleration. Depending on the application, different definitions of H and T are used, for example: for periodic waves the wave height H0 at deep water or the breaking wave height Hb at the edge of the surf zone. Or, for random waves, the significant wave height Hs at a certain location.
Breaker types
The type of breaking wave – spilling, plunging, collapsing or surging – depends on the Iribarren number. According to Battjes (1974), for periodic waves propagating on a plane beach, two possible choices for the Iribarren number are:
ξ0 = tan α / √(H0 / L0)
or
ξb = tan α / √(Hb / L0),
where H0 is the offshore wave height in deep water, and Hb is the value of the wave height at the break point (where the waves start to break). Then the breaker types' dependence on the Iribarren number (either ξ0 or ξb) is approximately:
surging or collapsing: ξ0 > 3.3 or ξb > 2.0,
plunging: 0.5 < ξ0 < 3.3 or 0.4 < ξb < 2.0,
spilling: ξ0 < 0.5 or ξb < 0.4.
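A short Python sketch computing ξ0 and applying the approximate deep-water thresholds above (the 0.5 and 3.3 cut-offs follow the commonly cited classification and should be treated as approximate):

# Iribarren number (deep-water form) and approximate breaker type.
from math import pi, sqrt, tan, radians

def iribarren(slope_deg, h, t, g=9.81):
    l0 = g * t**2 / (2 * pi)          # deep-water wavelength L0 = gT^2/(2*pi)
    return tan(radians(slope_deg)) / sqrt(h / l0)

def breaker_type(xi0):
    if xi0 < 0.5:
        return "spilling"
    if xi0 < 3.3:
        return "plunging"
    return "surging or collapsing"

xi = iribarren(slope_deg=5.7, h=1.0, t=8.0)   # gentle ~1:10 beach slope
print(round(xi, 2), breaker_type(xi))          # ~1.0 plunging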
References
Footnotes
Other
Water waves
Dimensionless numbers of fluid mechanics
Coastal engineering | Iribarren number | Physics,Chemistry,Engineering | 395 |
75,486,042 | https://en.wikipedia.org/wiki/Avenciguat | Avenciguat (development name BI 685509) is a soluble guanylate cyclase activator developed by Boehringer Ingelheim for kidney disease and cirrhosis.
References
Drugs developed by Boehringer Ingelheim
Experimental drugs
Pyrazoles
Pyridines
Tetrahydropyrans
Carboxylic acids
Tetrahydroisoquinolines
Ethoxy compounds | Avenciguat | Chemistry | 90 |
536,249 | https://en.wikipedia.org/wiki/Storm%20Prediction%20Center | The Storm Prediction Center (SPC) is a US government agency that is part of the National Centers for Environmental Prediction (NCEP), operating under the control of the National Weather Service (NWS), which in turn is part of the National Oceanic and Atmospheric Administration (NOAA) of the United States Department of Commerce (DoC).
Headquartered at the National Weather Center in Norman, Oklahoma, the Storm Prediction Center is tasked with forecasting the risk of severe thunderstorms and tornadoes in the contiguous United States. It issues convective outlooks, mesoscale discussions, and watches as a part of this process. Convective outlooks are issued for the following eight days (issued separately for Day 1, Day 2, Day 3, and Days 4–8), and detail the risk of severe thunderstorms and tornadoes during the given forecast period, although tornado, hail and wind details are only available for Days 1 and 2. Days 3–8 use a probabilistic scale, determining the probability for a severe weather event in percentage categories (15%/yellow and 30%/orange).
Mesoscale discussions are issued to provide information on certain individual regions where severe weather is becoming a threat and states whether a watch is likely and details thereof, particularly concerning conditions conducive for the development of severe thunderstorms in the short term, as well as situations of isolated severe weather when watches are not necessary. Watches are issued when forecasters are confident that severe weather will occur, and usually precede the onset of severe weather by one hour, although this sometimes varies depending on certain atmospheric conditions that may inhibit or accelerate convective development.
The agency is also responsible for forecasting fire weather (indicating conditions that are favorable for wildfires) in the contiguous U.S., issuing fire weather outlooks for Days 1, 2, and 3–8, which detail areas with various levels of risk for fire conditions (such as fire levels and fire alerts).
History
The Storm Prediction Center began in 1952 as SELS (the Severe Local Storms Unit) of the U.S. Weather Bureau in Washington, D.C. In 1954, the unit moved its forecast operations to Kansas City, Missouri. SELS began issuing convective outlooks for predicted thunderstorm activity in 1955, and began issuing radar summaries in three-hour intervals in 1960; with the increased duties of compiling and disseminating radar summaries, this unit became the National Severe Storms Forecast Center (NSSFC) in 1966, remaining headquartered in Kansas City.
In 1968, the National Severe Storms Forecast Center began issuing status reports on weather watches; the agency then made its first computerized data transmission in 1971. On April 2, 1982, the agency issued the first "Particularly Dangerous Situation" watch, which indicates the imminent threat of a major severe weather event over the watch's timespan. In 1986, the NSSFC introduced two new forecast products: the Day 2 Convective Outlook (which include probabilistic forecasts for outlined areas of thunderstorm risk for the following day) and the Mesoscale Discussion (a short-term forecast outlining specific areas under threat for severe thunderstorm development).
In October 1995, the National Severe Storms Forecast Center relocated its operations to Norman, Oklahoma, and was rechristened the Storm Prediction Center. At that time, the guidance center was housed at Max Westheimer Airport (now the University of Oklahoma Westheimer Airport), co-located in the same building as the National Severe Storms Laboratory and the local National Weather Service Weather Forecast Office (the latter of which, in addition to disseminating forecasts, oversees the issuance of weather warnings and advisories for the western two-thirds of Oklahoma and western portions of North Texas, and issues outline and status updates for SPC-issued severe thunderstorm and tornado watches that include areas served by the Norman office). In 1998, the center began issuing the National Fire Weather Outlook to provide forecasts for areas potentially susceptible to the development and spread of wildfires based on certain meteorological factors. The Day 3 Convective Outlook (which is similar in format to the Day 2 forecast) was first issued on an experimental basis in 2000, and was made an official product in 2001.
In 2006, the Storm Prediction Center, National Severe Storms Laboratory and National Weather Service Norman Forecast Office moved their respective operations into the newly constructed National Weather Center, near Westheimer Airport. Since the agency's relocation to Norman, the 557th Weather Wing at Offutt Air Force Base would assume control of issuing the Storm Prediction Center's severe weather products should the SPC be unable to issue them, whether because of an outage (such as a computer system failure or building-wide power disruption) or an emergency (such as an approaching strong tornadic circulation or tornado on the ground) affecting the Norman campus; on April 1, 2009, the SPC reassigned responsibilities for issuing the center's products in such situations to the 15th Operational Weather Squadron based out of Scott Air Force Base.
Brief history timeline
1948: Following Weather Bureau (WB) researchers' work on a 20 March tornado at Tinker AFB, two Air Force officers (Fawbush and Miller) successfully predict another one five days later, on 25 March, at the same base, and are given responsibility for AF tornado predictions.
1951: Severe Weather Warning Center (SWWC) established as an Air Weather Service unit, headed by Fawbush and Miller.
1952: WB establishes its own Weather Bureau-Army-Navy (WBAN) Analysis Center in Washington in March as a trial unit, made permanent on 21 May as the Weather Bureau Severe Weather Unit (SWU).
1953: SWU renamed Severe Local Storm (SELS) Warning Center on 17 June.
1954: SELS relocates from the WBAN Center in Washington to the WB's District Forecast Office (DFO) in downtown Kansas City in September.
1955: National Severe Storms Project (NSSP) formed as SELS' research component.
1958: SELS assumes authority for all public severe weather forecasts.
1962: Some from NSSP move to Norman's Weather Radar Laboratory to work with a new Weather Surveillance Radar-1957 (WSR-57).
1964: Remainder of NSSP moves to Norman and is reorganized as National Severe Storms Laboratory (NSSL).
1965: Environmental Science Services Administration (ESSA) formed, and entire WB office (SELS and DFO) in Kansas City renamed National Severe Storms Forecast Center (NSSFC).
1976: Techniques Development Unit (TDU) established in April to provide software development and evaluate forecast methods.
1995: NSSFC renamed Storm Prediction Center (SPC) in October.
1997: SPC moves from Kansas City to Norman.
2006: SPC moves a few miles south to the National Weather Center (NWC) on the University of Oklahoma Research Campus.
2023: Meteorologist Liz Leitman becomes the first woman at the SPC to issue a convective weather watch.
2024: On February 15, Leitman becomes the first woman meteorologist to issue a severe thunderstorm watch.
Overview
The Storm Prediction Center is responsible for forecasting the risk of severe weather caused by severe thunderstorms, specifically those producing tornadoes, hail 1 in (2.5 cm) in diameter or larger, and/or winds of 58 mph (50 kn) or greater. The agency also forecasts hazardous winter and fire weather conditions. It does so primarily by issuing convective outlooks, severe thunderstorm watches, tornado watches and mesoscale discussions.
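These criteria lend themselves to a simple mechanical check. Here is a minimal sketch in Python; the function and its thresholds are our own illustration following the figures above, and the 2 in/75 mph "significant severe" thresholds are the commonly cited ones, not quoted from this article:

```python
def classify_report(hail_in=0.0, wind_mph=0.0, tornado_ef=None):
    """Classify a storm report against SPC-style severe criteria.

    Severe: hail >= 1 in, winds >= 58 mph (50 kn), or any tornado.
    Significant severe (assumed thresholds): hail >= 2 in,
    winds >= 75 mph, or an EF2+ tornado.
    """
    if (hail_in >= 2.0 or wind_mph >= 75.0
            or (tornado_ef is not None and tornado_ef >= 2)):
        return "significant severe"
    if hail_in >= 1.0 or wind_mph >= 58.0 or tornado_ef is not None:
        return "severe"
    return "sub-severe"

print(classify_report(hail_in=1.25))   # severe
print(classify_report(wind_mph=80.0))  # significant severe
print(classify_report(tornado_ef=0))   # severe (any tornado qualifies)
```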
There is a three-stage process in which the area, time period, and details of a severe weather forecast are refined from a broad-scale forecast of potential hazards to a more specific and detailed forecast of what hazards are expected, and where and in what time frame they are expected to occur. If warranted, forecasts will also increase in severity through this three-stage process.
The Storm Prediction Center employs a total of 43 personnel, including five lead forecasters, ten mesoscale/outlook forecasters, and seven assistant mesoscale forecasters. Many SPC forecasters and support staff are heavily involved in scientific research into severe and hazardous weather. This involves conducting applied research and writing technical papers, developing training materials, giving seminars and other presentations locally and nationwide, attending scientific conferences, and participating in weather experiments.
Convective outlooks
The Storm Prediction Center issues convective outlooks (AC), consisting of categorical and probabilistic forecasts describing the general threat of severe convective storms over the contiguous United States for the next 6 to 192 hours (Day 1 through Day 8). These outlooks are labeled and issued by day; the Day 1 outlook is issued up to five times per day.
The categorical levels of risk are TSTM (for Thunder Storm: light green shaded area – rendered as a brown line prior to April 2011 – indicating a risk for general thunderstorms); "MRGL" (for Marginal: darker green shaded area, indicating a very low but present risk of severe weather); "SLGT" (for Slight: yellow shaded area – previously rendered as a green line – indicating a slight risk of severe weather); "ENH" (for Enhanced: orange shaded area, which replaced the upper end of the SLGT category on October 22, 2014); "MDT" (for Moderate: red shaded area – previously rendered as a red line – indicating a moderate risk of severe weather); and "HIGH" (pink shaded area – previously rendered as a fuchsia line – indicating a high risk of severe weather). Significant severe areas (referred to as "hatched areas" because of their representation on outlook maps) refer to a threat of increased storm intensity of "significant severe" levels (an F2/EF2 or stronger tornado, hail 2 in (5 cm) in diameter or larger, or winds of 65 kn (75 mph) or greater).
In April 2011, the SPC introduced a new graphical format for its categorical and probability outlooks, which included the shading of risk areas (with the colors corresponding to each category, as mentioned above, being changed as well) and population, county/parish/borough and interstate overlays. The new shaded maps also incorporated a revised color palette for the shaded probability categories in each outlook.
In 2013, the SPC incorporated a small table under the Convective Outlook's risk category map that indicates the total coverage area by square miles, the total estimated population affected and major cities included within a severe weather risk area.
Public severe weather outlooks (PWO) are issued when a significant or widespread outbreak is expected, especially for tornadoes. From November to March, one can also be issued for any threat of significant tornadoes in the nighttime hours, noting the lower awareness and greater danger of tornadoes at that time of year.
Categories
A marginal risk day indicates storms of only limited organization, longevity, coverage and/or intensity, typically isolated severe or near-severe storms with limited wind damage, large hail and possibly a low tornado risk. Marginally severe wind gusts and hailstones of around 1 in (2.5 cm) in diameter are common storm threats within a marginal risk; given sufficient wind shear, a tornado – usually of weak (EF0 to EF1) intensity and short duration – may be possible. This category replaced the "SEE TEXT" category on October 22, 2014.
A slight risk day typically indicates that the threat exists for scattered severe weather, including scattered wind damage (produced by straight-line sustained winds and/or gusts of 60 to 70 mph), scattered severe hail of varying size and/or isolated tornadoes (often of shorter duration and of weak to moderate intensity, depending on the available wind shear and other atmospheric parameters). During the peak severe weather season, most days will have a slight risk somewhere in the United States. Isolated significant severe events are possible in some circumstances, but are generally not widespread.
An enhanced risk day indicates that there is a greater threat for severe weather than would be indicated by a slight risk, but that conditions are not adequate for the development of the widespread significant severe weather that would necessitate a moderate category, with more numerous areas of wind damage (often with damaging gusts), along with severe hail (occasionally very large) and several tornadoes (in some setups, isolated strong tornadoes are possible). Severe storms are expected to be more concentrated and of varying intensities. These days are quite frequent in the peak severe weather season and occur occasionally at other times of year. This risk category replaced the upper end of "slight" on October 22, 2014, although a few situations that previously warranted a moderate risk were reclassified as enhanced (i.e. 45% wind or 15% tornado with no significant area).
A moderate risk day indicates that more widespread and/or more dangerous severe weather is possible, with significant severe weather often more likely. Numerous tornadoes (some of which may be strong and potentially long-track), more widespread or severe wind damage (often with gusts over 75 mph (120 km/h)) and/or very large, destructive hail (up to or exceeding 2 in (5 cm) in diameter) could occur. Major events, such as large tornado outbreaks or widespread straight-line wind events, are sometimes also possible on moderate risk days, but with greater uncertainty. Moderate risk days are not terribly uncommon, and typically occur several times a month during the peak of the severe weather season, and occasionally at other times of the year. Slight and enhanced risk areas typically surround areas under a moderate risk, where the threat is lower.
A high risk day indicates a considerable likelihood of significant to extreme severe weather, generally a major tornado outbreak or (much less often) an extreme derecho event. On these days, the potential exists for extremely severe and life-threatening weather: a large number of tornadoes, many of which may be strong to violent and on the ground for a half-hour or longer, or widespread and very destructive straight-line winds, likely of hurricane force (74 mph (119 km/h)) or greater. Hail cannot verify or produce a high risk on its own, although such a day usually involves a threat of widespread very large and damaging hail as well. Many of the most prolific severe weather days were high risk days. Such days are rare; a high risk is typically issued (at most) only a few times each year (see List of Storm Prediction Center high risk days). High risk areas are usually surrounded by a larger moderate risk area, where uncertainty is greater or the threat is somewhat lower.
The Storm Prediction Center began asking for public comment on proposed categorical additions to the Day 1-3 Convective Outlooks on April 21, 2014, for a two-month period. The Storm Prediction Center broadened this system beginning on October 22, 2014 by adding two new risk categories to the three used originally. The new categories that were added are a "marginal risk" (replacing the "SEE TEXT" contours, see below) and an "enhanced risk". The latter is used to delineate areas where severe weather will occur that would fall under the previous probability criteria of an upper-end slight risk, but do not warrant the issuance of a moderate risk. In order from least to greatest threat, these categories are ranked as: marginal, slight, enhanced, moderate, and high.
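Because the five categories form a strict ordering, they are naturally modeled as an ordered type. A brief sketch follows; the enum and its numeric ranks are our own convention for illustration, not an SPC data format:

```python
from enum import IntEnum

class Risk(IntEnum):
    """SPC convective risk categories, ranked least to greatest threat."""
    MARGINAL = 1
    SLIGHT = 2
    ENHANCED = 3
    MODERATE = 4
    HIGH = 5

today, tomorrow = Risk.SLIGHT, Risk.ENHANCED
print(tomorrow > today)   # True: the outlook was upgraded one category
print(max(Risk).name)     # HIGH
```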
Issuance and usage
Convective outlooks are issued by the Storm Prediction Center in Zulu time (also known as Coordinated Universal Time or UTC).
The categories described above refer to the risk levels for the specific severe weather event occurring within 25 mi (40 km) of any point in the delineated region, as described in the previous section. The Day 1 Convective Outlook, issued five times per day at 0600Z (valid from 1200Z of the current day until 1200Z the following day), 1300Z and 1630Z (the "morning updates", valid until 1200Z the following day), 2000Z (the "afternoon update", valid until 1200Z the following day), and 0100Z (the "evening update", valid until 1200Z the following day), provides a textual forecast, a map of categories and probabilities, and a chart of probabilities. Prior to January 28, 2020, the Day 1 outlook was the only one to include specific probabilities for tornadoes, hail or wind. It is the most descriptive and most accurate outlook, and typically has the highest probability levels.
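Because all issuance and valid times are expressed in Zulu time, converting them to a local zone is a common first step for users. A minimal sketch with the Python standard library (the date chosen is arbitrary):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The 1630Z Day 1 update on an arbitrary date, expressed in UTC...
issued = datetime(2024, 5, 6, 16, 30, tzinfo=timezone.utc)

# ...converted to U.S. Central time, the zone of Norman, Oklahoma.
local = issued.astimezone(ZoneInfo("America/Chicago"))
print(local.strftime("%H:%M %Z"))  # 11:30 CDT during daylight saving time
```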
Day 2 outlooks, issued twice daily at 0600Z during daylight saving time or 0700Z during standard time and at 1730Z, cover the predicted risks of convective weather for the following day (1200Z to 1200Z of the next calendar day; for example, a Day 2 outlook issued on April 12 is valid from 1200Z on April 13 through 1200Z on April 14) and include only a categorical outline, a textual description, and a map of categories and probabilities. Day 2 moderate risks are fairly uncommon, and a Day 2 high risk has only been issued twice (for April 7, 2006 and for April 14, 2012). Probabilities for tornadoes, hail and wind, previously exclusive to the Day 1 Convective Outlook, were incorporated into the Day 2 Convective Outlook on January 28, 2020, citing research transitioned into SPC operations and improvements in numerical forecast guidance that have increased forecaster confidence in risk estimation for those hazards in that timeframe. The individual hazard probabilistic forecasts replaced the existing "total severe" probability graph for general severe convective storms that had been used for the Day 2 outlook beforehand.
Day 3 outlooks cover the day after tomorrow and, since August 13, 2024, are issued twice daily, at 0730Z during daylight saving time or 0830Z during standard time and at 1930Z; they include the same products (categorical outline, text description, and probability graph) as the Day 2 outlook. As of June 2012, the SPC also forecasts general thunderstorm risk areas in this outlook. Higher probability forecasts become less likely as the forecast period increases, owing to diminishing forecast skill farther in advance. Day 3 moderate risks are quite rare; these have been issued only twenty times since the product became operational (most recently for March 22, 2022). Day 3 high risks are never issued, and the operational standards do not allow for them; this is most likely because a high risk would require both a very high degree of certainty (60%) for an event still at least 48 hours away and a reasonable level of confidence that the severe thunderstorm outbreak would include significant severe weather (EF2+ tornadoes, hurricane-force winds, and/or egg-sized hail).
Day 4–8 outlooks are the longest-term official SPC forecast product, and often change significantly from day to day. This extended forecast for severe weather was an experimental product until March 22, 2007, when the Storm Prediction Center incorporated it as an official product. Areas are delineated in this forecast that have at least a 15% or 30% chance of severe weather in the Day 4–8 period (equivalent to a slight risk and an enhanced risk, respectively); as forecaster confidence in how severe weather will evolve more than three days out is limited, the Day 4–8 outlook only outlines the areas in which severe thunderstorms are forecast to occur during the period at the 15% and 30% likelihoods, and does not utilize other categorical risk areas or outline where general (non-severe) thunderstorm activity will occur.
Local forecast offices of the National Weather Service, radio and television stations, and emergency planners often use the forecasts to gauge the potential severe weather threats to their areas. Even after the marginal and enhanced risk categories were added in October 2014, some television stations have continued to use the original three-category system to outline forecasted severe weather risks (though stations that do this may utilize in-house severe weather outlooks that vary to some degree from the SPC convective outlooks), while certain others that have switched to the current system have chosen not to outline marginal risk areas.
Generally, the convective outlook boundaries or lines – general thunderstorms (light green), marginal (dark green), slight (yellow), enhanced (orange), moderate (red) and high (purple) – will be continued as an arrow or line not filled with color if the risk area enters another country (Canada or Mexico) or across waters beyond the United States coastline. This indicates that the risk for severe weather is also valid in that general area of the other side of the border or oceanic boundary.
Mesoscale discussions
SPC mesoscale discussions (MDs) once covered convection (mesoscale convective discussions [MCDs]) and precipitation (mesoscale precipitation discussions [MPDs]); MPDs are now issued by the Weather Prediction Center (WPC). MCDs generally precede the issuance of a tornado or severe thunderstorm watch, by one to three hours when possible. Mesoscale discussions are designed to give local forecasters an update on a region where a severe weather threat is emerging and an indication of whether a watch is likely and details thereof, as well as situations of isolated severe weather when watches are not necessary. MCDs contain meteorological information on what is happening and what is expected to occur in the next few hours, and forecast reasoning in regard to weather watches. Mesoscale discussions are often issued to update information on watches already in effect, and sometimes when one is to be canceled. Mesoscale discussions are occasionally used as advance notice of a categorical upgrade of a scheduled convective outlook.
Meso-gamma mesoscale discussion
SPC mesoscale discussions issued for a high-impact, high-confidence threat of strong tornadoes (EF2+) or exceptionally damaging winds are called meso-gamma mesoscale discussions. Meso-gamma mesoscale discussions are rarely issued by the SPC; to date, the Storm Prediction Center has issued 42 of them.
Weather watches
Watches (WWs) issued by the SPC typically cover roughly 25,000 sq mi (65,000 km2) and are normally preceded by a mesoscale discussion. Watches are intended to be issued preceding the arrival of severe weather by one to six hours. They indicate that conditions are favorable for thunderstorms capable of producing various modes of severe weather, including large hail, damaging straight-line winds and/or tornadoes. In the case of severe thunderstorm watches, organized severe thunderstorms are expected but conditions are not thought to be especially favorable for tornadoes (although they can occur where such a watch is in effect, and some severe thunderstorm watch statements issued by the SPC may note a threat of isolated tornadic activity if conditions are modestly favorable for storm rotation capable of inducing them), whereas for tornado watches conditions are thought to be favorable for severe thunderstorms to produce tornadoes.
In situations where a forecaster expects a significant threat of extremely severe and life-threatening weather, a watch with special enhanced wording, "Particularly Dangerous Situation" (PDS), is subjectively issued. It is occasionally issued with tornado watches, normally for the potential of major tornado outbreaks, especially those with a significant threat of multiple tornadoes capable of producing F4/EF4 and F5/EF5 damage and/or staying on the ground for long-duration – sometimes uninterrupted – paths. A PDS severe thunderstorm watch is very rare and is typically reserved for derecho events impacting densely populated areas.
Watches are not "warnings", where there is an immediate severe weather threat to life and property. Although severe thunderstorm and tornado warnings are ideally the next step after watches, watches cover a threat of organized severe thunderstorms over a larger area and may not always precede a warning; watch "busts" do sometimes occur should thunderstorm activity not occur at all or that which does develop never reaches the originally forecast level of severity. Warnings are issued by local National Weather Service offices, not the Storm Prediction Center, which is a national guidance center.
The process of issuing a convective watch begins with a conference call from SPC to local NWS offices. If after collaboration a watch is deemed necessary, the Storm Prediction Center will issue a watch approximation product which is followed by the local NWS office issuing a specific county-based watch product. The latter product is responsible for triggering public alert messages via television, radio stations and NOAA Weather Radio. The watch approximation product outlines specific regions covered by the watch (including the approximate outlined area in statute miles) and its time of expiration (based on the local time zone(s) of the areas under the watch), associated potential threats, a meteorological synopsis of atmospheric conditions favorable for severe thunderstorm development, forecasted aviation conditions, and a pre-determined message informing the public of the meaning behind the watch and to be vigilant of any warnings or weather statements that may be issued by their local National Weather Service office.
Watch outline products provide a visual map depiction of the issued watch; the SPC typically delineates watches within this product in the form of "boxes," which technically are represented as either squares, rectangles (horizontal or vertical) or parallelograms depending on the area it covers. Jurisdictions outlined by the county-based watch product as being included in the watch area may differ from the actual watch box; as such, certain counties, parishes or boroughs not covered by the fringes of the watch box may actually be included in the watch and vice versa. Watches can be expanded, contracted (by removing jurisdictions where SPC and NWS forecasters no longer consider there to be a viable threat of severe weather, in which case, the watch box may take on a trapezoidal representation in map-based watch products) or canceled before their set time of expiration by local NWS offices.
Fire weather products
The Storm Prediction Center also is responsible for issuing fire weather outlooks (FWD) for the continental United States. These outlooks are a guidance product for local, state and federal government agencies, including local National Weather Service offices, in forecasting the potential for wildfires. The outlooks issued are for Day 1, Day 2, and Days 3–8. The Day 1 product is issued at 4:00 a.m. Central Time and is updated at 1700Z, and is valid from 1200Z to 1200Z the following day. The Day 2 outlook is issued at 1000Z and is updated at 2000Z for the forecast period of 1200Z to 1200Z the following day. The Day 3–8 outlook is issued at 2200Z, and is valid from 1200Z two days after the current calendar date to 1200Z seven days after the current calendar date.
There are four types of Fire Weather Outlook areas: "See Text", a "Critical Fire Weather Area for Wind and Relative Humidity", an "Extremely Critical Fire Weather Area for Wind and Relative Humidity", and a "Critical Fire Weather Area for Dry Thunderstorms". The outlook type depends on the forecast weather conditions, the severity of the predicted threat, and the local climatology of a forecast region. "See Text" is a map label used for outlining areas where fire potential is great enough to pose a limited threat, but not enough to warrant a critical area, similar to areas using the same notation that were formerly outlined in convective outlooks. Critical Fire Weather Areas for Wind and Relative Humidity are typically issued when strong sustained winds and low relative humidity (usually < 20%) are expected to occur where dried fuels exist, similar to a slight, enhanced, or moderate risk of severe weather. Critical Fire Weather Areas for Dry Thunderstorms are typically issued when widespread or numerous thunderstorms producing too little rainfall to wet the ground sufficiently are expected to occur where dried fuels exist. Extremely Critical Fire Weather Areas for Wind and Relative Humidity are issued when very strong winds and very low humidity are expected to occur with very dry fuels. Extremely Critical areas are issued relatively rarely, similar to the very low frequency of high risk areas in convective outlooks (see List of Storm Prediction Center extremely critical days).
See also
National Weather Service Norman, Oklahoma – the Weather Forecast Office located adjacent to the Storm Prediction Center within the National Weather Center, which serves central and western Oklahoma and northwestern Texas
Severe weather terminology (United States)
Chris Broyles, a forecaster at the Storm Prediction Center
References
External links
SPC products descriptions
Norman, Oklahoma
History of Kansas City, Missouri
National Centers for Environmental Prediction
Weather prediction
1995 establishments in Oklahoma | Storm Prediction Center | Physics | 5,858 |
8,290,478 | https://en.wikipedia.org/wiki/Cyclohexylamine | Cyclohexylamine is an organic compound, belonging to the aliphatic amine class. It is a colorless liquid, although, like many amines, samples are often colored due to contaminants. It has a fishy odor and is miscible with water. Like other amines, it is a weak base, compared to strong bases such as NaOH, but it is a stronger base than its aromatic analog, aniline.
It is a useful intermediate in the production of many other organic compounds (e.g. cyclamate).
Preparation
Cyclohexylamine is produced by two routes, the main one being the complete hydrogenation of aniline using cobalt- or nickel-based catalysts:
C6H5NH2 + 3 H2 → C6H11NH2
It is also prepared by alkylation of ammonia using cyclohexanol.
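As an illustration of the stoichiometry of the hydrogenation route, here is a back-of-the-envelope mass balance; this is a sketch assuming complete conversion, with molar masses taken from standard atomic weights:

```python
# Molar masses in g/mol, from standard atomic weights.
M_ANILINE = 93.13           # C6H5NH2
M_CYCLOHEXYLAMINE = 99.17   # C6H11NH2
M_H2 = 2.016

aniline_kg = 100.0
moles = aniline_kg * 1000 / M_ANILINE   # ~1074 mol of aniline

# Each mole of aniline consumes 3 moles of H2.
h2_kg = moles * 3 * M_H2 / 1000
product_kg = moles * M_CYCLOHEXYLAMINE / 1000

print(f"H2 required:       {h2_kg:.1f} kg")       # ~6.5 kg
print(f"theoretical yield: {product_kg:.1f} kg")  # ~106.5 kg
```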
Applications
Cyclohexylamine is used as an intermediate in synthesis of other organic compounds. It is the precursor to sulfenamide-based reagents used as accelerators for vulcanization. The amine itself is an effective corrosion inhibitor. It has been used as a flushing aid in the printing ink industry.
Drugs List
It is a building block for pharmaceuticals (e.g., mucolytics, analgesics, and bronchodilators). Most of the drugs in the following list fall into the arena of sulfonamide hypoglycemics though:
Acetohexamide
Amesergide
Bromhexine
Brovanexine
CGP-11112 (not actually made from cyclohexylamine (CyNH2), but containing a cyclohexyl–nitrogen (CyN) unit).
Cilostazol
Clorexolone
Cyclamate
Enpromate
Esaprazole
Glibenclamide
Glicaramide
Gliquidone
Glipizide
Glisindamide
Glisolamide
Glycyclamide
Glyhexamide
Hexazinone
Hexylcaine
Hydroxyhexamide
Lomustine
Metahexamide
Timegadine
Thiohexamide
U-37883A
Toxicity
Cyclohexylamine has a low acute toxicity, with LD50 (rat; p.o.) = 0.71 mL/kg. Like other amines, it is corrosive.
Cyclohexylamine is listed as an extremely hazardous substance as defined by Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act. The National Institute for Occupational Safety and Health has set a recommended exposure limit of 10 ppm (40 mg/m3) averaged over an eight-hour workshift.
References
Amines
Cyclohexyl compounds | Cyclohexylamine | Chemistry | 568 |
31,376,949 | https://en.wikipedia.org/wiki/Greek%20Atomic%20Energy%20Commission | The Greek Atomic Energy Commission (EEAE) is an independent government agency of Greece which is responsible for atomic safety, development and regulation, and for monitoring artificially produced ionizing and non-ionizing radiation. The seven-member board of directors operates under the supervision of the Ministry of Development through the General Secretariat of Research and Technology.
The EEAE was established by act of legislation in 1954. Among other notable Greek scientists, Leonidas Zervas has served twice as president of the commission (1964–1965 & 1974–1975).
References
External links
Official site
Nuclear organizations
Nuclear technology in Greece
Politics of Greece
Independent government agencies of Greece
1954 establishments in Greece | Greek Atomic Energy Commission | Engineering | 132 |
24,639,265 | https://en.wikipedia.org/wiki/Six-dimensional%20space | Six-dimensional space is any space that has six dimensions, six degrees of freedom, and that needs six pieces of data, or coordinates, to specify a location in this space. There are infinitely many of these, but those of most interest are simpler ones that model some aspect of the environment. Of particular interest is six-dimensional Euclidean space, in which 6-polytopes and the 5-sphere are constructed. Six-dimensional elliptical and hyperbolic spaces are also studied, with constant positive and negative curvature respectively.
Formally, six-dimensional Euclidean space, , is generated by considering all real 6-tuples as 6-vectors in this space. As such it has the properties of all Euclidean spaces, so it is linear, has a metric and a full set of vector operations. In particular the dot product between two 6-vectors is readily defined and can be used to calculate the metric. 6 × 6 matrices can be used to describe transformations such as rotations that keep the origin fixed.
More generally, any space that can be described locally with six coordinates, not necessarily Euclidean ones, is six-dimensional. One example is the surface of the 6-sphere, S6. This is the set of all points in seven-dimensional Euclidean space that are a fixed distance from the origin. This constraint reduces the number of coordinates needed to describe a point on the 6-sphere by one, so it has six dimensions. Such non-Euclidean spaces are far more common than Euclidean spaces, and in six dimensions they have far more applications.
Geometry
6-polytope
A polytope in six dimensions is called a 6-polytope. The most studied are the regular polytopes, of which there are only three in six dimensions: the 6-simplex, 6-cube, and 6-orthoplex. A wider family are the uniform 6-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter–Dynkin diagram. The 6-demicube is a unique polytope from the D6 family, and the 221 and 122 polytopes come from the E6 family.
5-sphere
The 5-sphere, or hypersphere in six dimensions, is the five-dimensional surface equidistant from a point. It has symbol S5, and the equation for the 5-sphere, radius r, centre the origin, is

$x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2 = r^2.$

The volume of six-dimensional space bounded by this 5-sphere is

$V_6 = \frac{\pi^3}{6} r^6,$

which is approximately $5.16771\,r^6$, or 0.0807 of the smallest 6-cube that contains the 5-sphere.
6-sphere
The 6-sphere, or hypersphere in seven dimensions, is the six-dimensional surface equidistant from a point. It has symbol S6, and the equation for the 6-sphere, radius r, centre the origin, is

$x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2 + x_7^2 = r^2.$

The volume of the space bounded by this 6-sphere is

$V_7 = \frac{16\pi^3}{105} r^7,$

which is approximately $4.72477\,r^7$, or 0.0369 of the smallest 7-cube that contains the 6-sphere.
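Both coefficients follow from the general formula for the volume of an n-ball, $V_n(r) = \pi^{n/2} r^n / \Gamma(n/2 + 1)$; a quick numerical check in Python:

```python
from math import gamma, pi

def unit_ball_volume(n):
    """Volume of the unit n-ball, i.e. the coefficient of r**n."""
    return pi ** (n / 2) / gamma(n / 2 + 1)

print(unit_ball_volume(6))          # 5.16771... (bounded by the 5-sphere)
print(unit_ball_volume(7))          # 4.72477... (bounded by the 6-sphere)
print(unit_ball_volume(6) / 2**6)   # 0.0807... of the circumscribing 6-cube
print(unit_ball_volume(7) / 2**7)   # 0.0369... of the circumscribing 7-cube
```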
Applications
Transformations in three dimensions
In three dimensional space a rigid transformation has six degrees of freedom, three translations along the three coordinate axes and three from the rotation group SO(3). Often these transformations are handled separately as they have very different geometrical structures, but there are ways of dealing with them that treat them as a single six-dimensional object.
Screw theory
In screw theory angular and linear velocity are combined into one six-dimensional object, called a twist. A similar object called a wrench combines forces and torques in six dimensions. These can be treated as six-dimensional vectors that transform linearly when changing frame of reference. Translations and rotations cannot be done this way, but are related to a twist by exponentiation.
Phase space
Phase space is a space made up of the position and momentum of a particle, which can be plotted together in a phase diagram to highlight the relationship between the quantities. A general particle moving in three dimensions has a phase space with six dimensions, too many to plot but they can be analysed mathematically.
Rotations in four dimensions
The rotation group in four dimensions, SO(4), has six degrees of freedom. This can be seen by considering the 4 × 4 matrix that represents a rotation: as it is an orthogonal matrix the matrix is determined, up to a change in sign, by e.g. the six elements above the main diagonal. But this group is not linear, and it has a more complex structure than other applications seen so far.
Another way of looking at this group is with quaternion multiplication. Every rotation in four dimensions can be achieved by multiplying by a pair of unit quaternions, one before and one after the vector. These quaternions are unique, up to a change in sign for both of them, and generate all rotations when used this way, so the product of their groups, S3 × S3, is a double cover of SO(4), which must have six dimensions.
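A minimal sketch of this construction in Python: a point of R4 is read as a quaternion and multiplied by one unit quaternion on the left and another on the right; the 4D norm is preserved (the helper names are our own):

```python
from math import cos, sin, pi, isclose

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

# Two unit quaternions, and a point of R^4 read as a quaternion.
left = (cos(pi/4), sin(pi/4), 0.0, 0.0)
right = (cos(pi/6), 0.0, sin(pi/6), 0.0)
v = (1.0, 2.0, 3.0, 4.0)

rotated = qmul(qmul(left, v), right)

def norm_sq(q):
    return sum(c * c for c in q)

assert isclose(norm_sq(rotated), norm_sq(v))  # rotations preserve length
print(rotated)
```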
Although the space we live in is considered three-dimensional, there are practical applications for four-dimensional space. Quaternions, one of the ways to describe rotations in three dimensions, consist of a four-dimensional space. Rotations between quaternions, for interpolation, for example, take place in four dimensions. Spacetime, which has three space dimensions and one time dimension is also four-dimensional, though with a different structure to Euclidean space.
Electromagnetism
In electromagnetism, the electromagnetic field is generally thought of as being made of two things, the electric field and the magnetic field. They are both three-dimensional vector fields, related to each other by Maxwell's equations. A second approach is to combine them in a single object, the six-dimensional electromagnetic tensor, a tensor- or bivector-valued representation of the electromagnetic field. Using this, Maxwell's equations can be condensed from four equations into a particularly compact single equation:

$\partial \mathbf{F} = \mathbf{J},$

where $\mathbf{F}$ is the bivector form of the electromagnetic tensor, $\mathbf{J}$ is the four-current and $\partial$ is a suitable differential operator.
String theory
In physics string theory is an attempt to describe general relativity and quantum mechanics with a single mathematical model. Although it is an attempt to model our universe it takes place in a space with more dimensions than the four of spacetime that we are familiar with. In particular a number of string theories take place in a ten-dimensional space, adding an extra six dimensions. These extra dimensions are required by the theory, but as they cannot be observed are thought to be quite different, perhaps compactified to form a six-dimensional space with a particular geometry too small to be observable.
Since 1997 another string theory has come to light that works in six dimensions. Little string theories are non-gravitational string theories in five and six dimensions that arise when considering limits of ten-dimensional string theory.
Theoretical background
Bivectors in four dimensions
A number of the above applications can be related to each other algebraically by considering the real, six-dimensional bivectors in four dimensions. These can be written $\Lambda^2\mathbb{R}^4$ for the set of bivectors in Euclidean space or $\Lambda^2\mathbb{R}^{3,1}$ for the set of bivectors in spacetime. The Plücker coordinates are bivectors in $\Lambda^2\mathbb{R}^4$ while the electromagnetic tensor discussed in the previous section is a bivector in $\Lambda^2\mathbb{R}^{3,1}$. Bivectors can be used to generate rotations in either $\mathbb{R}^4$ or $\mathbb{R}^{3,1}$ through the exponential map (e.g. applying the exponential map of all bivectors in $\Lambda^2\mathbb{R}^4$ generates all rotations in $\mathbb{R}^4$). They can also be related to general transformations in three dimensions through homogeneous coordinates, which can be thought of as modified rotations in four dimensions.
The bivectors arise from sums of all possible wedge products between pairs of 4-vectors. They therefore have $\binom{4}{2} = 6$ components, and can be written most generally as

$\mathbf{B} = B_{12}\,\mathbf{e}_{12} + B_{13}\,\mathbf{e}_{13} + B_{14}\,\mathbf{e}_{14} + B_{23}\,\mathbf{e}_{23} + B_{24}\,\mathbf{e}_{24} + B_{34}\,\mathbf{e}_{34}.$
They are the first bivectors that cannot all be generated by products of pairs of vectors. Those that can are simple bivectors, and the rotations they generate are simple rotations. Other rotations in four dimensions are double and isoclinic rotations and correspond to non-simple bivectors that cannot be generated by a single wedge product.
6-vectors
6-vectors are simply the vectors of six-dimensional Euclidean space. Like other such vectors they are linear, and can be added, subtracted and scaled as in other dimensions. Rather than use letters of the alphabet, higher dimensions usually use suffixes to designate dimensions, so a general six-dimensional vector can be written $\mathbf{a} = (a_1, a_2, a_3, a_4, a_5, a_6)$. Written like this, the six basis vectors are $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$, $\mathbf{e}_4$, $\mathbf{e}_5$ and $\mathbf{e}_6$.
Of the vector operators, the cross product cannot be used in six dimensions; instead, the wedge product of two 6-vectors results in a bivector with 15 dimensions. The dot product of two vectors is

$\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 + a_4 b_4 + a_5 b_5 + a_6 b_6.$

It can be used to find the angle between two vectors and the norm,

$|\mathbf{a}| = \sqrt{\mathbf{a} \cdot \mathbf{a}} = \sqrt{a_1^2 + a_2^2 + a_3^2 + a_4^2 + a_5^2 + a_6^2}.$

This can be used for example to calculate the diagonal of a 6-cube; with one corner at the origin, edges aligned to the axes and side length 1, the opposite corner could be at $(1, 1, 1, 1, 1, 1)$, the norm of which is

$\sqrt{1^2 + 1^2 + 1^2 + 1^2 + 1^2 + 1^2} = \sqrt{6} \approx 2.4495,$

which is the length of the vector and so of the diagonal of the 6-cube.
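The same calculation in a few lines of Python, confirming the diagonal:

```python
from math import sqrt

def dot(a, b):
    """Dot product of two 6-vectors."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return sqrt(dot(a, a))

corner = (1, 1, 1, 1, 1, 1)  # corner of the unit 6-cube opposite the origin
print(norm(corner))          # 2.4494897... = sqrt(6)
```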
Gibbs bivectors
In 1901 J.W. Gibbs published a work on vectors that included a six-dimensional quantity he called a bivector. It consisted of two three-dimensional vectors in a single object, which he used to describe ellipses in three dimensions. It has fallen out of use as other techniques have been developed, and the name bivector is now more closely associated with geometric algebra.
Footnotes
References
Dimension
dimensional space | Six-dimensional space | Physics | 1,977 |
22,208 | https://en.wikipedia.org/wiki/Organic%20chemistry | Organic chemistry is a subdiscipline within chemistry involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms. Study of structure determines their structural formula. Study of properties includes physical and chemical properties, and evaluation of chemical reactivity to understand their behavior. The study of organic reactions includes the chemical synthesis of natural products, drugs, and polymers, and study of individual organic molecules in the laboratory and via theoretical (in silico) study.
The range of chemicals studied in organic chemistry includes hydrocarbons (compounds containing only carbon and hydrogen) as well as compounds based on carbon, but also containing other elements, especially oxygen, nitrogen, sulfur, phosphorus (included in many biochemicals) and the halogens. Organometallic chemistry is the study of compounds containing carbon–metal bonds.
Organic compounds form the basis of all earthly life and constitute the majority of known chemicals. The bonding patterns of carbon, with its valence of four—formal single, double, and triple bonds, plus structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They form the basis of, or are constituents of, many commercial products including pharmaceuticals; petrochemicals and agrichemicals, and products made from them including lubricants, solvents; plastics; fuels and explosives. The study of organic chemistry overlaps organometallic chemistry and biochemistry, but also with medicinal chemistry, polymer chemistry, and materials science.
Educational aspects
Organic chemistry is typically taught at the college or university level. It is considered a very challenging course, though considerable effort has been made to render it accessible to students.
History
Before the 18th century, chemists generally believed that compounds obtained from living organisms were endowed with a vital force that distinguished them from inorganic compounds; according to this concept of vitalism (vital force theory), organic matter could not be created from inorganic matter. During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis. He separated the acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats (which traditionally come from organic sources), producing new compounds, without "vital force". In 1828 Friedrich Wöhler produced the organic chemical urea (carbamide), a constituent of urine, from inorganic starting materials (the salts potassium cyanate and ammonium sulfate), in what is now called the Wöhler synthesis. Although Wöhler himself was cautious about claiming he had disproved vitalism, this was the first time a substance thought to be organic was synthesized in the laboratory without biological (organic) starting materials. The event is now generally accepted as indeed disproving the doctrine of vitalism.
After Wöhler, Justus von Liebig worked on the organization of organic chemistry, being considered one of its principal founders.
In 1856, William Henry Perkin, while trying to manufacture quinine, accidentally produced the organic dye now known as Perkin's mauve. His discovery, made widely known through its financial success, greatly increased interest in organic chemistry.
A crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently in 1858 by both Friedrich August Kekulé and Archibald Scott Couper. Both researchers suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, and that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions.
The era of the pharmaceutical industry began in the last decade of the 19th century when the German company, Bayer, first manufactured acetylsalicylic acid—more commonly known as aspirin. By 1910 Paul Ehrlich and his laboratory group began developing arsenic-based arsphenamine, (Salvarsan), as the first effective medicinal treatment of syphilis, and thereby initiated the medical practice of chemotherapy. Ehrlich popularized the concepts of "magic bullet" drugs and of systematically improving drug therapies. His laboratory made decisive contributions to developing antiserum for diphtheria and standardizing therapeutic serums.
Early examples of organic reactions and applications were often found because of a combination of luck and preparation for unexpected observations. The latter half of the 19th century however witnessed systematic studies of organic compounds. The development of synthetic indigo is illustrative. The production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914 thanks to the synthetic methods developed by Adolf von Baeyer. In 2002, 17,000 tons of synthetic indigo were produced from petrochemicals.
In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules, and petroleum was shown to be of biological origin.
The multiple-step synthesis of complex organic compounds is called total synthesis. Total synthesis of complex natural compounds increased in complexity to glucose and terpineol. For example, cholesterol-related compounds have opened ways to synthesize complex human hormones and their modified derivatives. Since the start of the 20th century, complexity of total syntheses has been increased to include molecules of high complexity such as lysergic acid and vitamin B12.
The discovery of petroleum and the development of the petrochemical industry spurred the development of organic chemistry. Converting individual petroleum compounds into types of compounds by various chemical processes led to organic reactions enabling a broad range of industrial and commercial products including, among (many) others: plastics, synthetic rubber, organic adhesives, and various property-modifying petroleum additives and catalysts.
The majority of chemical compounds occurring in biological organisms are carbon compounds, so the association between organic chemistry and biochemistry is so close that biochemistry might be regarded as in essence a branch of organic chemistry. Although the history of biochemistry might be taken to span some four centuries, fundamental understanding of the field only began to develop in the late 19th century, and the actual term biochemistry was coined around the start of the 20th century. Research in the field increased throughout the twentieth century, with no indication of slackening in the rate of increase, as may be verified by inspection of abstraction and indexing services such as BIOSIS Previews and Biological Abstracts, which began in the 1920s as a single annual volume but had grown so drastically that by the end of the 20th century it was only available to the everyday user as an online electronic database.
Characterization
Since organic compounds often exist as mixtures, a variety of techniques have also been developed to assess purity; chromatography techniques are especially important for this application, and include HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, evaporation, magnetic separation and solvent extraction.
Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods", but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis. Listed in approximate order of utility, the chief analytical methods are:
Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting the complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry – hydrogen and carbon – exist naturally with NMR-responsive isotopes, respectively 1H and 13C.
Elemental analysis: A destructive method used to determine the elemental composition of a molecule. See also mass spectrometry, below.
Mass spectrometry indicates the molecular weight of a compound and, from the fragmentation patterns, its structure. High-resolution mass spectrometry can usually identify the exact formula of a compound and is used in place of elemental analysis. In former times, mass spectrometry was restricted to neutral molecules exhibiting some volatility, but advanced ionization techniques allow one to obtain the "mass spec" of virtually any organic compound.
Crystallography can be useful for determining molecular geometry when a single crystal of the material is available. Highly efficient hardware and software allows a structure to be determined within hours of obtaining a suitable crystal.
Traditional spectroscopic methods such as infrared spectroscopy, optical rotation, and UV/VIS spectroscopy provide relatively nonspecific structural information but remain in use for specific applications. Refractive index and density can also be important for substance identification.
Properties
The physical properties of organic compounds typically of interest include both quantitative and qualitative features. Quantitative information includes a melting point, boiling point, solubility, and index of refraction. Qualitative properties include odor, consistency, and color.
Melting and boiling properties
Organic compounds typically melt and many boil. In contrast, while inorganic materials generally can be melted, many do not boil, and instead tend to degrade. In earlier times, the melting point (m.p.) and boiling point (b.p.) provided crucial information on the purity and identity of organic compounds. The melting and boiling points correlate with the polarity of the molecules and their molecular weight. Some organic compounds, especially symmetrical ones, sublime. A well-known example of a sublimable organic compound is para-dichlorobenzene, the odiferous constituent of modern mothballs. Organic compounds are usually not very stable at temperatures above 300 °C, although some exceptions exist.
Solubility
Neutral organic compounds tend to be hydrophobic; that is, they are less soluble in water than in organic solvents. Exceptions include organic compounds that contain ionizable groups as well as low molecular weight alcohols, amines, and carboxylic acids where hydrogen bonding occurs. Otherwise, organic compounds tend to dissolve in organic solvents. Solubility varies widely with the organic solute and with the organic solvent.
Solid state properties
Various specialized properties of molecular crystals and organic polymers with conjugated systems are of interest depending on applications, e.g. thermo-mechanical and electro-mechanical such as piezoelectricity, electrical conductivity (see conductive polymers and organic semiconductors), and electro-optical (e.g. non-linear optics) properties. For historical reasons, such properties are mainly the subjects of the areas of polymer science and materials science.
Nomenclature
The names of organic compounds are either systematic, following logically from a set of rules, or nonsystematic, following various traditions. Systematic nomenclature is stipulated by specifications from IUPAC (International Union of Pure and Applied Chemistry). Systematic nomenclature starts with the name for a parent structure within the molecule of interest. This parent name is then modified by prefixes, suffixes, and numbers to unambiguously convey the structure. Given that millions of organic compounds are known, rigorous use of systematic names can be cumbersome. Thus, IUPAC recommendations are more closely followed for simple compounds, but not complex molecules. To use the systematic naming, one must know the structures and names of the parent structures. Parent structures include unsubstituted hydrocarbons, heterocycles, and mono functionalized derivatives thereof.
Nonsystematic nomenclature is simpler and unambiguous, at least to organic chemists. Nonsystematic names do not indicate the structure of the compound. They are common for complex molecules, which include most natural products. Thus, the informally named lysergic acid diethylamide is systematically named
(6aR,9R)-N,N-diethyl-7-methyl-4,6,6a,7,8,9-hexahydroindolo[4,3-fg]quinoline-9-carboxamide.
With the increased use of computing, other naming methods have evolved that are intended to be interpreted by machines. Two popular formats are SMILES and InChI.
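As a brief illustration of these machine-readable formats, here is a sketch using the third-party RDKit package (assuming an InChI-enabled build; the molecule chosen, ethanol, is arbitrary):

```python
from rdkit import Chem  # pip install rdkit

mol = Chem.MolFromSmiles("CCO")   # parse ethanol from its SMILES string
print(Chem.MolToSmiles(mol))      # canonical SMILES: CCO
print(Chem.MolToInchi(mol))       # InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3
```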
Structural drawings
Organic molecules are described more commonly by drawings or structural formulas, combinations of drawings and chemical symbols. The line-angle formula is simple and unambiguous. In this system, the endpoints and intersections of each line represent one carbon, and hydrogen atoms can either be notated explicitly or assumed to be present as implied by tetravalent carbon.
History
By 1880 an explosion in the number of chemical compounds being discovered occurred, assisted by new synthetic and analytical techniques. Grignard described the situation as "chaos le plus complet" (complete chaos): owing to the lack of convention, it was possible to have multiple names for the same compound. This led to the creation of the Geneva rules in 1892.
Classification of organic compounds
Functional groups
The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules. Functional groups can have a decisive influence on the chemical and physical properties of organic compounds. Molecules are classified based on their functional groups. Alcohols, for example, all have the subunit C-O-H. All alcohols tend to be somewhat hydrophilic, usually form esters, and usually can be converted to the corresponding halides. Most functional groups feature heteroatoms (atoms other than C and H). Organic compounds are classified according to functional groups: alcohols, carboxylic acids, amines, and so on. Functional groups make the molecule more acidic or basic due to their electronic influence on surrounding parts of the molecule.
As the pKa (that is, the basicity) of the functional group increases, the corresponding dipole, when measured, increases in strength. A dipole directed towards the functional group (higher pKa, and therefore a more basic group) points towards it and decreases in strength with increasing distance. Dipole distance (measured in ångströms) and steric hindrance around the functional group have intermolecular and intramolecular effects on the surrounding environment and pH level.
Different functional groups have different pKa values and bond strengths (single, double, triple), leading to increased electrophilicity with lower pKa and increased nucleophile strength with higher pKa. More basic/nucleophilic functional groups tend to attack an electrophilic functional group with a lower pKa on another molecule (intermolecular) or within the same molecule (intramolecular). Any group with a net acidic pKa that gets within range, such as an acyl or carbonyl group, is fair game. Since the likelihood of being attacked decreases with increasing pKa, acyl chloride components, with the lowest measured pKa values, are most likely to be attacked, followed by carboxylic acids (pKa = 4), thiols (13), malonates (13), alcohols (17), aldehydes (20), nitriles (25), esters (25), and then amines (35). Amines are very basic and are great nucleophiles/attackers.
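The ranking in the preceding paragraph can be made explicit by sorting the quoted approximate pKa values; a minimal sketch (the acyl chloride entry is an illustrative placeholder, since the text gives no number for it):

```python
# Approximate pKa values quoted above; per the text, groups with
# lower pKa are more acidic and more likely to be attacked first.
PKA = {
    "acyl chloride": 0,    # illustrative placeholder: lowest of the list
    "carboxylic acid": 4,
    "thiol": 13,
    "malonate": 13,
    "alcohol": 17,
    "aldehyde": 20,
    "nitrile": 25,
    "ester": 25,
    "amine": 35,
}

for group, value in sorted(PKA.items(), key=lambda kv: kv[1]):
    print(f"{group:16s} pKa ~ {value}")
```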
Aliphatic compounds
The aliphatic hydrocarbons are subdivided into three groups of homologous series according to their state of saturation:
alkanes (paraffins): aliphatic hydrocarbons without any double or triple bonds, i.e. just C-C, C-H single bonds
alkenes (olefins): aliphatic hydrocarbons that contain one or more double bonds; those with two or more double bonds are di-olefins (dienes) or poly-olefins.
alkynes (acetylenes): aliphatic hydrocarbons which have one or more triple bonds.
The rest of the group is classified according to the functional groups present. Such compounds can be "straight-chain", branched-chain or cyclic. The degree of branching affects characteristics, such as the octane number or cetane number in petroleum chemistry.
Both saturated (alicyclic) compounds and unsaturated compounds exist as cyclic derivatives. The most stable rings contain five or six carbon atoms, but large rings (macrocycles) and smaller rings are common. The smallest cycloalkane family is the three-membered cyclopropane ((CH2)3). Saturated cyclic compounds contain single bonds only, whereas aromatic rings have alternating (or conjugated) double bonds. Cycloalkanes do not contain multiple bonds, whereas the cycloalkenes and the cycloalkynes do.
Aromatic compounds
Aromatic hydrocarbons contain conjugated double bonds. This means that every carbon atom in the ring is sp2 hybridized, allowing for added stability. The most important example is benzene, the structure of which was formulated by Kekulé who first proposed the delocalization or resonance principle for explaining its structure. For "conventional" cyclic compounds, aromaticity is conferred by the presence of 4n + 2 delocalized pi electrons, where n is an integer. Particular instability (antiaromaticity) is conferred by the presence of 4n conjugated pi electrons.
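The 4n + 2 count is easy to check mechanically; a sketch, applicable only to planar, fully conjugated rings:

```python
def huckel(pi_electrons):
    """Classify a planar, fully conjugated ring by pi-electron count."""
    if pi_electrons % 4 == 2:   # 4n + 2 electrons
        return "aromatic"
    if pi_electrons % 4 == 0:   # 4n electrons
        return "antiaromatic"
    return "nonaromatic"

print(huckel(6))   # benzene: aromatic (n = 1)
print(huckel(4))   # cyclobutadiene: antiaromatic (n = 1)
```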
Heterocyclic compounds
The characteristics of the cyclic hydrocarbons are again altered if heteroatoms are present, which can exist as either substituents attached externally to the ring (exocyclic) or as a member of the ring itself (endocyclic). In the case of the latter, the ring is termed a heterocycle. Pyridine and furan are examples of aromatic heterocycles while piperidine and tetrahydrofuran are the corresponding alicyclic heterocycles. The heteroatom of heterocyclic molecules is generally oxygen, sulfur, or nitrogen, with the latter being particularly common in biochemical systems.
Heterocycles are commonly found in a wide range of products including aniline dyes and medicines. Additionally, they are prevalent in a wide range of biochemical compounds such as alkaloids, vitamins, steroids, and nucleic acids (e.g. DNA, RNA).
Rings can fuse with other rings on an edge to give polycyclic compounds. The purine nucleoside bases are notable polycyclic aromatic heterocycles. Rings can also fuse on a "corner" such that one atom (almost always carbon) has two bonds going to one ring and two to another. Such compounds are termed spiro and are important in several natural products.
Polymers
One important property of carbon is that it readily forms chains, or networks, that are linked by carbon-carbon (carbon-to-carbon) bonds. The linking process is called polymerization, while the chains, or networks, are called polymers. The source compound is called a monomer.
Two main groups of polymers exist: synthetic polymers and biopolymers. Synthetic polymers are artificially manufactured and are commonly referred to as industrial polymers. Biopolymers occur in the natural environment, without human intervention.
Biomolecules
Biomolecular chemistry is a major category within organic chemistry which is frequently studied by biochemists. Many complex multi-functional group molecules are important in living organisms. Some are long-chain biopolymers, and these include peptides, DNA, RNA and the polysaccharides such as starches in animals and celluloses in plants. The other main classes are amino acids (monomer building blocks of peptides and proteins), carbohydrates (which includes the polysaccharides), the nucleic acids (which include DNA and RNA as polymers), and the lipids. In addition, animal biochemistry contains many small-molecule intermediates which assist in energy production through the Krebs cycle, and produces isoprene, the most common hydrocarbon in animals. Isoprenes in animals form the important steroid structural (cholesterol) and steroid hormone compounds; and in plants form terpenes, terpenoids, some alkaloids, and a class of hydrocarbons called biopolymer polyisoprenoids present in the latex of various species of plants, which is the basis for making rubber. Biologists usually classify the above-mentioned biomolecules into four main groups, i.e., proteins, lipids, carbohydrates, and nucleic acids. Petroleum and its derivatives are considered organic molecules, which is consistent with the fact that this oil comes from the fossilization of living beings, i.e., biomolecules.
See also: peptide synthesis, oligonucleotide synthesis and carbohydrate synthesis.
Small molecules
In pharmacology, an important group of organic compounds is small molecules, also referred to as 'small organic compounds'. In this context, a small molecule is a small organic compound that is biologically active but is not a polymer. In practice, small molecules have a molar mass less than approximately 1000 g/mol.
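A short illustration of the stated cutoff follows. The compound list and molar masses are standard textbook values chosen for the example, not drawn from the article.

```python
# (compound, molar mass in g/mol) - illustrative values only.
compounds = {
    "aspirin": 180.2,
    "ATP": 507.2,
    "insulin": 5808.0,  # a peptide, well above the small-molecule cutoff
}

CUTOFF = 1000.0  # g/mol, the approximate limit stated above

small_molecules = {name: m for name, m in compounds.items() if m < CUTOFF}
print(small_molecules)  # {'aspirin': 180.2, 'ATP': 507.2}
```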
Fullerenes
Fullerenes and carbon nanotubes, carbon compounds with spheroidal and tubular structures, have stimulated much research into the related field of materials science. The first fullerene was discovered in 1985 by Sir Harold W. Kroto of the United Kingdom and by Richard E. Smalley and Robert F. Curl Jr., of the United States. Using a laser to vaporize graphite rods in an atmosphere of helium gas, these chemists and their assistants obtained cagelike molecules composed of 60 carbon atoms (C60) joined by single and double bonds to form a hollow sphere with 12 pentagonal and 20 hexagonal faces—a design that resembles a football, or soccer ball. In 1996 the trio was awarded the Nobel Prize for their pioneering efforts. The C60 molecule was named buckminsterfullerene (or, more simply, the buckyball) after the American architect R. Buckminster Fuller, whose geodesic dome is constructed on the same structural principles.
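The face counts quoted above can be checked against Euler's polyhedron formula; the worked equation below is an added verification using only the figures in the paragraph (each carbon in C60 is three-coordinate, so the edge count is 3V/2).

```latex
V = 60, \qquad F = 12 + 20 = 32, \qquad E = \tfrac{3 \times 60}{2} = 90,
\qquad V - E + F = 60 - 90 + 32 = 2 \quad \checkmark
```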
Others
Organic compounds containing bonds of carbon to nitrogen, oxygen and the halogens are not normally grouped separately. Others are sometimes put into major groups within organic chemistry and discussed under titles such as organosulfur chemistry, organometallic chemistry, organophosphorus chemistry and organosilicon chemistry.
Organic reactions
Organic reactions are chemical reactions involving organic compounds. Many of these reactions are associated with functional groups. The general theory of these reactions involves careful analysis of such properties as the electron affinity of key atoms, bond strengths and steric hindrance. These factors can determine the relative stability of short-lived reactive intermediates, which usually directly determine the path of the reaction.
The basic reaction types are: addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions and redox reactions. An example of a common reaction is a substitution reaction, written as:

Nu− + C−X → C−Nu + X−

where X is some functional group and Nu is a nucleophile.
The number of possible organic reactions is infinite. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens in sequence—although the detailed description of steps is not always clear from a list of reactants alone.
The stepwise course of any given reaction mechanism can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition through intermediates to final products.
Organic synthesis
Synthetic organic chemistry is an applied science as it borders engineering, the "design, analysis, and/or construction of works for practical purposes". Organic synthesis of a novel compound is a problem-solving task, where a synthesis is designed for a target molecule by selecting optimal reactions from optimal starting materials. Complex compounds can have tens of reaction steps that sequentially build the desired molecule. The synthesis proceeds by utilizing the reactivity of the functional groups in the molecule. For example, a carbonyl compound can be used as a nucleophile by converting it into an enolate, or as an electrophile; the combination of the two is called the aldol reaction. Designing practically useful syntheses always requires conducting the actual synthesis in the laboratory. The scientific practice of creating novel synthetic routes for complex molecules is called total synthesis.
Strategies to design a synthesis include retrosynthesis, popularized by E.J. Corey, which starts with the target molecule and splits it into pieces according to known reactions. The pieces, or the proposed precursors, receive the same treatment, until available and ideally inexpensive starting materials are reached. Then, the retrosynthesis is written in the opposite direction to give the synthesis. A "synthetic tree" can be constructed because each compound and also each precursor has multiple syntheses.
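The "synthetic tree" idea can be sketched as a small recursive search. The disconnection table below is entirely hypothetical (no real reactions are implied): each compound maps to alternative sets of precursors, and any compound with no entry is treated as an available starting material.

```python
# Hypothetical disconnection table: target -> alternative precursor sets.
DISCONNECTIONS = {
    "target": [["precursor_A", "precursor_B"], ["precursor_C"]],
    "precursor_A": [["starting_material_1"]],
    "precursor_C": [["starting_material_2", "starting_material_3"]],
}

def retro_tree(compound: str, depth: int = 0) -> None:
    """Print the synthetic tree by splicing each compound into precursors
    until starting materials (no known disconnection) are reached."""
    print("  " * depth + compound)
    for precursors in DISCONNECTIONS.get(compound, []):
        for p in precursors:
            retro_tree(p, depth + 1)

retro_tree("target")
```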
See also
Important publications in organic chemistry
List of organic reactions
Molecular modelling
References
External links
MIT.edu, OpenCourseWare: Organic Chemistry I
HaverFord.edu, Organic Chemistry Lectures, Videos and Text
Organic-Chemistry.org, Organic Chemistry Portal – Recent Abstracts and (Name)Reactions
Orgsyn.org, Organic Chemistry synthesis journal
Pearson Channels, Organic Chemistry Video Lectures and Practice Problems
Khanacademy.org, Khan Academy - Organic Chemistry
Chemistry
Chemistry | Organic chemistry | Chemistry | 5,170 |
75,144,170 | https://en.wikipedia.org/wiki/Graphitization | Graphitization is a process of transforming a carbonaceous material, such as coal or the carbon in certain forms of iron alloys, into graphite.
Process
The graphitization process involves a restructuring of the molecular structure of the carbon material. In the initial state, these materials can have an amorphous structure or a crystalline structure different from graphite. Graphitization generally occurs at high temperatures, and can be accelerated by catalysts such as iron or nickel.
When carbonaceous material is exposed to high temperatures for an extended period of time, the carbon atoms begin to rearrange and form layered crystal planes. In the structure of graphite, carbon atoms are arranged in flat hexagonal sheets that are stacked on top of each other. These crystal planes give graphite its characteristic flake structure, giving it specific properties such as good electrical and thermal conductivity, low friction and excellent lubrication.
Interest
Graphitization can be observed in various contexts. For example, it occurs naturally during the formation of certain types of coal or graphite in the Earth's crust. It can also be artificially induced during the manufacture of specific carbon materials, such as graphite electrodes used in fuel cells, nuclear reactors or metallurgical applications.
Graphitization is of particular interest in the field of metallurgy. Some iron alloys, such as cast iron, can undergo graphitization heat treatment to improve their mechanical properties and machinability. During this process, the carbon dissolved in the iron alloy matrix separates and restructures as graphite, which gives the cast iron its specific characteristics, such as improved ductility and wear resistance.
Notes and references
Molecular physics
Metallurgy
Materials science | Graphitization | Physics,Chemistry,Materials_science,Engineering | 349 |
71,192,202 | https://en.wikipedia.org/wiki/Zytek%20ZA1348 | The Zytek ZA1348, also known as the Zytek ZA348, is a 3.4-liter, naturally-aspirated, V8 racing engine, designed, developed and produced by British manufacturer Zytek, between 2003 and 2004. It was specifically constructed and built as the spec-engine for the new A1GP open-wheel formula racing series, and debuted in 2005. It powered the Lola B05/52 A1 Grand Prix car. It produced between , and around of torque. A slightly detuned version of the engine, producing around , but a similar torque figure of , was used in the Ginetta G50Z sports racing car. The engine itself is very light, weighing only , constructed out of cast aluminum alloy.
Applications
Lola B05/52
Ginetta G50Z
References
Engines by model
Gasoline engines by model
Zytek engines
V8 engines | Zytek ZA1348 | Technology | 183 |
529,093 | https://en.wikipedia.org/wiki/Act%20of%20God | In legal usage in the English-speaking world, an act of God, act of nature, or damnum fatale ("loss arising from inevitable accident") is an event caused by no direct human action (e.g. severe or extreme weather and other natural disasters) for which individual persons are not responsible and cannot be held legally liable for loss of life, injury, or property damage. An act of God may amount to an exception to liability in contracts (as under the Hague–Visby Rules), or it may be an "insured peril" in an insurance policy. In Scots law, the equivalent term is damnum fatale, while most Common law proper legal systems use the term act of God.
It is legally distinct from—though often related to—a common clause found in contract law known as force majeure. In light of the scientific consensus on climate change, its modern applicability has been questioned by legal scholars.
Contract law
In the law of contracts, an act of God may be interpreted as an implied defense under the rule of impossibility or impracticability. If so, the promise is discharged because of unforeseen occurrences, which were unavoidable and would result in insurmountable delay, expense, or other material breach.
Under the English common law, contractual obligations were deemed sacrosanct, so failure to honor a contract could lead to an order for specific performance or internment in a debtor's prison. In 1863, this harsh rule was softened by the case of Taylor v Caldwell which introduced the doctrine of frustration of contract, which provided that "where a contract becomes impossible to perform and neither party is at fault, both parties may be excused their obligations". In this case, a music hall was burned down by act of God before a contract of hire could be fulfilled, and the court deemed the contract frustrated.
In other contracts, such as indemnification, an act of God may be no excuse, and in fact may be the central risk assumed by the promisor—e.g., flood insurance or crop insurance—the only variables being the timing and extent of the damage. In many cases, failure by way of ignoring obvious risks due to "natural phenomena" will not be sufficient to excuse performance of the obligation, even if the events are relatively rare: e.g., the year 2000 problem in computers. Under the Uniform Commercial Code, 2-615, failure to deliver goods sold may be excused by an "act of God" if the absence of such act was a "basic assumption" of the contract, and the act has made the delivery "commercially impracticable".
Recently, human activities have been claimed to be the root causes of some events previously considered natural disasters. In particular:
Geothermal injections of water provoking earthquakes (Basel, Switzerland, 2003)
Drilling provoking mud volcano (Java, 2008)
As a general principle, an epidemic can be classified as an act of God if it was unforeseeable; it renders the promise discharged if the promisor cannot avoid the effect of the epidemic by the exercise of reasonable prudence, diligence and care, or by the use of those means which the situation renders reasonable to employ.
Tort law and delict law
UK – England and Wales
An act of God is an unforeseeable natural phenomenon, explained by Lord Hobhouse in Transco plc v Stockport Metropolitan Borough Council as describing an event:
UK – Scotland
In Tennant v Earl of Glasgow (1864 2 M (HL) 22) Lord Chancellor Westbury described a case as: "what is denominated in the law of Scotland damnum fatale — occurrences and circumstances which no human foresight can provide against, and of which human prudence is not bound to recognize the possibility; and which, when they do occur, therefore, are calamities that do not involve the obligation of paying for the consequences that may result from them."
United States
In the law of torts, an act of God may be asserted as a type of intervening cause, the lack of which would have avoided the cause or diminished the result of liability (e.g., but for the earthquake, the old, poorly constructed building would be standing). However, foreseeable results of unforeseeable causes may still raise liability. For example, a bolt of lightning strikes a ship carrying volatile compressed gas, resulting in the expected explosion. Liability may be found if the carrier did not use reasonable care to protect against sparks—regardless of their origins. Similarly, strict liability could defeat a defense for an act of God where the defendant has created the conditions under which any accident would result in harm. For example, a long-haul truck driver takes a shortcut on a back road and the load is lost when the road is destroyed in an unforeseen flood. Other cases find that a common carrier is not liable for the unforeseeable forces of nature. See Memphis & Charlestown RR Co. v. Reeves, 77 U.S. 176 (1870).
One example is that of "rainmaker" Charles Hatfield, who was hired in 1915 by the city of San Diego to fill the Morena reservoir to capacity with rainwater for $10,000. The region was soon flooded by heavy rains, nearly bursting the reservoir's dam, killing nearly 20 people, destroying 110 bridges (leaving 2), knocking out telephone and telegraph lines, and causing an estimated $3.5 million in damage in total. When the city refused to pay him (he had forgotten to sign the contract), he sued the city. The floods were ruled an act of God, excluding him from liability but also from payment.
See also
Force majeure
Vis major
Lawsuits against God
Extreme event attribution
References
Emergency management
Tort law legal terminology
Natural disasters
Insurance law legal terminology
Contract law legal terminology
Common law legal terminology | Act of God | Physics | 1,220 |
40,496,327 | https://en.wikipedia.org/wiki/Extended%20theories%20of%20gravity | Extended theories of gravity are alternative theories of gravity developed from the exact starting points investigated first by Albert Einstein and Hilbert. These are theories describing gravity, which are metric theory, "a linear connection" or related affine theories, or metric-affine gravitation theory. Rather than trying to discover correct calculations for the matter side of the Einstein field equations (which include inflation, dark energy, dark matter, large-scale structure, and possibly quantum gravity), it is instead proposed to change the gravitational side of the equation.
Proposed theories
Hernández et al.
One such theory is also an extension to general relativity and Newton's law of universal gravitation (F = GMm/r²), first proposed in 2010 by the Mexican astronomers Xavier Hernández Doring, Sergio Mendoza Ramos et al., researchers at the Astronomy Institute at the National Autonomous University of Mexico. This theory is in accordance with observations of the kinematics of the Solar System, extended binary stars, and all types of galaxies and galactic groups and clouds. It also reproduces the gravitational lensing effect without the need of postulating dark matter.
There is some evidence that it could also explain the dark energy phenomena and give a solution to the initial conditions problem.
These results can be classified as a metric f(R) gravity theory, more properly an f(R,T) theory, derived from an action principle. This approach to solving the dark matter problem takes into account the Tully–Fisher relation as an empirical law that always applies at scales larger than the Milgrom radius.
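For orientation, the generic action of a metric f(R) theory is shown below. This is the standard textbook form, not a formula taken from the Hernández et al. papers; general relativity is recovered for f(R) = R, and an f(R,T) theory additionally lets f depend on the trace T of the stress-energy tensor.

```latex
S \;=\; \frac{1}{2\kappa} \int \mathrm{d}^4x \,\sqrt{-g}\; f(R) \;+\; S_m,
\qquad \kappa = \frac{8\pi G}{c^4}
```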
See also
Modified Newtonian Dynamics
Alternatives to general relativity
References
Further reading
External links
Sergio Mendoza's web page
News
El universal.com
La jornada.mx
La crónica.com
Theories of gravity
General relativity | Extended theories of gravity | Physics | 351 |
73,155,123 | https://en.wikipedia.org/wiki/Leon%20Lucy | Leon B. Lucy (1938–2018) was a British-American astrophysicist, best known for his contribution to the Richardson-Lucy deconvolution algorithm and spearheading the development of smoothed-particle hydrodynamics methods. He won the Gold Medal of the Royal Astronomical Society in 2000.
References
External links
Columbia University faculty
1938 births
2018 deaths | Leon Lucy | Physics,Astronomy | 76 |
4,613,989 | https://en.wikipedia.org/wiki/Genevestigator | Genevestigator is an application consisting of a gene expression database and tools to analyse the data. It exists in two versions, biomedical and plant, depending on the species of the underlying microarray and RNAseq as well as single-cell RNA-sequencing data. It was started in January 2004 by scientists from ETH Zurich and is currently developed and commercialized by Nebion AG.
Researchers and scientists from academia and industry use it to identify, characterize and validate novel drug targets and biomarkers, identify appropriate research models and in general to understand how gene expression changes with different treatments.
Gene expression database
The Genevestigator database comprises transcriptomic data from numerous public repositories including GEO, ArrayExpress and renowned cancer research projects such as TCGA. Depending on the license agreement, it may also contain data from private gene expression studies. All data are manually curated, quality-controlled and enriched for sample and experiment descriptions derived from corresponding scientific publications.
The number of species from where the samples are derived is constantly increasing. Currently, the biomedical version contains data from human, mouse, and rat used in biomedical research. Gene expression studies are from various research areas including oncology, immunology, neurology, dermatology and cardiovascular diseases. Samples comprise tissue biopsies and cell lines.
The plant version (no longer available) contained both widely used model species, such as arabidopsis and medicago, and major crop species such as maize, rice, wheat and soybean. After the acquisition of Nebion AG by Immunai Inc. in July 2021, plant data began to be phased out as the biotech company prioritized its focus on biopharma data. As of 2023, the plant data is being maintained on a separate server for remaining users with a license to the plant version of Genevestigator.
Gene expression tools
More than 60,000 scientists from academia and industry use Genevestigator for their work in molecular biology, toxicogenomics, biomarker discovery and target validation. The original scientific publication has been cited over 3,500 times.
The analysis tools are divided into three major sets:
CONDITION SEARCH tools: find conditions such as a tissue, disease, treatment or genetic background that regulate the gene(s) of interest
GENE SEARCH tools: find genes that are specifically expressed in the condition(s) of interest
SIMILARITY SEARCH tools: find genes or conditions that show a similar gene expression pattern
See also
Spatiotemporal gene expression
References
Prasad A, Suresh Kumar S, Dessimoz C, Bleuler S, Laule O, Hruz T, Gruissem W, and P Zimmermann (2013) Global regulatory architecture of human, mouse and rat tissue transcriptomes. BMC Genomics 2013, 14:716.
Hruz T, Wyss M, Lucas C, Laule O, von Rohr P, Zimmermann P, and S Bleuler (2013) A Multilevel Gamma-Clustering Layout Algorithm for Visualization of Biological Networks. Advances in Bioinformatics, vol. 2013, Article ID 920325, 10 pages, 2013. doi:10.1155/2013/920325.
Hruz T, Wyss M, Docquier M, Pfaffl MW, Masanetz S, Borghi L, Verbrugge P, Kalaydjieva L, Bleuler S, Laule O, Descombes P, Gruissem W and P Zimmermann (2011) RefGenes: identification of reliable and condition specific reference genes for RT-qPCR data normalization. BMC Genomics 2011, 12:156.
Hruz T, Laule O, Szabo G, Wessendorp F, Bleuler S, Oertle L, Widmayer P, Gruissem W and P Zimmermann (2008) Genevestigator V3: a reference expression database for the meta-analysis of transcriptomes. Advances in Bioinformatics 2008, 420747
Grennan AK (2006) Genevestigator. Facilitating web-based gene-expression analysis. Plant Physiology 141(4):1164-6
Laule O, Hirsch-Hoffmann M, Hruz T, Gruissem W, and P Zimmermann (2006) Web-based analysis of the mouse transcriptome using Genevestigator. BMC Bioinformatics 7:311
Zimmermann P, Hennig L and W Gruissem (2005) Gene expression analysis and network discovery using Genevestigator. Trends in Plant Science 9 10, 407-409
Zimmermann P, Hirsch-Hoffmann M, Hennig L and W Gruissem (2004) GENEVESTIGATOR: Arabidopsis Microarray Database and Analysis Toolbox. Plant Physiology 136 1, 2621-2632
Zimmermann P, Schildknecht B, Garcia-Hernandez M, Gruissem W, Craigon D, Mukherjee G, May S, Parkinson H, Rhee S, Wagner U and L Hennig (2006) MIAME/Plant - adding value to plant microarray experiments. Plant Methods 2, 1
Genetics
Bioinformatics software | Genevestigator | Biology | 1,098 |
2,487 | https://en.wikipedia.org/wiki/Amazonite | Amazonite, also known as amazonstone, is a green tectosilicate mineral, a variety of the potassium feldspar called microcline. Its chemical formula is KAlSi3O8, which is polymorphic to orthoclase.
Its name is taken from that of the Amazon River, from which green stones were formerly obtained, though it is unknown whether those stones were amazonite. Although it has been used for jewellery for well over three thousand years, as attested by archaeological finds in Middle and New Kingdom Egypt and Mesopotamia, no ancient or medieval authority mentions it. It was first described as a distinct mineral only in the 18th century.
Green and greenish-blue varieties of potassium feldspars that are predominantly triclinic are designated as amazonite. It has been described as a "beautiful crystallized variety of a bright verdigris-green" and as possessing a "lively green colour". It is occasionally cut and used as a gemstone.
Occurrence
Amazonite is a mineral of limited occurrence. In Bronze Age Egypt, it was mined in the southern Eastern Desert at Gebel Migif. In early modern times, it was obtained almost exclusively from the area of Miass in the Ilmensky Mountains, southwest of Chelyabinsk, Russia, where it occurs in granitic rocks.
Amazonite is now known to occur in various places around the world. Those places are, among others, as follows:
Australia:
Eyre Peninsula, Koppio, Baila Hill Mine (Koppio Amazonite Mine)
China:
Baishitouquan granite intrusion, Hami Prefecture, Xinjiang: found in granite
Libya:
Jabal Eghei, Tibesti Mountains: found in granitic rocks
Mongolia:
Avdar Massif, Töv Province: found in alkali granite
Ethiopia:
Konso Zone
South Africa:
Mogalakwena, Limpopo Province
Khâi-Ma, Northern Cape
Kakamas, Northern Cape
Ceres Valley, Western Cape
Sweden:
Skuleboda mine, Västra Götaland County: found in pegmatite
United States:
Colorado:
Deer Trail, Arapahoe County
Custer County
Devils Head, Douglas County
Pine Creek, Douglas County
Crystal Park, El Paso County
Pikes Peak, El Paso County: found in coarse granites or pegmatite
St. Peter's Dome, El Paso County
Tarryall Mountains, Park County
Crystal Peak, Teller County
Wyoming
Virginia:
Morefield Mine, Amelia County: found in pegmatite
Rutherford Mine, Amelia County
Pennsylvania:
Media, Delaware County
Middletown, Delaware County
Color
For many years, the source of amazonite's color was a mystery. Some people assumed the color was due to copper because copper compounds often have blue and green colors. A 1985 study suggests that the blue-green color results from quantities of lead and water in the feldspar. Subsequent 1998 theoretical studies by A. Julg expand on the potential role of aliovalent lead in the color of microcline.
Other studies suggest the colors are associated with the increasing content of lead, rubidium, and thallium ranging in amounts between 0.00X and 0.0X in the feldspars, with even extremely high contents of PbO, lead monoxide, (1% or more) known from the literature. A 2010 study also implicated the role of divalent iron in the green coloration. These studies and associated hypotheses indicate the complex nature of the color in amazonite; in other words, the color may be the aggregate effect of several mutually inclusive and necessary factors.
Health
A 2021 study by the German Institut für Edelsteinprüfung (EPI) found that the amount of lead that leaked from a sample of amazonite into an acidic solution simulating saliva exceeded the amount recommended by European Union standard DIN EN 71-3:2013 by five times. The experiment simulated a child swallowing amazonite, and the finding could also apply to newer alternative medicine practices such as steeping the mineral in oils or drinking water for days.
Gallery
References
Further reading
External links
Feldspar
Gemstones | Amazonite | Physics | 853 |
6,577,312 | https://en.wikipedia.org/wiki/Anita%20Dolly%20Panek | Anita Dolly Haubenstock Panek is a Brazilian biochemist. She emigrated from Poland to Brazil because of World War II. She received a B.Sc. in Chemistry, 1954 and a Ph.D. in 1962. She became a professor at the Universidade Federal do Rio de Janeiro.
In 1988 she showed that endogenous trehalose protects cells against the damage caused by freezing.
Memberships
Brazilian Academy of Sciences, Rio de Janeiro, Brazil, 1986.
Latin American Academy of Sciences, Caracas, Venezuela, 1989.
Third World Academy of Sciences, 1989.
Awards
Commander of the National Order of Scientific Merit, Brazil, 1996.
References
External links
Anita Dolly Panek
Brazilian scientists
Living people
Polish biochemists
Polish women chemists
Commanders of the National Order of Scientific Merit (Brazil)
Brazilian people of Polish-Jewish descent
Brazilian women chemists
20th-century Polish women scientists
Women biochemists
20th-century Brazilian women scientists
Polish women academics
20th-century Polish scientists
Year of birth missing (living people)
20th-century Polish women
Polish emigrants to Brazil
21st-century Brazilian women scientists | Anita Dolly Panek | Chemistry | 220 |
16,640,926 | https://en.wikipedia.org/wiki/Selenium%20hexafluoride | Selenium hexafluoride is the inorganic compound with the formula SeF6. It is a very toxic colourless gas described as having a "repulsive" odor. It is not widely encountered and has no commercial applications.
Structure, preparation, and reactions
SeF6 has octahedral molecular geometry with an Se−F bond length of 168.8 pm. In terms of bonding, it is hypervalent.
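The octahedral geometry fixes the fluorine–fluorine separations once the bond length is known. The worked values below are derived from the 168.8 pm figure above under the assumption of an ideal octahedron.

```latex
d_{\mathrm{F\cdots F}}^{\,cis} = \sqrt{2} \times 168.8~\mathrm{pm} \approx 238.7~\mathrm{pm},
\qquad
d_{\mathrm{F\cdots F}}^{\,trans} = 2 \times 168.8~\mathrm{pm} = 337.6~\mathrm{pm}
```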
SeF6 can be prepared from the elements. It also forms by the reaction of bromine trifluoride (BrF3) with selenium dioxide. The crude product can be purified by sublimation.
The relative reactivity of the hexafluorides of S, Se, and Te follows the order TeF6 > SeF6 > SF6, the latter being completely inert toward hydrolysis except at high temperatures. SeF6 also resists hydrolysis: the gas can be passed through 10% NaOH or KOH without change, but reacts with gaseous ammonia at 200 °C.
Safety
Although selenium hexafluoride is quite inert and slow to hydrolyze, it is toxic even at low concentrations, especially on longer exposure. In the U.S., the OSHA and ACGIH standards for selenium hexafluoride exposure set an upper limit of 0.05 ppm in air averaged over an eight-hour work shift. Additionally, selenium hexafluoride is designated as an IDLH chemical with a maximum allowed exposure limit of 2 ppm.
References
External links
ATSDR ToxFAQs - Selenium Hexafluoride U.S. Department of Health and Human Services
CDC - NIOSH Pocket Guide to Chemical Hazards U.S. Department of Health and Human Services
WebBook page for SeF6
Selenium(VI) compounds
Hexafluorides
Octahedral compounds
Chalcohalides
Selenium halides
Foul-smelling chemicals | Selenium hexafluoride | Chemistry | 410 |
52,832,272 | https://en.wikipedia.org/wiki/Mitochondria%20associated%20membranes | Mitochondria-associated membranes (MAMs) represent regions of the endoplasmic reticulum (ER) which are reversibly tethered to mitochondria. These membranes are involved in import of certain lipids from the ER to mitochondria and in regulation of calcium homeostasis, mitochondrial function, autophagy and apoptosis. They also play a role in development of neurodegenerative diseases and glucose homeostasis.
Role
In mammalian cells, formation of these linkage sites are important for some cellular events including:
Calcium homeostasis
Mitochondria-associated membranes are involved in the transport of calcium from the ER to mitochondria. This interaction is important for the rapid uptake of calcium by mitochondria through voltage-dependent anion channels (VDACs), which are located at the outer mitochondrial membrane (OMM). This transport is regulated by chaperones and regulatory proteins which control the formation of the ER–mitochondria junction. Transfer of calcium from the ER to mitochondria depends on a high concentration of calcium in the intermembrane space, and the mitochondrial calcium uniporter (MCU) accumulates calcium into the mitochondrial matrix, driven by the electrochemical gradient.
Regulation of lipid metabolism
Mitochondria-associated membranes mediate the transport of phosphatidylserine (PS) from the ER into mitochondria for decarboxylation to phosphatidylethanolamine (PE). In the ER, phosphatidylserine synthases 1 and 2 (PSS1, PSS2) transform phosphatidic acid (PA) into PS, which is then transferred to mitochondria, where phosphatidylserine decarboxylase (PSD) transforms it into PE. PE synthesized in mitochondria goes back to the ER, where phosphatidylethanolamine methyltransferase 2 (PEMT2) synthesizes phosphatidylcholine (PC).
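The shuttle just described can be summarized step by step. The sketch below simply restates, in code form, the enzymes and compartments named in the paragraph above.

```python
# (substrate, product, enzyme, compartment) for the ER-mitochondria
# phospholipid shuttle described above.
SHUTTLE = [
    ("phosphatidic acid (PA)", "phosphatidylserine (PS)", "PSS1/PSS2", "ER"),
    ("phosphatidylserine (PS)", "phosphatidylethanolamine (PE)", "PSD", "mitochondria"),
    ("phosphatidylethanolamine (PE)", "phosphatidylcholine (PC)", "PEMT2", "ER"),
]

for substrate, product, enzyme, compartment in SHUTTLE:
    print(f"{substrate} -> {product} via {enzyme} [{compartment}]")
```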
Regulation of autophagy and mitophagy
The formation of autophagosomes through the coordination of ATG (autophagy-related) proteins and the vesicular trafficking by MAM.
Regulation of the morphology: Dynamics and functions of mitochondria, and cell survival
These membrane contact sites have been associated with the delicate balance between life and death of the cell.
Isolation membranes are the initial step in the formation of auto-phagosomes. Auto-phagosomes are closed, double-membrane-bound structures whose contents are degraded after fusion with lysosomes; their main function is degradation, a role in cellular homeostasis. However, the origin of these membranes has remained unclear.
Candidate sources include the plasma membrane, the endoplasmic reticulum (ER) and the mitochondria. The ER–mitochondria contact site carries the auto-phagosome marker ATG14 and the auto-phagosome-formation marker ATG5 until the formation of the auto-phagosome is complete, whereas the absence of ATG14 puncta is caused by the breakdown of the ER–mitochondria contact site.
Oxidative stress and the onset of endoplasmic reticulum (ER) stress occur together; ER stress has a key sensor enriched at the mitochondria-associated ER membranes (MAMs). This key sensor is PERK (RNA-dependent protein kinase (PKR)-like ER kinase); PERK contributes to apoptosis twofold by sustaining the levels of the pro-apoptotic C/EBP homologous protein (CHOP).
A tight ER–mitochondria contact site is integral to the mechanisms controlling cellular apoptosis and to inter-organelle Ca signals, and the mitochondria-associated ER membranes (MAMs) play a role in cell death modulation. Mitochondrial outer membrane permeabilization (MOMP) results from elevated matrix Ca levels, which act as a trigger for apoptosis. MOMP is the process preceding apoptosis and is accompanied by permeabilization of the inner mitochondrial membrane (IMM).
Permeability transition pore (PTP) opening induces mitochondrial swelling and rupture of the outer mitochondrial membrane (OMM). Moreover, PTP opening induces the release of caspase-activating factors and apoptosis. Among these, released cytochrome C binds to the IP3R, which results in higher Ca transfer from the ER to the mitochondria, amplifying the apoptotic signal.
Alzheimer’s disease (AD)
MAMs play an important role in Ca homeostasis and in phospholipid and cholesterol metabolism. Research has associated the alteration of these MAM functions with Alzheimer's disease. Mitochondria-associated membranes in Alzheimer's disease have been reported to show an up-regulation of lipids synthesized at the MAM juxtaposition and an up-regulation of protein complexes present in the contact region between the ER and mitochondria. Research has suggested that the MAM sites are the primary sites of γ-secretase activity and amyloid precursor protein (APP) localization, along with the presenilin 1 (PS1) and presenilin 2 (PS2) proteins. γ-secretase functions in the cleavage of the β-amyloid precursor protein (APP). Patients diagnosed with Alzheimer's disease have presented results indicating the accumulation of amyloid beta peptide in the brain, which in turn supports the amyloid cascade hypothesis. Increased connectivity between the ER and the mitochondria at MAM sites, through an increase in the number of contact sites, has also been observed in human patients diagnosed with familial AD (FAD). These individuals showed mutations in the PS1, PS2 and APP proteins at the MAM sites. This increased connectivity also caused an abnormality in Ca signaling between neurons. With regard to the role of MAMs in phospholipid metabolism, patients diagnosed with AD have been reported to show altered levels of phosphatidylserine and phosphatidylethanolamine in the ER and mitochondria, respectively; this leads to the intracellular tangles containing hyperphosphorylated forms of the microtubule-associated protein tau within tissues.
Parkinson's disease (PD)
One of the causes of Parkinson's disease is mutations in genes encoding different proteins that are localized at the MAM sites. Mutations in the genes that encode the proteins Parkin, PINK1, alpha-synuclein (α-Syn) or the protein deglycase DJ-1 have been linked to this disease, although further research is needed to determine their direct correlation to Parkinson's disease. In normal conditions, these genes are believed to be responsible for the cell's ability to degrade mitochondria that have been rendered nonfunctional, in a process known as mitophagy. However, mutations in the Parkin and PINK1 genes have been associated with the cell becoming incapable of degrading faulty mitochondria. The proteins alpha-synuclein (α-Syn) and DJ-1 have been shown to promote MAM function and the interaction between the ER and the mitochondria. The wild-type gene that codes for α-Syn promotes the physical junction between ER and mitochondria by binding to the lipid raft regions of the MAM. However, the mutant form of this gene has a low affinity for the lipid raft regions, thereby diminishing the contact between the ER and mitochondria and causing accumulation of α-Syn in Lewy bodies, a major characteristic of PD. Further research on the association of PD with alterations in MAMs is still being developed.
References
Area-Gomez E, de Groof AJ, Boldogh I, Bird TD, Gibson GE, Koehler CM, Yu WH, Duff KE, Yaffe MP, Pon LA, Schon EA (2009). Presenilins are enriched in endoplasmic reticulum membranes associated with mitochondria. Am J Pathol 175(5):1810-6. doi:10.2353/ajpath.2009.090219. PMID 19834068.
Neurodegenerative disorders
Cell biology
Mitochondria
Membrane biology | Mitochondria associated membranes | Chemistry,Biology | 1,785 |
46,332,876 | https://en.wikipedia.org/wiki/Interleukin-1%20receptor%20associated%20kinase | The interleukin-1 receptor (IL-1R) associated kinase (IRAK) family plays a crucial role in the protective response to pathogens introduced into the human body by inducing acute inflammation followed by additional adaptive immune responses. IRAKs are essential components of the Interleukin-1 receptor signaling pathway and some Toll-like receptor signaling pathways. Toll-like receptors (TLRs) detect microorganisms by recognizing specific pathogen-associated molecular patterns (PAMPs) and IL-1R family members respond the interleukin-1 (IL-1) family cytokines. These receptors initiate an intracellular signaling cascade through adaptor proteins, primarily, MyD88. This is followed by the activation of IRAKs. TLRs and IL-1R members have a highly conserved amino acid sequence in their cytoplasmic domain called the Toll/Interleukin-1 (TIR) domain. The elicitation of different TLRs/IL-1Rs results in similar signaling cascades due to their homologous TIR motif leading to the activation of mitogen-activated protein kinases (MAPKs) and the IκB kinase (IKK) complex, which initiates a nuclear factor-κB (NF-κB) and AP-1-dependent transcriptional response of pro-inflammatory genes. Understanding the key players and their roles in the TLR/IL-1R pathway is important because the presence of mutations causing the abnormal regulation of Toll/IL-1R signaling leading to a variety of acute inflammatory and autoimmune diseases.
IRAKs are membrane proximal putative serine-threonine kinases. Four IRAK family members have been described in humans: IRAK1, IRAK2, IRAKM, and IRAK4. Two are active kinases, IRAK-1 and IRAK-4, and two are inactive, IRAK-2 and IRAK-M, but all regulate the nuclear factor-κB (NF-κB) and mitogen-activated protein kinase (MAPK) pathways.
Some special/significant features of each IRAK family member:
There is some evidence that IRAK-1 functions in regulating other signaling cascades leading to NF-κB activation. One signaling pathway in particular nerve growth factor (NGF) may be dependent on the function of IRAK-1 in its signaling pathway for its activation and cell survival.
IRAK-2 has 4 isoforms IRAK-2a, IRAK-2b, IRAK-2c, and IRAK-2d. The latter two have negative feedback in the TLR signaling pathways. IRAK-2a and IRAK-2b positively activate NF-κB/TLR pathway by stimulating LPS.
IRAK-M is specific to monomyeloic cells (monocytes and macrophages), while the other IRAKs are ubiquitously expressed. IRAK-M negatively regulates TLR signaling by inhibiting the IRAK-4/IRAK-1 complex.
The newest described IRAK family member, IRAK-4, has been found to be critical for the recruitment of IRAK-1 and for its activation/degradation. IL-1 stimulation recruits IRAK-4 to the IL-1R complex, initiating the Toll/IL-1 receptor signaling cascade upstream of the other IRAKs, so the deletion of IRAK-1 does not abolish the activation of NF-κB and mitogen-activated protein kinase pathways.
Discovery
IRAKs were first identified in 1994 by Michael Martin and colleagues when they successfully co-precipitated a protein kinase with type I interleukin-1 receptors (IL-1RI) from human T cells. They speculated that this kinase was the link between the T cell's transmembrane IL-1 receptor and the cytosolic signalling pathway's downstream components.
The name “IRAK” came from Zhaodan Cao and colleagues in 1995. The DNA sequence analysis of IRAK's domains revealed many conserved amino acids with the serine/threonine specific protein kinase Pelle in Drosophila, that functions downstream of a Toll receptor. Cao's lab confirmed the kinase's activity as necessarily associated with the IL-1 receptor by immunoprecipitating the IL-1 receptors from different cell types treated with IL-1 and without IL-1. Even cells without over-expressed IL-1 receptors showed kinase activity when exposed to IL-1, and were able to co-precipitate a protein kinase with endogenous IL-1 receptors. Thus the human IL-1 receptor's accessory protein was named Interleukin-1 Receptor-Associated Kinase.
In 1997, MyD88 was identified as the cytosolic protein that recruits IRAKs to the cytosolic domains of IL-1 receptors, mediating IL-1's signal transduction to the cytosolic signal cascade. Subsequent studies associated IRAKs with multiple signalling pathways triggered by interleukin, and specified multiple IRAK types.
Structure
Functional domains
All IRAK family members are multidomain proteins consisting of a conserved N-terminal Death Domain (DD) and a central kinase domain (KD). The DD is a protein interaction motif that is important for interacting with other signaling molecules such as the adaptor protein MyD88 and other IRAK members. The KD is responsible for the kinase activity of IRAK proteins and consists of 12 subdomains. All IRAK KDs have an ATP binding pocket with an invariable lysine residue in subdomain II; however, only IRAK-1 and IRAK-4 have an aspartate residue in the catalytic site of subdomain VI, which is thought to be critical for kinase activity. It is thought that IRAK-2 and IRAK-M are catalytically inactive because they lack this aspartate residue in the KD.
The C-terminal domain does not seem to show much similarity between IRAK family members. The C-terminal domain is important for the interaction with the signaling molecule TRAF6. IRAK-1 contains three TRAF6 interaction motifs, IRAK-2 contains two and IRAK-M contains one.
IRAK-1 contains a region that is rich in serine, proline, and threonine (proST). It is thought that IRAK-1 undergoes hyperphosphorylation in this region. The proST region also contains two proline (P), glutamic acid (E), serine (S) and threonine (T)-rich (PEST) sequences that are thought to promote the degradation of IRAK-1.
Role in immune signaling
Interleukin-1 receptor signaling
Interleukin-1 receptors (IL-1Rs) are cytokine receptors that transduce an intracellular signaling cascade in response to the binding of the inflammatory cytokine interleukin-1 (IL-1). This signaling cascade results in the initiation of transcription of certain genes involved in inflammation. Because IL-1Rs do not possess intrinsic kinase activity, they rely on the recruitment of adaptor molecules, such as IRAKs, to transduce their signals.
IL-1 binding to the IL-1R complex triggers the recruitment of the adaptor molecule MyD88 through interactions with the TIR domain. MyD88 brings IRAK-4 to the receptor complex. Preformed complexes of the adaptor molecule Tollip and IRAK-1 are also recruited to the receptor complex, allowing IRAK-1 to bind MyD88. IRAK-1 binding to MyD88 brings it into close proximity with IRAK-4 so that IRAK-4 can phosphorylate and activate IRAK-1. Once phosphorylated, IRAK-1 recruits the adaptor protein TNF receptor associated factor 6 (TRAF6) and the IRAK-1-TRAF6 complex dissociates from the IL-1R complex. The IRAK-1-TRAF6 complex interacts with a pre-existing complex at the plasma membrane consisting of TGF-β activated kinase 1 (TAK1), and two TAK binding proteins, TAB1 and TAB2. TAK1 is a mitogen-activated protein kinase kinase kinase (MAPKKK). This interaction leads to the phosphorylation of TAB2 and TAK1, which then translocate to the cytosol with TRAF6 and TAB1. IRAK-1 remains at the membrane and is targeted for degradation by ubiquitination. Once the TAK1-TRAF6-TAB1-TAB2 complex is in the cytosol, ubiquitination of TRAF6 triggers the activation of TAK1 kinase activity. TAK1 can then activate two transcription pathways, the nuclear factor-κB (NF-κB) pathway and the mitogen-activated protein kinase (MAPK) pathway. To activate the NF-κB pathway, TAK1 phosphorylates the IκB kinase (IKK) complex, which subsequently phosphorylates the NF-κB inhibitor, IκB, targeting it for degradation by the proteasome. Once IκB is removed, the NF-κB proteins p65 and p50 are free to translocate into the nucleus and activate transcription of proinflammatory genes. To activate the MAPK pathway, TAK1 phosphorylates MAPK kinase (MKK) 3/4/6, which then phosphorylate members of the MAPK family, c-Jun N-terminal kinase (JNK) and p38. Phosphorylated JNK/p38 can then translocate into the nucleus and phosphorylate and activate transcription factors such as c-Fos and c-Jun.
Toll-like receptor signaling
Toll-like receptors (TLRs) are important innate immune receptors that recognize pathogen associated molecular patterns (PAMPs) and initiate the appropriate immune response to eliminate a particular pathogen. PAMPs are conserved motifs associated with microorganisms that are not found in host cells, such as, bacterial lipopolysaccharide (LPS), viral double-stranded RNA, etc. TLRs are similar to IL-1Rs in that they do not possess intrinsic kinase activity and require adaptor molecules to relay their signals. Stimulation of TLRs can also result in NF-κB and MAPK mediated transcription, similar to the IL-1R signaling pathway.
It has been shown that IRAK-1 is essential for TLR7 and TLR9 interferon (IFN) induction. TLR7 and TLR9 in plasmacytoid dendritic cells (pDCs) recognize viral nucleic acids and trigger the production of interferon-α (IFN-α), an important cytokine for inducing an antiviral state in host cells. TLR7 and TLR9 mediated IFN-α induction requires the formation of a complex consisting of MyD88, TRAF6 and the interferon regulatory factor 7 (IRF7). IRF7 is a transcription factor that translocates into the nucleus when activated and initiates transcription of IFN-α. IRAK-1 was shown to directly phosphorylate IRF7 in vitro, and the kinase activity of IRAK-1 was shown to be essential for IRF7 transcriptional activation. It was subsequently shown that IRAK-1 is required for the activation of interferon regulatory factor 5 (IRF5). IRF5 is another transcription factor that induces IFN production following stimulation of TLR7, TLR8 and TLR9 by specific viruses. In order to be activated, IRF5 must be polyubiquitinated by TRAF6. It has been shown that TRAF6-mediated ubiquitination of IRF5 is dependent on the kinase activity of IRAK-1.
IRAK-1 has also been shown to play a critical role in TLR4 interleukin-10 (IL-10) induction. TLR4 recognizes bacterial LPS and triggers the transcription of IL-10, a cytokine regulating the inflammatory response. IL-10 transcription is activated by signal transducer and activator of transcription 3 (STAT3). IRAK-1 forms a complex with STAT3 and the IL-10 promoter element in the nucleus and is required for STAT3 phosphorylation and activation of IL-10 transcription.
IRAK-2 plays an important role in TLR-mediated NF-κB activation. Knocking down IRAK-2 has been shown to impair NF-κB activation by TLR3, TLR4 and TLR8. The mechanism of how IRAK-2 functions is still unknown, however, IRAK-2 has been shown to interact with a TIR adaptor protein that does not bind to IRAK-1, called Mal/TIRAP. Mal/TIRAP has been specifically implicated in TLR2 and TLR4 mediated NF-κB signaling. In addition, it has been shown that IRAK-2 is recruited to the TLR3 receptor. IRAK-2 is the only IRAK family member that is known to play a role in TLR3 signaling.
One of the most distinct features of IRAK-M is that it is a negative regulator of TLR signaling to prevent excessive inflammation. It is thought that IRAK-M enhances the binding of MyD88 to IRAK-1 and IRAK-4, preventing IRAK-1 from dissociating from the receptor complex and inducing downstream NF-κB and MAPK signaling. It has also been shown that IRAK-M negatively regulates the alternative NF-κB pathway in TLR2 signaling. The alternative NF-κB pathway is predominantly triggered by CD40, the lymphotoxin β receptor (LT), and the B-cell activating receptor belonging to the TNF family (BAFF receptor). The alternative NF-κB pathway involves the activation of NF-κB-inducing kinase (NIK) and subsequent phosphorylation of the transcription factors p100/RelB in an IKKα-dependent mechanism. It was observed that IRAK-M knockout resulted in increased induction of the alternative NF-κB pathway but not the classical pathway. The mechanism by which IRAK-M inhibits NF-κB signaling is still unknown.
IRAK-4 is an essential component of MyD88 mediated signaling pathways and is therefore critical for both IL-1R and TLR signaling. MyD88 acts as a scaffold protein for the interaction between IRAK-1 and IRAK-4, allowing IRAK-4 to phosphorylate IRAK-1, leading to autophosphorylation and activation of IRAK-1. IRAK-4 is critical for IL-1R and TLR NF-κB and MAPK signaling pathways as well as TLR7/9 MyD88-mediated interferon activation.
Role in disease
Interleukin 1 is a cytokine that acts locally and systemically in the innate immune system. IL-1α and IL-1β are known for causing inflammation, but can also cause induction of other proinflammatory cytokines, and fever. Because IRAKs are a crucial step in the IL-1 receptor signalling pathway, deficiencies or over-expression of IRAKs can cause suboptimal or overactive cellular response to IL-1α and IL-1β. Thus Interleukin-1 Receptor Associated Kinases are promising therapeutic targets for autoimmune-, immunodeficiency-, and cancer-related disorders.
Cancer
Inflammation signalling is known to be a major factor in many cancer types, and an inflammatory microenvironment is a key aspect of human tumours. IL-1β, which activates the inflammatory signalling pathway containing IRAKs, is directly involved in tumour cell growth, angiogenesis, invasion, and metastasis. In tumour cells containing the L265P MyD88 mutant, protein-signalling complexes spontaneously assemble, activating IRAK-4's kinase activity and promoting inflammation and growth independent of Interleukin-1 signalling. IRAK-4 inhibiting drugs are thus a potential therapeutic treatment for lymphoid malignancies with the L265P MyD88 mutation, especially in Waldenström's Macroglobulinaemia, in which BTK and IRAK1/4 inhibitors have shown promising but unconfirmed results.
In 2013, Garrett Rhyasen and his colleagues at the University of Cincinnati studied the contribution of active IRAK-1 and IRAK-4 in human myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML). They found that IRAK1 knockout therapy incited apoptosis and impaired leukemic progenitor activity. They also established that IRAK4, while imperative to proliferation of human hematologic malignancies, is not imperative to the pathogenesis of MDS/AML. Further testing of IRAK-inhibitory therapy could prove essential to cancer therapy development.
Autoimmune Disorders
Autoimmune disorders such as MS, rheumatoid arthritis, lupus and psoriasis are caused by innate immune system deregulation inducing chronic inflammation. In most cases, inhibition of IRAK-1 and IRAK-4 is suspected to provide the most effective targets for knockout drugs, as their functions are integral to the cytokine pathways that induce chronic inflammation.
Mutations in the gene for IRAK-M have been identified as contributors to early onset asthma. Compromised IRAK-M leads to overproduction of inflammatory cytokines in the lungs, eventually triggering T cell mediated allergic reactions and exacerbation of asthma symptoms. Researchers have proposed that increasing IRAK-M function in these individuals may moderate asthma symptoms.
References
EC 2.7.11
Immunology | Interleukin-1 receptor associated kinase | Biology | 3,799 |
31,425,310 | https://en.wikipedia.org/wiki/Tapuy | Tapuy, also spelled tapuey or tapey, is a rice wine produced in the Philippines. It is a traditional beverage originated from Banaue and Mountain Province, where it is used for important occasions such as weddings, rice harvesting ceremonies, fiestas and cultural fairs. It is produced from either pure glutinous rice or a combination of glutinous and non-glutinous rice together with roots, ginger extract, and a powdered starter culture locally known as bubod. Tapuy is an Ilocano name. The wine is more commonly called baya or bayah in Igorot languages.
Etymology
Tapuy is derived from Proto-Malayo-Polynesian *tapay ("fermented [food]"), which in turn is derived from Proto-Austronesian * ("fermented [food]"). Derived cognates have come to refer to a wide variety of fermented food throughout Austronesia, including yeasted bread (Tagalog: tinapay) and rice wine. Tapuy is a variant of the widespread Austronesian rice paste or rice wine tapai (or tapay in Philippine languages).
Proto-Malayo-Polynesian *tapay-an also refers to large earthen jars originally used for this fermentation process. Cognates in modern Austronesian languages include tapayan (Tagalog), tepayan (Iban), and tempayan (Javanese and Malay).
Description
The characteristics of tapuy, as in many other rice wines, depend on the process and ingredients used by each manufacturer. However, in general, tapuy is a clear full-bodied wine with a strong alcoholic flavor, moderately sweet and often leaves a lingering taste. The alcohol content is 28 proof or about 14 percent. It has no sulfites (which are preservatives found in other wines) that sometimes cause adverse reactions like hang-over and allergies. Tapuy is also not diluted with water and has no sugar added.
The process of producing commercial tapuy starts with weighing and washing selected rice. Then, the rice is cooked, cooled and inoculated with a natural starter culture, locally known as bubod. After that, a process of natural pre-fermentation and natural fermentation follows. Once the fermentation is completed, the fresh wine can be harvested and pasteurized. After the pasteurization, the rice wine is aged, filtered and clarified before bottling. Finally, bottled rice wine is pasteurized once again before sealing.
Traditions
Once a year, during the Ipitik festival, rice wine brewers from all over the Mountain Province gather together to bring their best rice wine concoctions. The one whole day of merrymaking is filled with wood-carving contests, art exhibits, and large cook-outs that aim to feed hundreds of people. There are gong players, mostly children who dance while playing a lively beat. Indeed, the tapuy rice wine is deeply rooted in one of the colorful and diverse cultures in the Philippines.
See also
Pangasi
References
External links
The Philippine Rice Research Institute website
How to make tapuy
How to make tapuy starter culture (bubod)
Fermented drinks
Philippine alcoholic drinks
Filipino cuisine
Rice wine | Tapuy | Biology | 669 |
71,108,476 | https://en.wikipedia.org/wiki/Carolingian%20pound | The Carolingian pound (, ), also called Charlemagne's pound or the Charlemagne pound, was a unit of weight that emerged during the reign of Charlemagne. It served both as a trading weight and a coinage weight. It had a mass of about 408 g and was introduced in as part of Charlemagne's monetary reform around AD 793/94. This stipulated that 240 denarii (= pfennigs) were to be minted from one pound weight of silver.
The units of weight that emerged over time as a result of the Carolingian monetary system and its associated pound, or Karlspfund, were of great importance for large parts of Europe. The basic features of this monetary system, which was based on the Carolingian pound, continued to exist in England until 1971. Initially, the Carolingian pound was valid across the whole of the Carolingian Empire and, to a lesser extent, in the Holy Roman Empire under the Ottonian dynasty that followed. Under the Salians, who ruled from 1024, the Cologne Mark was introduced. This amounted to 576 thousandths of the Carolingian pound and became the dominant coinage weight. Similar modifications were made to trading weights at the same time.
Origin
The Karlspfund is first attested by a contemporary manuscript, as well as reports from the Council of Frankfurt in 794. These say that new coins, new deniers or denars, were now to be minted in the Empire. These deniers later became known as pfennigs. The exact derivation of the target weight of the Charlemagne pound itself has yet to be clarified.
Today, the original weight of the Charlemagne pound can be determined primarily by weighing surviving Carolingian coins from the early period, although these vary by several per cent. In the literature, the Karlspfund is often given as 408.25 g or approximately 408 g; the latter is equivalent to a denier of exactly 1.7 g.
Derivatives
France
From the middle of the 12th century, several variants of the Carolingian pound emerged in France which were legal tender at different times.
Paris pound (Libra parisi). The Paris pound, at almost 460 g, had been in use since the time of Louis the Fat and was 9⁄8 of the Carolingian pound.
Tours pound. At the beginning of the 13th century, the livre tournois, the pound of the city of Tours, was used in France. This was identical to the "earlier" livre de Troyes in use at the same time in Troyes. The livre tournois was exactly 9⁄10 of the Karlspfund.
Troy pound. At the same time, a new system was created in Troyes, the "later" livre de Troyes. This was legal throughout France from 1266 at the latest until 1 August 1793. It was officially and unambiguously also called the "livre des poids-de-marc" (pound of the marc weights). It was 6⁄5 of the Karlspfund.
The English pound weight, which was adopted very early and directly from France, shows that the value of the Carolingian pound in France was a little lower for a long time.
The weight of the livre des poids-de-marc also corresponds very closely to one seventieth of the mass of a French cubic foot of water, which is likely why the weight measure was slightly increased in France. The ratio of the two is about 3136:3125, a difference of only +0.35%.
England
The English system of Troy weights probably originates in the French market town of Troyes, where English merchants traded at least as early as the early 9th century. The name troy is first attested in 1390, describing the weight of a platter, in an account of the travels in Europe of the Earl of Derby. The English weights were based on the older value of the livre de Troyes, which was derived from the Carolingian pound. They can thus be compared directly to the Karlspfund.
The metrological numerical values differed from their official values (1958) by only about 0.0017%. The former correspond to an English grain of exactly 64.8 mg.
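The 0.0017% figure can be checked with a short calculation — a minimal sketch, assuming the standard subdivision of 5,760 grains to the troy pound and the official grain of 64.79891 mg (both values are assumptions supplied here for illustration, not stated above):

```python
# Sketch: gap between the metrological troy values and the official (1958) ones.
grains_per_troy_pound = 5760        # standard troy subdivision (assumption)
metrological_grain_mg = 64.8        # exact grain given in the text
official_grain_mg = 64.79891        # official 1958 grain (assumption)

metrological_pound_g = grains_per_troy_pound * metrological_grain_mg / 1000
official_pound_g = grains_per_troy_pound * official_grain_mg / 1000

deviation_pct = (metrological_pound_g / official_pound_g - 1) * 100
print(f"{metrological_pound_g:.4f} g vs {official_pound_g:.7f} g -> {deviation_pct:.4f}%")
# -> 373.2480 g vs 373.2417216 g -> 0.0017%
```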
Holy Roman Empire
Many of the important weights in the German Holy Roman Empire, such as the Vienna pound, the Cologne mark and the Nuremberg apothecary's pound were derived from the Charlemagne pound. For example, the ratio of the Cologne mark to the Karlspfund is exactly 576:1000.
The relatively large deviation of the empirical Karlspfund of almost 0.4% – which is still within the coefficient of variation determined for old weights – is due to the later, slightly larger French version.
The so-called Customs Union mark of the German Customs Union was set at 233.8555 g in 1838, i.e. only around 0.105% less than its numerical value. The Cologne and Vienna marks maintained their ratio of 10 : 12. Thus, in creating their derivatives, the leading metrologists of the Holy Roman Empire preserved the Carolingian pound with outstanding precision for over a thousand years.
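The 0.105% figure can be reproduced from the 576:1000 ratio stated above and the 406.4256 g rounding of the Karlspfund given later in this article; a minimal sketch:

```python
# Sketch: gap between the 1838 Customs Union mark and its numerical value.
karlspfund_g = 406.4256                       # modern overall rounding (see below)
cologne_mark_g = karlspfund_g * 576 / 1000    # 576:1000 ratio stated above
customs_union_mark_g = 233.8555               # value fixed in 1838

deviation_pct = (1 - customs_union_mark_g / cologne_mark_g) * 100
print(f"numerical value {cologne_mark_g:.4f} g, deviation {deviation_pct:.3f}%")
# -> numerical value 234.1011 g, deviation 0.105%
```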
Carolingian pfennig
After the Carolingian monetary reform, the schilling (Latin solidus) was initially only a coin of account, the unminted gold equivalent of 12 silver denarii (denarius = pfennig). A schilling was the equivalent of 1/20 of a Carolingian pound in silver weight. At 12 pfennigs to the schilling, 240 Carolingian silver pfennigs were actually minted from one pound of silver.
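The accounting relationships are simple enough to spell out; a minimal sketch, using the approximate 408 g pound from the introduction:

```python
# Sketch of the Carolingian money of account: 1 pound = 20 schillings = 240 pfennigs.
pound_g = 408.0                  # approximate Carolingian pound (from the introduction)
schillings_per_pound = 20
pfennigs_per_schilling = 12

pfennigs_per_pound = schillings_per_pound * pfennigs_per_schilling   # 240
pfennig_weight_g = pound_g / pfennigs_per_pound                      # 1.7 g of silver
print(pfennigs_per_pound, pfennig_weight_g)                          # -> 240 1.7
```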
For historical units of length, the coefficient of variation is generally accurate to within ±0.2%. For ancient and medieval units of weight, a range of about (1.002³ − 1) ≈ 3/500 can be used. The ratio 126:125 and its reciprocal represent the higher metrological precision requirements of medieval weights.
Coefficients of variation become considerably smaller from around the Renaissance period. In addition, a distinction must be made between the actual and known values of the measures themselves and the tolerances that inevitably occur in "mass production". At that time, purely for technical reasons, the variation for a pfennig was no better than 1.6 to 1.8 g.
Weight of the Carolingian Pound
The weight given for the Carolingian pound varies slightly in the literature for the following reasons:
406½ grams is a good approximation of the weight of the Carolingian pound. Its only disadvantage is that the corresponding denarius of 1.69375 g has five digits after the decimal point.
405 g gives a denarius of 1.6875 g, with four digits after the decimal point. This value is based on the English weight system.
406 g would give a repeating decimal for the denarius (1.691666… g) and is based on the German Customs Union mark.
408 g is slightly high and equates to 5⁄6 of the old French pound. It gives a denarius with a single digit after the decimal point (1.7 g).
408.24 g is sometimes used and may also be rounded to 408.25 g.
406.4256 grams is an average that represents a modern overall rounding of all weights, including those derived from the Carolingian pound. However, it does not mean that Carolingian metrologists could determine their pound value to milligram precision, nor that modern research has determined the historical value to that level of precision.
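The decimal behavior described in this list can be checked by dividing each candidate pound value by the 240 deniers struck per pound; a short sketch:

```python
# Sketch: denier weight implied by each candidate value of the Carolingian pound.
candidates_g = [406.5, 405.0, 406.0, 408.0, 408.24, 406.4256]
for pound_g in candidates_g:
    denier_g = pound_g / 240      # 240 deniers were minted from one pound
    print(f"{pound_g:9.4f} g pound -> {denier_g:.7f} g denier")
# 406.5 -> 1.69375 (five decimals); 405 -> 1.6875; 406 -> 1.6916667 (repeating);
# 408 -> 1.7; 408.24 -> 1.701; 406.4256 -> 1.69344
```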
Footnotes
References
Charlemagne
Units of mass
Obsolete units of measurement
Units of measurement of the Holy Roman Empire | Carolingian pound | Physics,Mathematics | 1,572 |
72,628,992 | https://en.wikipedia.org/wiki/Siemens%20M55 | The Siemens M55 was a mobile phone introduced by Siemens in 2003. At the time, it was a high-end phone and one of the first color phones by Siemens, with a 4096-color screen. It was a lower-end counterpart of the Siemens S55, with Bluetooth and infrared connectivity removed. It had a 700 mAh battery and no video recorder. Fully charged, it could last around 11 days. There were two models of the M55: one with an orange keyboard and the other with a grey keyboard.
References
M55 | Siemens M55 | Technology | 114 |
19,001 | https://en.wikipedia.org/wiki/Microsoft | Microsoft Corporation is an American multinational technology conglomerate headquartered in Redmond, Washington. Founded in 1975, the company became highly influential in the rise of personal computers through software like Windows, and the company has since expanded to Internet services, cloud computing, video gaming and other fields. Microsoft is the largest software maker, one of the most valuable public U.S. companies, and one of the most valuable brands globally.
Microsoft was founded by Bill Gates and Paul Allen to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Windows. During the 41 years from 1980 to 2021, Microsoft released 9 versions of MS-DOS at a median interval of 2 years, and 13 versions of Microsoft Windows at a median interval of 3 years. The company's 1986 initial public offering (IPO) and subsequent rise in its share price created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market. Steve Ballmer replaced Gates as CEO in 2000; his tenure saw Microsoft's then-largest corporate acquisition, of Skype Technologies, in 2011, an increased focus on hardware that led to its first in-house PC line, the Surface, in 2012, and the formation of Microsoft Mobile through the acquisition of Nokia's devices division. Since Satya Nadella took over as CEO in 2014, the company has shifted its focus toward cloud computing and completed the large acquisition of LinkedIn for $26.2 billion in 2016. Under Nadella's direction, the company has also expanded its video gaming business to support the Xbox brand, establishing the Microsoft Gaming division in 2022, which is currently the third-largest gaming company in the world by revenue, following the 2023 acquisition of Activision Blizzard for $68.7 billion.
Microsoft has been market-dominant in the IBM PC–compatible operating system market and the office software suite market since the 1990s. Its best-known software products are the Windows line of operating systems and the Microsoft Office and Microsoft 365 suite of productivity applications, which most notably include the Word word processor and Excel spreadsheet editor. Its flagship hardware products are the Surface lineup of personal computers and Xbox video game consoles, the latter of which includes the Xbox network; the company also provides a range of consumer Internet services such as Bing web search, the MSN web portal, the Outlook.com email service and the Microsoft Store. In the enterprise and development fields, Microsoft most notably provides the Azure cloud computing platform, Microsoft SQL Server database software, and Visual Studio.
Microsoft is considered one of the Big Five American information technology companies, alongside Alphabet, Amazon, Apple, and Meta. In April 2019, Microsoft reached a trillion-dollar market cap, becoming the third public U.S. company to be valued at over $1 trillion. It has been criticized for its monopolistic practices, and the company's software has been criticized for problems with ease of use, robustness, and security.
History
1972–1985: Founding
Childhood friends Bill Gates and Paul Allen sought to make a business using their skills in computer programming. In 1972, they founded Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. Gates enrolled at Harvard University while Allen pursued a degree in computer science at Washington State University, though he later dropped out to work at Honeywell. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems's (MITS) Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. Gates called MITS and claimed that he had a working interpreter, and MITS requested a demonstration. Allen worked on a simulator for the Altair while Gates developed the interpreter, and it worked flawlessly when they demonstrated it to MITS in March 1975 in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as CEO, and Allen suggested the name "Micro-Soft", short for micro-computer software. In August 1977, the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office of ASCII Microsoft. Microsoft moved its headquarters to Bellevue, Washington, in January 1979.
Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xenix, but it was MS-DOS that solidified the company's dominance. IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS to be used in the IBM Personal Computer (IBM PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products which it branded as MS-DOS, although IBM rebranded it to IBM PC DOS. Microsoft retained ownership of MS-DOS following the release of the IBM PC in August 1981. IBM had copyrighted the IBM PC BIOS, so other companies had to reverse engineer it for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating systems. Microsoft eventually became the leading PC operating systems vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press.
Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's lymphoma. Allen claimed in Idea Man: A Memoir by the Co-founder of Microsoft that Gates wanted to dilute his share in the company when he was diagnosed with Hodgkin's disease because Gates did not think Allen was working hard enough. Allen later invested in low-tech sectors, sports teams, commercial real estate, neuroscience, private space flight, and more.
1985–1994: Windows and Office
Microsoft released Windows 1.0 on November 20, 1985, as a graphical extension for MS-DOS, despite having begun jointly developing OS/2 with IBM that August. Microsoft moved its headquarters from Bellevue to Redmond, Washington, on February 26, 1986, and went public on March 13, with the resulting rise in stock creating an estimated four billionaires and 12,000 millionaires among Microsoft employees. Microsoft released its version of OS/2 to original equipment manufacturers (OEMs) on April 2, 1987. In 1990, the Federal Trade Commission examined Microsoft for possible collusion due to the partnership with IBM, marking the beginning of more than a decade of legal clashes with the government. Meanwhile, the company was at work on Microsoft Windows NT, which was heavily based on their copy of the OS/2 code. It shipped on July 21, 1993, with a new modular kernel and the 32-bit Win32 application programming interface (API), making it easier to port from 16-bit (MS-DOS-based) Windows. Microsoft informed IBM of Windows NT, and the OS/2 partnership deteriorated.
In 1990, Microsoft introduced the Microsoft Office suite which bundled separate applications such as Microsoft Word and Microsoft Excel. On May 22, Microsoft launched Windows 3.0, featuring streamlined user interface graphics and improved protected mode capability for the Intel 386 processor, and both Office and Windows became dominant in their respective areas.
On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statement that said: "Beginning in 1988 and continuing until July 15, 1994, Microsoft induced many OEMs to execute anti-competitive 'per processor' licenses. Under a per-processor license, an OEM pays Microsoft a royalty for each computer it sells containing a particular microprocessor, whether the OEM sells the computer with a Microsoft operating system or a non-Microsoft operating system. In effect, the royalty payment to Microsoft when no Microsoft product is being used acts as a penalty, or tax, on the OEM's use of a competing PC operating system. Since 1988, Microsoft's use of per processor licenses has increased."
1995–2007: Foray into the Web, Windows 95, Windows XP, and Xbox
Following Bill Gates's internal "Internet Tidal Wave memo" on May 26, 1995, Microsoft began to redefine its offerings and expand its product line into computer networking and the World Wide Web. Apart from a few new companies like Netscape, Microsoft was the only major, established company that acted fast enough to be a part of the World Wide Web practically from the start. Other companies, such as Borland, WordPerfect, Novell, IBM and Lotus, were much slower to adapt to the new situation, which would give Microsoft market dominance.
The company released Windows 95 on August 24, 1995, featuring pre-emptive multitasking, a completely new user interface with a novel start button, and 32-bit compatibility; similar to NT, it provided the Win32 API. Windows 95 came bundled with the online service MSN, which was at first intended to be a competitor to the Internet, and (for OEMs) Internet Explorer, a Web browser. Internet Explorer was not bundled with the retail Windows 95 boxes, because the boxes were printed before the team finished the Web browser; it was instead included in the Windows 95 Plus! pack. Backed by a high-profile marketing campaign and what The New York Times called "the splashiest, most frenzied, most expensive introduction of a computer product in the industry's history," Windows 95 quickly became a success. Branching out into new markets in 1996, Microsoft and General Electric's NBC unit created a new 24/7 cable news channel, MSNBC. Microsoft created Windows CE 1.0, a new OS designed for devices with low memory and other constraints, such as personal digital assistants. In October 1997, the Justice Department filed a motion in the Federal District Court, stating that Microsoft had violated an agreement signed in 1994 and asking the court to stop the bundling of Internet Explorer with Windows.
On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer, an old college friend of Gates and employee of the company since 1980, while creating a new position for himself as Chief Software Architect. Various companies including Microsoft formed the Trusted Computing Platform Alliance in October 1999 to (among other things) increase security and protect intellectual property through identifying changes in hardware and software. Critics decried the alliance as a way to enforce indiscriminate restrictions over how consumers use software, and over how computers behave, and as a form of digital rights management: for example, the scenario where a computer is not only secured for its owner but also secured against its owner as well. On April 3, 2000, a judgment was handed down in the case of United States v. Microsoft Corp., calling the company an "abusive monopoly." Microsoft later settled with the U.S. Department of Justice in 2004.
On October 25, 2001, Microsoft released Windows XP, unifying the mainstream and NT lines of OS under the NT codebase. The company released the Xbox later that year, entering the video game console market dominated by Sony and Nintendo. In March 2004, the European Union brought antitrust legal action against the company, alleging that it had abused its dominance with the Windows OS, resulting in a judgment of €497 million ($613 million) and a requirement that Microsoft produce new versions of Windows XP without Windows Media Player: Windows XP Home Edition N and Windows XP Professional N. In November 2005, the company's second video game console, the Xbox 360, was released. There were two versions, a basic version for $299.99 and a deluxe version for $399.99.
Increasingly present in the hardware business following the Xbox, Microsoft in 2006 released the Zune series of digital media players, a successor to its previous software platform, Portable Media Center. These expanded on previous hardware commitments from Microsoft following its original Microsoft Mouse in 1983; as of 2007 the company sold the best-selling wired keyboard (Natural Ergonomic Keyboard 4000), mouse (IntelliMouse), and desktop webcam (LifeCam) in the United States. That year the company also launched the Surface "digital table", later renamed PixelSense.
2007–2011: Microsoft Azure, Windows Vista, Windows 7, and Microsoft Stores
Released in January 2007, the next version of Windows, Vista, focused on features, security, and a redesigned user interface dubbed Aero. Microsoft Office 2007, released at the same time, featured a "Ribbon" user interface which was a significant departure from its predecessors. Relatively strong sales of both products helped to produce a record profit in 2007. The European Union imposed another fine of €899 million ($1.4 billion) for Microsoft's lack of compliance with the March 2004 judgment on February 27, 2008, saying that the company charged rivals unreasonable prices for key information about its workgroup and backoffice servers. Microsoft stated that it was in compliance and that "these fines are about the past issues that have been resolved". 2007 also saw the creation of a multi-core unit at Microsoft, following in the footsteps of server companies such as Sun and IBM.
Gates retired from his role as Chief Software Architect on June 27, 2008, a decision announced in June 2006, while retaining other positions related to the company in addition to being an advisor for the company on key projects. Azure Services Platform, the company's entry into the cloud computing market for Windows, launched on October 27, 2008. On February 12, 2009, Microsoft announced its intent to open a chain of Microsoft-branded retail stores, and on October 22, 2009, the first retail Microsoft Store opened in Scottsdale, Arizona; the same day Windows 7 was officially released to the public. Windows 7's focus was on refining Vista with ease-of-use features and performance enhancements, rather than an extensive reworking of Windows.
As the smartphone industry boomed in the late 2000s, Microsoft had struggled to keep up with its rivals in providing a modern smartphone operating system, falling behind Apple and Google-sponsored Android in the United States. As a result, in 2010 Microsoft revamped their aging flagship mobile operating system, Windows Mobile, replacing it with the new Windows Phone OS that was released in October that year. It used a new user interface design language, codenamed "Metro", which prominently used simple shapes, typography, and iconography, utilizing the concept of minimalism. Microsoft implemented a new strategy for the software industry, providing a consistent user experience across all smartphones using the Windows Phone OS. It launched an alliance with Nokia in 2011 and Microsoft worked closely with the company to co-develop Windows Phone, but remained partners with long-time Windows Mobile OEM HTC. Microsoft is a founding member of the Open Networking Foundation started on March 23, 2011. Fellow founders were Google, HPE Networking, Yahoo!, Verizon Communications, Deutsche Telekom and 17 other companies. This nonprofit organization is focused on providing support for a cloud computing initiative called Software-Defined Networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers, and other networking areas.
2011–2014: Windows 8/8.1, Xbox One, Outlook.com, and Surface devices
Following the release of Windows Phone, Microsoft undertook a gradual rebranding of its product range throughout 2011 and 2012, with the corporation's logos, products, services, and websites adopting the principles and concepts of the Metro design language. Microsoft unveiled Windows 8, an operating system designed to power both personal computers and tablet computers, in Taipei in June 2011. A developer preview was released on September 13, which was subsequently replaced by a consumer preview on February 29, 2012, and released to the public in May. The Surface was unveiled on June 18, becoming the first computer in the company's history to have its hardware made by Microsoft. On June 25, Microsoft paid US$1.2 billion to buy the social network Yammer. On July 31, they launched the Outlook.com webmail service to compete with Gmail. On September 4, 2012, Microsoft released Windows Server 2012.
In July 2012, Microsoft sold its 50% stake in MSNBC, which it had run as a joint venture with NBC since 1996. On October 1, Microsoft announced its intention to launch a news operation, part of a new-look MSN, with Windows 8 later in the month. On October 26, 2012, Microsoft launched Windows 8 and the Microsoft Surface. Three days later, Windows Phone 8 was launched. To cope with the potential for an increase in demand for products and services, Microsoft opened a number of "holiday stores" across the U.S. to complement the increasing number of "bricks-and-mortar" Microsoft Stores that opened in 2012. On March 29, 2013, Microsoft launched a Patent Tracker.
In August 2012, the New York City Police Department announced a partnership with Microsoft for the development of the Domain Awareness System which is used for Police surveillance in New York City.
The Kinect, a motion-sensing input device made by Microsoft and designed as a video game controller, first introduced in November 2010, was upgraded for the 2013 release of the Xbox One video game console. Kinect's capabilities were revealed in May 2013: an ultra-wide 1080p camera, function in the dark due to an infrared sensor, higher-end processing power and new software, the ability to distinguish between fine movements (such as a thumb movement), and determining a user's heart rate by looking at their face. Microsoft filed a patent application in 2011 that suggests that the corporation may use the Kinect camera system to monitor the behavior of television viewers as part of a plan to make the viewing experience more interactive. On July 19, 2013, Microsoft stock suffered its biggest one-day percentage sell-off since the year 2000, after the company's fourth-quarter report raised concerns among investors about the poor showings of both Windows 8 and the Surface tablet. Microsoft lost more than US$32 billion in market value.
In line with the maturing PC business, in July 2013 Microsoft announced that it would reorganize the business into four new business divisions: Operating Systems, Apps, Cloud, and Devices. All previous divisions were to be dissolved into the new divisions without any workforce cuts. On September 3, 2013, Microsoft agreed to buy Nokia's mobile unit for $7 billion, following Amy Hood taking the role of CFO.
2014–2020: Windows 10, Microsoft Edge, and HoloLens
On February 4, 2014, Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella, who previously led Microsoft's Cloud and Enterprise division. On the same day, John W. Thompson took on the role of chairman, in place of Bill Gates, who continued to participate as a technology advisor. Thompson became the second chairman in Microsoft's history. On April 25, 2014, Microsoft acquired Nokia Devices and Services for $7.2 billion. This new subsidiary was renamed Microsoft Mobile Oy. On September 15, 2014, Microsoft acquired the video game development company Mojang, best known for Minecraft, for $2.5 billion. On June 8, 2017, Microsoft acquired Hexadite, an Israeli security firm, for $100 million.
On January 21, 2015, Microsoft announced the release of its first interactive whiteboard, the Microsoft Surface Hub. On July 29, 2015, Windows 10 was released, with its server sibling, Windows Server 2016, released in September 2016. In Q1 2015, Microsoft was the third-largest maker of mobile phones, selling 33 million units (7.2% of all). A large majority (at least 75%) of them did not run any version of Windows Phone — those other phones are not categorized as smartphones by Gartner — while in the same timeframe 8 million Windows smartphones (2.5% of all smartphones) were made by all manufacturers (mostly Microsoft). Microsoft's share of the U.S. smartphone market in January 2016 was 2.7%. During the summer of 2015 the company lost $7.6 billion related to its mobile-phone business, firing 7,800 employees.
In 2015, the construction of a data center in Mecklenburg County, Virginia led to the destruction of a historic African American cemetery despite archeological recommendations for preservation.
On March 1, 2016, Microsoft announced the merger of its PC and Xbox divisions, with Phil Spencer announcing that Universal Windows Platform (UWP) apps would be the focus for Microsoft's gaming in the future. On January 24, 2017, Microsoft showcased Intune for Education at the BETT 2017 education technology conference in London. Intune for Education is a new cloud-based application and device management service for the education sector. In May 2016, the company announced it was laying off 1,850 workers, and taking an impairment and restructuring charge of $950 million.
In June 2016, Microsoft announced a project named Microsoft Azure Information Protection. It aims to help enterprises protect their data as it moves between servers and devices. In November 2016, Microsoft joined the Linux Foundation as a Platinum member during Microsoft's Connect(); developer event in New York. The cost of each Platinum membership is US$500,000 per year. Some analysts deemed this unthinkable ten years prior: in 2001, then-CEO Steve Ballmer had called Linux a "cancer". Microsoft planned to launch a preview of Intune for Education "in the coming weeks", with general availability scheduled for spring 2017, priced at $30 per device, or through volume licensing agreements.
In January 2018, Microsoft patched Windows 10 to account for CPU problems related to Intel's Meltdown security breach. The patch led to issues with Microsoft Azure virtual machines reliant on Intel's CPU architecture. On January 12, Microsoft released PowerShell Core 6.0 for the macOS and Linux operating systems. In February 2018, Microsoft ended notification support for its Windows Phone devices, which effectively ended firmware updates for the discontinued devices. In March 2018, Microsoft recast Windows 10 S as a mode of the Windows operating system rather than a separate and unique operating system. That month the company also established guidelines that bar Office 365 users from using profanity in private documents.
In April 2018, Microsoft released the source code for Windows File Manager under the MIT License to celebrate the program's 20th anniversary. That month the company further expressed willingness to embrace open source initiatives by announcing Azure Sphere as its own derivative of the Linux operating system. In May 2018, Microsoft partnered with 17 American intelligence agencies to develop cloud computing products. The project is dubbed "Azure Government" and has ties to the Joint Enterprise Defense Infrastructure (JEDI) surveillance program. On June 4, 2018, Microsoft officially announced the acquisition of GitHub for $7.5 billion, a deal that closed on October 26, 2018. On July 10, 2018, Microsoft revealed the Surface Go platform to the public. Later in the month, it made Microsoft Teams free of charge. In August 2018, Microsoft released two projects called Microsoft AccountGuard and Defending Democracy. It also unveiled Snapdragon 850 compatibility for Windows 10 on the ARM architecture.
In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using the Microsoft Azure application suite for Internet of things (IoT) technologies related to water management. Developed in part by researchers from Kindai University, the water pump mechanisms use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide. The specific computer programs used in the process fall under the Azure Machine Learning and the Azure IoT Hub platforms.
In September 2018, Microsoft discontinued Skype Classic. On October 10, 2018, Microsoft joined the Open Invention Network community despite holding more than 60,000 patents. In November 2018, Microsoft agreed to supply 100,000 Microsoft HoloLens headsets to the United States military in order to "increase lethality by enhancing the ability to detect, decide and engage before the enemy." In November 2018, Microsoft introduced Azure Multi-Factor Authentication for Microsoft Azure. In December 2018, Microsoft announced Project Mu, an open source release of the Unified Extensible Firmware Interface (UEFI) core used in Microsoft Surface and Hyper-V products. The project promotes the idea of Firmware as a Service. In the same month, Microsoft announced the open source implementation of Windows Forms and the Windows Presentation Foundation (WPF), which will allow for further movement of the company toward the transparent release of key frameworks used in developing Windows desktop applications and software. December also saw the company discontinue the Microsoft Edge [Legacy] browser project in favor of the "New Edge" browser project, featuring a Chromium-based backend.
On February 20, 2019, Microsoft Corp said it would offer its cybersecurity service AccountGuard in 12 new markets in Europe, including Germany, France, and Spain, to close security gaps and protect customers in the political space from hacking. In February 2019, hundreds of Microsoft employees protested the company's war profiteering from a $480 million contract to develop virtual reality headsets for the United States Army.
2020–present: Acquisitions, Xbox Series X/S, and Windows 11
On March 26, 2020, Microsoft announced it was acquiring Affirmed Networks for about $1.35 billion. Due to the COVID-19 pandemic, Microsoft closed all of its retail stores indefinitely due to health concerns. On July 22, 2020, Microsoft announced plans to close its Mixer service, planning to move existing partners to Facebook Gaming.
On July 31, 2020, it was reported that Microsoft was in talks to acquire TikTok after the Trump administration ordered ByteDance to divest ownership of the application to the U.S. On August 3, 2020, after speculation on the deal, Donald Trump stated that Microsoft could buy the application, however, it should be completed by September 15, 2020, and that the United States Department of the Treasury should receive a portion if it were to go through.
On August 5, 2020, Microsoft stopped its xCloud game streaming test for iOS devices. According to Microsoft, the future of xCloud on iOS remains unclear and potentially out of Microsoft's hands. Apple has imposed a strict limit on "remote desktop clients" which means applications are only allowed to connect to a user-owned host device or gaming console owned by the user. On September 21, 2020, Microsoft announced its intent to acquire video game company ZeniMax Media, the parent company of Bethesda Softworks, for about $7.5 billion, with the deal expected to close in the second half of the 2021 fiscal year. On March 9, 2021, the acquisition was finalized and ZeniMax Media became part of Microsoft's Xbox Game Studios division. The total price of the deal was $8.1 billion.
On September 22, 2020, Microsoft announced that it had an exclusive license to use OpenAI's GPT-3 artificial intelligence language generator. GPT-3's predecessor, GPT-2, had made headlines for being "too dangerous to release" and had numerous capabilities, including designing websites, prescribing medication, answering questions, and penning articles.
On November 10, 2020, Microsoft released the Xbox Series X and Xbox Series S video game consoles.
In February 2021, Microsoft released Azure Quantum for public preview. The public cloud computing platform provides access to quantum software and quantum hardware including trapped ion, neutral atom, and superconducting systems.
In April 2021, Microsoft announced it would buy Nuance Communications for approximately $16 billion. The acquisition of Nuance was completed in March 2022. In 2021, in part due to the strong quarterly earnings spurred by the COVID-19 pandemic, Microsoft's valuation came to nearly $2 trillion. The increased necessity for remote work and distance education drove demand for cloud computing and grew the company's gaming sales.
On June 24, 2021, Microsoft announced Windows 11 during a livestream. The announcement caused some confusion, as Microsoft had previously said that Windows 10 would be the last version of the operating system. Windows 11 was slated for release in the third quarter of 2021 and was released to the general public on October 5, 2021.
In September 2021, it was announced that the company had acquired Takelessons, an online platform that connects students and tutors in numerous subjects. The acquisition positioned Microsoft to grow its presence in the market of providing online education to large numbers of people. In the same month, Microsoft acquired Australia-based video editing software company Clipchamp.
In October 2021, Microsoft announced that it began rolling out end-to-end encryption (E2EE) support for Microsoft Teams calls in order to secure business communication while using video conferencing software. Users can ensure that their calls are encrypted and can utilize a security code that both parties on a call must verify on their respective ends. On October 7, Microsoft acquired Ally.io, a software service that measures companies' progress against OKRs. Microsoft plans to incorporate Ally.io into its Viva family of employee experience products.
On January 18, 2022, Microsoft announced the acquisition of American video game developer and holding company Activision Blizzard in an all-cash deal worth $68.7 billion. Activision Blizzard is best known for producing franchises including Warcraft, Diablo, Call of Duty, StarCraft, Candy Crush Saga, Crash Bandicoot, Spyro, Tony Hawk's, Guitar Hero, and Overwatch. Activision and Microsoft each released statements saying the acquisition was to benefit their businesses in the metaverse; many saw Microsoft's acquisition of video game studios as an attempt to compete against Meta Platforms, with TheStreet referring to Microsoft wanting to become "the Disney of the metaverse". Microsoft also named Phil Spencer, head of the Xbox brand since 2014, the inaugural CEO of the newly established Microsoft Gaming division, which now houses the Xbox operations team and the three publishers in the company's portfolio (Xbox Game Studios, ZeniMax Media, Activision Blizzard). Microsoft did not release statements regarding Activision's legal controversies over employee abuse, but reports alleged that Activision CEO Bobby Kotick, a major target of the controversy, would leave the company after the acquisition was finalized. The deal was closed on October 13, 2023.
In December 2022, Microsoft announced a new 10-year deal with the London Stock Exchange Group for products including Microsoft Azure; Microsoft acquired around 4% of LSEG as part of the deal.
In January 2023, CEO Satya Nadella announced Microsoft would lay off some 10,000 employees. The announcement came a day after the company hosted a Sting concert for 50 people, including Microsoft executives, in Davos, Switzerland.
On January 23, 2023, Microsoft announced a new multi-year, multi-billion dollar investment deal with ChatGPT developer OpenAI.
In June 2023, Microsoft released Azure Quantum Elements to run molecular simulations and calculations in computational chemistry and materials science using a combination of AI, high-performance computing and quantum computing. The service includes Copilot, a GPT-4 based large language model tool to query and visualize data, write code, initiate simulations, and educate researchers.
At a November 2023 developer conference, Microsoft announced two new custom-designed computing chips: The Maia chip, designed to run large language models, and Cobalt CPU, designed to power general cloud services on Azure.
On November 20, 2023, Satya Nadella announced that Sam Altman, who had been ousted as CEO of OpenAI just days earlier, and Greg Brockman, who had resigned as president, would join Microsoft to lead a new advanced AI research team. However, the plan was short-lived, as Altman was subsequently reinstated as OpenAI's CEO and Brockman rejoined the company amid pressure from OpenAI's employees and investors on its board. In March 2024, Inflection AI's cofounders Mustafa Suleyman and Karen Simonyan announced their departure from the company in order to start Microsoft AI, with Microsoft acqui-hiring nearly the entirety of its 70-person workforce. As part of the deal, Microsoft paid Inflection $650 million to license its technology.
In January 2024, Microsoft became the most valuable publicly traded company. That same month, the company announced Copilot Pro, a subscription artificial intelligence offering for small businesses.
In April 2024, Microsoft made a $1.5 billion investment in the Emirati AI firm G42. As part of the deal, G42 said it would use the Microsoft Azure platform for its AI development and deployment. Later that month, Microsoft unveiled plans to invest $1.7 billion in developing AI and cloud infrastructure in Indonesia. The plan includes establishment of data centers and partnerships to support digital transformation efforts.
In May 2024, Microsoft announced a $3.3 billion investment to build an artificial intelligence hub in southeast Wisconsin, tripling its initial proposal. This initiative, unveiled by President Joe Biden in Racine County, includes constructing a data center, creating 2,300 construction jobs by 2025, and 2,000 permanent jobs over time, alongside establishing an AI co-innovation lab at UW-Milwaukee to train up to 1,000 individuals by 2030.
In June 2024, Microsoft announced it would be laying off 1,000 employees from the company's mixed reality and Azure cloud computing divisions.
In June 2024, Microsoft announced that they were building a "hyperscale data centre" in South East Leeds. In July 2024, it was reported that the company was laying off its diversity, equity, and inclusion (DEI) team.
On July 19, 2024, a global IT outage impacted Microsoft services, affecting businesses, airlines, and financial institutions worldwide. The outage was traced back to a flawed update of CrowdStrike's cybersecurity software, which resulted in Microsoft systems crashing and causing disruptions across various sectors. Despite CrowdStrike's CEO George Kurtz clarifying that the issue was not a cyberattack, the incident had widespread consequences, leading to delays in air travel, financial transactions, and medical services globally. Microsoft stated that the underlying cause had been fixed but acknowledged ongoing residual impacts on some Microsoft 365 apps and services.
In September 2024, BlackRock and Microsoft announced a $30 billion fund, the Global AI Infrastructure Investment Partnership, to invest in AI infrastructure such as data centers and energy projects. The fund has the potential to reach $100 billion with debt financing, and partners include Abu Dhabi-backed MGX and Nvidia, which will provide AI expertise. Investments will primarily focus on the U.S., with some in partner countries. Microsoft also announced the relaunch of its controversial tool, Recall, in November 2024 after addressing privacy concerns. Initially criticized for taking regular screenshots without user consent, Recall was changed to an opt-in feature rather than being enabled by default. The UK's Information Commissioner's Office monitored the situation and noted the adjustments, which included enhanced security measures like encryption and biometric access. While experts regarded these changes as improvements, they advised caution, with some recommending further testing before users opted in.
Corporate affairs
Microsoft is ranked No. 14 in the 2022 Fortune 500 rankings of the largest United States corporations by total revenue, and it was the world's largest software maker by revenue in 2022 according to Forbes Global 2000. In 2018, Microsoft became the most valuable publicly traded company in the world, a position it has repeatedly traded with Apple in the years since. In April 2019, Microsoft reached a trillion-dollar market cap, becoming the third U.S. public company to be valued at over $1 trillion. Microsoft has the third-highest global brand valuation, and it is one of only two U.S.-based companies with a prime credit rating of AAA.
Board of directors
The company is run by a board of directors made up of mostly company outsiders, as is customary for publicly traded companies. Members of the board of directors as of December 2023 are Satya Nadella, Reid Hoffman, Hugh Johnston, Teri List, Sandi Peterson, Penny Pritzker, Carlos Rodriguez, Charles Scharf, John W. Stanton, John W. Thompson, Emma Walmsley and Padmasree Warrior.
Board members are elected every year at the annual shareholders' meeting using a majority vote system. There are four committees within the board that oversee more specific matters. These committees include the Audit Committee, which handles accounting issues with the company including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Governance and Nominating Committee, which handles various corporate matters including the nomination of the board; and the Regulatory and Public Policy Committee, which includes legal/antitrust matters, along with privacy, trade, digital safety, artificial intelligence, and environmental sustainability.
On March 13, 2020, Gates announced that he was leaving the boards of directors of Microsoft and Berkshire Hathaway to focus more on his philanthropic efforts. According to Aaron Tilley of The Wall Street Journal, this marked "the biggest boardroom departure in the tech industry since the death of longtime rival and Apple Inc. co-founder Steve Jobs."
On January 13, 2022, The Wall Street Journal reported that Microsoft's board of directors plans to hire an external law firm to review its sexual harassment and gender discrimination policies, and to release a summary of how the company handled past allegations of misconduct against Bill Gates and other corporate executives.
Chief executives
Bill Gates (1975–2000)
Steve Ballmer (2000–2014)
Satya Nadella (2014–present)
Financial
When Microsoft went public and launched its initial public offering (IPO) in 1986, the opening stock price was $21; by the end of the trading day, the price had closed at $27.75. As of July 2010, with the company's nine stock splits, any IPO shares would have been multiplied by 288; adjusted for the splits and other factors, the IPO price corresponds to about 9 cents per share today. The stock price peaked in 1999 at around $119 ($60.928, adjusting for splits). The company began to offer a dividend on January 16, 2003, starting at eight cents per share for the fiscal year, followed by a dividend of sixteen cents per share the subsequent year, switching from yearly to quarterly dividends in 2005 with eight cents a share per quarter and a special one-time payout of three dollars per share for the second quarter of the fiscal year. Though the company had subsequent increases in dividend payouts, the price of Microsoft's stock remained steady for years.
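The 288 multiplier follows from the nine splits — a minimal sketch, assuming the commonly reported history of seven 2-for-1 splits and two 3-for-2 splits (the split breakdown is an assumption, not stated above):

```python
# Sketch: cumulative effect of Microsoft's nine stock splits on one IPO share.
from functools import reduce

splits = [2, 2, 1.5, 1.5, 2, 2, 2, 2, 2]     # assumed: seven 2-for-1, two 3-for-2
multiplier = reduce(lambda a, b: a * b, splits)
print(multiplier)                             # -> 288.0

ipo_price_usd = 21.0
print(round(ipo_price_usd / multiplier, 4))   # -> 0.0729, ~7 cents from splits alone;
# the "other factors" mentioned above bring the adjusted figure to about 9 cents
```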
Standard & Poor's and Moody's Investors Service have both given an AAA rating to Microsoft, whose assets were valued at $41 billion as compared to only $8.5 billion in unsecured debt. Consequently, in February 2011 Microsoft released a corporate bond amounting to $2.25 billion with relatively low borrowing rates compared to government bonds. For the first time in 20 years, Apple Inc. surpassed Microsoft in Q1 2011 quarterly profits and revenues, due to a slowdown in PC sales and continuing huge losses in Microsoft's Online Services Division (which contains its search engine Bing). Microsoft's profits were $5.2 billion, while Apple's were $6 billion, on revenues of $14.5 billion and $24.7 billion respectively. Microsoft's Online Services Division had been continuously loss-making since 2006, and in Q1 2011 it lost $726 million. This followed a loss of $2.5 billion for the year 2010.
On July 20, 2012, Microsoft posted its first quarterly loss ever, despite earning record revenues for the quarter and fiscal year, with a net loss of $492 million due to a writedown related to the advertising company aQuantive, which had been acquired for $6.2 billion back in 2007. As of January 2014, Microsoft's market capitalization stood at $314B, making it the 8th-largest company in the world by market capitalization. On November 14, 2014, Microsoft overtook ExxonMobil to become the second most-valuable company by market capitalization, behind only Apple Inc. Its total market value was over $410B—with the stock price hitting $50.04 a share, the highest since early 2000. In 2015, Reuters reported that Microsoft Corp had earnings abroad of $76.4 billion which were untaxed by the Internal Revenue Service. Under U.S. law, corporations do not pay income tax on overseas profits until the profits are brought into the United States.
In November 2018, the company won a $480 million military contract with the U.S. government to bring augmented reality (AR) headset technology into the weapon repertoires of American soldiers. The two-year contract may result in follow-on orders of more than 100,000 headsets, according to documentation describing the bidding process. One of the contract's tag lines for the augmented reality technology seems to be its ability to enable "25 bloodless battles before the 1st battle", suggesting that actual combat training is going to be an essential aspect of the augmented reality headset capabilities.
Subsidiaries
Microsoft is an international business and maintains subsidiaries in the national markets where it operates. An example is Microsoft Canada, which it established in 1985. Other countries have similar subsidiaries, which funnel profits back to Redmond and distribute the dividends to the holders of MSFT stock.
Marketing
In 2004, Microsoft commissioned research firms to do independent studies comparing the total cost of ownership (TCO) of Windows Server 2003 to Linux; the firms concluded that companies found Windows easier to administrate than Linux, thus those using Windows would administrate faster resulting in lower costs for their company (i.e. lower TCO). This spurred a wave of related studies; a study by the Yankee Group concluded that upgrading from one version of Windows Server to another costs a fraction of the switching costs from Windows Server to Linux, although companies surveyed noted the increased security and reliability of Linux servers and concern about being locked into using Microsoft products. Another study, released by the Open Source Development Labs, claimed that the Microsoft studies were "simply outdated and one-sided" and their survey concluded that the TCO of Linux was lower due to Linux administrators managing more servers on average and other reasons.
As part of the "Get the Facts" campaign, Microsoft highlighted the .NET Framework trading platform that it had developed in partnership with Accenture for the London Stock Exchange, claiming that it provided "five nines" reliability. After suffering extended downtime and unreliability the London Stock Exchange announced in 2009 that it was planning to drop its Microsoft solution and switch to a Linux-based one in 2010.
In 2012, Microsoft hired the political pollster Mark Penn, whom The New York Times called "famous for bulldozing" his political opponents, as Executive Vice-President of Advertising and Strategy. Penn created a series of negative advertisements targeting one of Microsoft's chief competitors, Google. The advertisements, called "Scroogled", attempt to make the case that Google is "screwing" consumers with search results rigged to favor Google's paid advertisers, that Gmail violates the privacy of its users to place ad results related to the content of their emails, and that its shopping results favor Google products. Tech publications like TechCrunch have been highly critical of the advertising campaign, while Google employees have embraced it.
Layoffs
In July 2014, Microsoft announced plans to lay off 18,000 employees. Microsoft employed 127,104 people as of June 5, 2014, making this, at about 14 percent of its workforce, the biggest layoff in Microsoft history. It included 12,500 professional and factory personnel. Previously, Microsoft had eliminated 5,800 jobs in 2009 during the Great Recession. In September 2014, Microsoft laid off 2,100 people, including 747 in the Seattle–Redmond area, where the company is headquartered. The firings came as a second wave of the layoffs that were previously announced, bringing the total to over 15,000 of the 18,000 expected cuts. In October 2014, Microsoft revealed that it was almost done with eliminating 18,000 employees, its largest-ever layoff sweep. In July 2015, Microsoft announced another 7,800 job cuts over the following several months. In May 2016, Microsoft announced another 1,850 job cuts, mostly in its Nokia mobile phone division. As a result, the company recorded an impairment and restructuring charge of approximately $950 million, of which approximately $200 million related to severance payments.
Microsoft laid off 1,900 employees in its gaming division in January 2024. The layoffs primarily affected Activision Blizzard employees, but some Xbox and ZeniMax employees were also affected. Blizzard president Mike Ybarra and Blizzard's chief design officer Allen Adham also resigned.
Unions
Microsoft recognizes seven trade unions representing 1,750 workers in the United States at its video game subsidiaries Activision Blizzard and ZeniMax Media. U.S. workers have been vocal in opposing military and law-enforcement contracts with Microsoft. Bethesda Game Studios is unionized in Canada. Microsoft South Korea recognizes its union since 2017. German employees have elected works councils since 1998.
United States government
Microsoft provides information about reported bugs in their software to intelligence agencies of the United States government, prior to the public release of the fix. A Microsoft spokesperson has stated that the corporation runs several programs that facilitate the sharing of such information with the U.S. government. Following media reports in May 2013 about PRISM, the NSA's massive electronic surveillance program, several technology companies were identified as participants, including Microsoft. According to leaks about the program, Microsoft joined PRISM in 2007. However, in June 2013, an official statement from Microsoft flatly denied its participation in the program.
During the first six months of 2013, Microsoft received requests that affected between 15,000 and 15,999 accounts. In December 2013, the company made a statement to further emphasize that it takes its customers' privacy and data protection very seriously, even saying that "government snooping potentially now constitutes an 'advanced persistent threat,' alongside sophisticated malware and cyber attacks". The statement also marked the beginning of a three-part program to enhance Microsoft's encryption and transparency efforts. On July 1, 2014, as part of this program, the company opened the first of many Microsoft Transparency Centers, which provide "participating governments with the ability to review source code for our key products, assure themselves of their software integrity, and confirm there are no 'back doors.'" Microsoft has also argued that the United States Congress should enact strong privacy regulations to protect consumer data.
In April 2016, the company sued the U.S. government, arguing that secrecy orders were preventing the company from disclosing warrants to customers, in violation of the company's and customers' rights. Microsoft argued that it was unconstitutional for the government to indefinitely ban Microsoft from informing its users that the government was requesting their emails and other documents, and that the Fourth Amendment gave people and businesses the right to know if the government searches or seizes their property. On October 23, 2017, Microsoft said it would drop the lawsuit as a result of a policy change by the United States Department of Justice (DoJ). The DoJ had "changed data request rules on alerting the Internet users about agencies accessing their information."
In 2022 Microsoft shared a $9 billion contract from the United States Department of Defense for cloud computing with Amazon, Google, and Oracle.
Security challenges
On a Friday afternoon in January 2024, Microsoft disclosed that a Russian state-sponsored group had hacked into its corporate systems. The group accessed "a very small percentage" of Microsoft corporate email accounts, including those of members of its senior leadership team and employees in its cybersecurity and legal teams. Microsoft noted in a blog post that the attack might have been prevented if the accounts in question had enabled multi-factor authentication, a defensive measure which is widely recommended in the industry, including by Microsoft itself.
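As an illustration of what such a second factor typically looks like, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) generator, the scheme behind most authenticator apps; this is a generic example, not Microsoft's implementation:

```python
# Minimal TOTP (RFC 6238) sketch -- a common second factor in multi-factor auth.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period               # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # current 6-digit code for this demo secret
```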
Corporate identity
Corporate culture
Technical references for developers and articles for various Microsoft magazines such as Microsoft Systems Journal (MSJ) are available through the Microsoft Developer Network (MSDN). MSDN also offers subscriptions for companies and individuals, and the more expensive subscriptions usually offer access to pre-release beta versions of Microsoft software. In April 2004, Microsoft launched a community site for developers and users, titled Channel 9, that provides a wiki and an Internet forum. Another community site that provides daily videocasts and other services, On10.net, launched on March 3, 2006. Free technical support is traditionally provided through online Usenet newsgroups, and CompuServe in the past, monitored by Microsoft employees; there can be several newsgroups for a single product. Helpful people can be elected by peers or Microsoft employees for Microsoft Most Valuable Professional (MVP) status, which entitles them to a sort of special social status and possibilities for awards and other benefits.
Microsoft is noted for its internal lexicon: the expression "eating your own dog food" describes the policy of using pre-release and beta versions of products inside Microsoft to test them in "real-world" situations. This is usually shortened to just "dog food" and is used as a noun, verb, and adjective. Another bit of jargon, FYIFV or FYIV ("Fuck You, I'm [Fully] Vested"), is used by an employee to indicate that they are financially independent and can avoid work anytime they wish.
Microsoft is an outspoken opponent of the cap on H-1B visas, which allow companies in the U.S. to employ certain foreign workers. Bill Gates claimed the cap on H-1B visas made it difficult to hire employees for the company, stating "I'd certainly get rid of the H1B cap" in 2005. Critics of H-1B visas argue that relaxing the limits would result in increased unemployment for U.S. citizens, owing to H-1B workers working for lower salaries.
The Human Rights Campaign Corporate Equality Index, a report of how progressive the organization deems company policies towards LGBT employees, rated Microsoft at 87% from 2002 to 2004 and at 100% from 2005 to 2010, after the company added gender expression to its policies.
In August 2018, Microsoft implemented a policy requiring all companies providing subcontractors to offer 12 weeks of paid parental leave to each employee. This expanded on a 2015 requirement of 15 days of paid vacation and sick leave each year. In 2015, Microsoft established its own parental leave policy allowing 12 weeks off for parental leave, with an additional 8 weeks for the parent who gave birth.
Environment
In 2011, Greenpeace released a report rating the top ten big brands in cloud computing on their sources of electricity for their data centers. At the time, data centers consumed up to 2% of all global electricity, and this amount was projected to increase. Phil Radford of Greenpeace said, "We are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today", and called on Amazon, Microsoft and other leaders of the information-technology industry to embrace clean energy to power their cloud-based data centers. In 2013, Microsoft agreed to buy power generated by a Texas wind project to power one of its data centers.
Microsoft is ranked 17th in Greenpeace's Guide to Greener Electronics (16th edition), which ranks 18 electronics manufacturers according to their policies on toxic chemicals, recycling, and climate change. Microsoft's timeline for phasing out brominated flame retardants (BFRs) and phthalates in all products was 2012, but its commitment to phasing out PVC is not clear; it has no products that are completely free of PVC and BFRs.
Microsoft's main U.S. campus received a silver certification from the Leadership in Energy and Environmental Design (LEED) program in 2008, and in April 2005 it installed over 2,000 solar panels on top of its buildings at its Silicon Valley campus, generating approximately 15 percent of the total energy needed by the facilities. Microsoft makes use of alternative forms of transit: it created one of the world's largest private bus systems, the "Connector", to transport people from outside the company, while for on-campus transportation the "Shuttle Connect" uses a large fleet of hybrid cars to save fuel. The "Connector" does not compete with the public bus system; it works with it to provide a cohesive transportation network not just for Microsoft's employees but also for the public.
Microsoft also subsidizes regional public transport, provided by Sound Transit and King County Metro, as an incentive. In February 2010, however, Microsoft took a stance against adding additional public transport and high-occupancy vehicle (HOV) lanes to the State Route 520 and its floating bridge connecting Redmond to Seattle; the company did not want to delay the construction any further. Microsoft was ranked number 1 in the list of the World's Best Multinational Workplaces by the Great Place to Work Institute in 2011.
In January 2020, the company announced a strategy to become carbon negative by 2030 and to remove all carbon it has emitted since its foundation in 1975. On October 9, 2020, Microsoft permanently allowed remote work. In January 2021, the company announced on Twitter that it would join the Climate Neutral Data Centre Pact, which commits the cloud infrastructure and data center industries to reach carbon neutrality in Europe by 2030, and also disclosed an investment in Climeworks, a direct air capture company partnered with Carbfix for carbon sequestration. In the same year, it was awarded the EPA's Green Power Leadership Award, citing the company's all-renewable energy use since 2014.
In September 2023, Microsoft announced that it had purchased $200 million in carbon credits from Heirloom Carbon to offset 315,000 metric tons of carbon dioxide over 10 years. Heirloom Carbon is a carbon removal company that mixes calcium oxide from heated crushed limestone with water to form calcium hydroxide, which absorbs carbon dioxide from the atmosphere and mineralizes back into limestone; the released carbon dioxide is stored underground or injected into concrete. Despite spending more than $760 million through its Climate Innovation Fund by June 2024 on sustainability projects—including purchases of more than 5 million metric tonnes of carbon dioxide removal with carbon offsets and more than 34 megawatts of renewable energy—Microsoft's Scope 3 emissions had increased by 31% from the company's 2020 baseline, which caused the company's total emissions to rise by 29% in 2023.
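For scale, the implied average price per tonne of the Heirloom deal is straightforward to compute (a back-of-the-envelope sketch of my own, not a figure from the source):

    # Implied average price of the Heirloom Carbon purchase
    total_usd = 200_000_000
    tonnes_co2 = 315_000
    print(total_usd / tonnes_co2)  # roughly 635 USD per tonne of CO2 removed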
In 2023 Microsoft consumed 24 TWh of electricity, more than countries such as Iceland, Ghana, the Dominican Republic, or Tunisia.
Headquarters
The corporate headquarters, informally known as the Microsoft Redmond campus, is located at One Microsoft Way in Redmond, Washington. Microsoft initially moved onto the grounds of the campus on February 26, 1986, weeks before the company went public on March 13. The headquarters has experienced multiple expansions since its establishment. It is estimated to encompass over 8 million sq ft (750,000 m2) of office space and 30,000–40,000 employees. Additional offices are located in Bellevue and Issaquah, Washington (90,000 employees worldwide). The company is planning to upgrade its Mountain View, California, campus on a grand scale. The company has occupied the campus since 1981. In 2016, it bought the campus, with plans to renovate and expand it by 25%. Microsoft operates an East Coast headquarters in Charlotte, North Carolina.
In April 2024, it was announced that Microsoft would open a state-of-the-art artificial intelligence "hub" in the Paddington area of London, England. It was announced that the division would be led by Jordan Hoffman, who previously worked for DeepMind and Inflection.
Flagship stores
On October 26, 2015, the company opened its retail location on Fifth Avenue in New York City. The location features a five-story glass storefront and is 22,270 square feet. As per company executives, Microsoft had been on the lookout for a flagship location since 2009. The company's retail locations are part of a greater strategy to help build a connection with its consumers. The opening of the store coincided with the launch of the Surface Book and Surface Pro 4. On November 12, 2015, Microsoft opened a second flagship store, located in Sydney's Pitt Street Mall.
Logo
Microsoft adopted the so-called "Pac-Man logo", designed by Scott Baker, on February 26, 1987, with a concept similar to the InFocus Corporation logo adopted a year earlier, in 1986. Baker stated, "The new logo, in Helvetica italic typeface, has a slash between the o and s to emphasize the 'soft' part of the name and convey motion and speed". Dave Norris ran an internal joke campaign to save the old logo, which was green, in all uppercase, and featured a fanciful letter O nicknamed the blibbet, but it was discarded.
Microsoft's logo with the tagline "Your potential. Our passion."—below the main corporate name—is based on a slogan Microsoft used in 2008. In 2002, the company started using the logo in the United States and eventually started a television campaign with the slogan, changed from the previous tagline of "Where do you want to go today?". During the private MGX (Microsoft Global Exchange) conference in 2010, Microsoft unveiled the company's next tagline, "Be What's Next.", and also used the slogan "Making it all make sense." The Microsoft Pac-Man logo was used for 25 years, 5 months, and 28 days, until August 23, 2012, making it the longest-enduring logo the company has used.
On August 23, 2012, Microsoft unveiled a new corporate logo at the opening of its 23rd Microsoft store in Boston, indicating the company's shift of focus from the classic style to the tile-centric modern interface, which it used on the Windows Phone platform, Xbox 360, Windows 8 and the then-upcoming Office suites. The new logo also includes four squares in the colors of the then-current Windows logo, which represent Microsoft's four major products: Windows (blue), Office (orange), Xbox (green) and Bing (yellow). The logo also resembles the opening of one of the commercials for Windows 95.
Sponsorship
The company was the official jersey sponsor of Finland's national basketball team at EuroBasket 2015.
The company was a major sponsor of the Toyota Gazoo Racing WRT (2017–2020).
The company was a sponsor of the Renault F1 Team (2016–2020).
Philanthropy
In 2015, Microsoft Philanthropies, an internal charitable organization, was established. Its mission is to bring the benefits of technology to parts of the world and segments of the population that have been denied the benefits of the digital revolution. Key areas of focus: donating cloud computing resources to university researchers and nonprofit groups; supporting the expansion of broadband access worldwide; funding international computer science education through YouthSpark; supporting tech education in the U.S. from kindergarten to high school; and donating to global child and refugee relief organizations.
During the COVID-19 pandemic, Microsoft's president, Brad Smith, announced that an initial batch of supplies, including 15,000 protection goggles, infrared thermometers, medical caps, and protective suits, was donated to Seattle, with further aid to come soon.
During the Russian invasion of Ukraine, Microsoft began monitoring cyberattacks originating from the Government of Russia and Russia-backed hackers. In June 2022, Microsoft published a report on Russian cyberattacks, concluding that state-backed Russian hackers had engaged in "strategic espionage" against governments, think tanks, businesses and aid groups in 42 countries supporting Kyiv.
Controversies
Criticism of Microsoft has followed various aspects of its products and business practices. Frequently criticized are the ease of use, robustness, and security of the company's software. The company has also been criticized for the use of permatemp employees (employees employed for years as "temporary" and therefore without medical benefits) and for forced retention tactics, under which employees would be sued if they tried to leave. Historically, Microsoft has also been accused of overworking employees, in many cases leading to burnout within just a few years of joining the company. The company is often referred to as a "Velvet Sweatshop", a term which originated in a 1989 Seattle Times article and was later used by some of Microsoft's own employees to describe the company. This characterization is derived from the perception that Microsoft provides nearly everything for its employees in a convenient place, but in turn overworks them to a point where it would be bad for their (possibly long-term) health.
As reported by several news outlets, an Irish subsidiary of Microsoft based in the Republic of Ireland declared £220 bn in profits but paid no corporation tax for the year 2020. This is due to the company being tax resident in Bermuda, as mentioned in the accounts of Microsoft Round Island One, a subsidiary that collects license fees from the use of Microsoft software worldwide. Dame Margaret Hodge, a Labour MP in the UK, said, "It is unsurprising – yet still shocking – that massively wealthy global corporations openly, unashamedly and blatantly refuse to pay tax on the profits they make in the countries where they undertake business".
In 2020, ProPublica reported that the company had diverted more than $39 billion in U.S. profits to Puerto Rico using a mechanism structured to make it seem as if the company was unprofitable on paper. As a result, the company paid a tax rate on those profits of "nearly 0%". When the Internal Revenue Service audited these transactions, ProPublica reported that Microsoft aggressively fought back, including successfully lobbying Congress to change the law to make it harder for the agency to conduct audits of large corporations. In 2023, Microsoft reported in a securities filing that the U.S. Internal Revenue Service was alleging that the company owed the U.S. $28.9 billion in past taxes, plus penalties related to misallocation of corporate profits over a decade.
"Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate," is a phrase that the U.S. Department of Justice found that was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to strongly disadvantage competitors. Microsoft is frequently accused of using anticompetitive tactics and abusing its monopolistic power. People who use their products and services often end up becoming dependent on them, a process is known as vendor lock-in.
Microsoft was the first company to participate in the PRISM surveillance program, according to leaked NSA documents obtained by The Guardian and The Washington Post in June 2013, and acknowledged by government officials following the leak. The program authorizes the government to secretly access data of non-US citizens hosted by American companies without a warrant. Microsoft has denied participation in such a program.
Jesse Jackson believes Microsoft should hire more minorities and women. In 2015, he praised Microsoft for appointing two women to its board of directors.
In 2020, Salesforce, the maker of the Slack platform, complained to European regulators about Microsoft due to the integration of the Teams service into Office 365. Negotiations with the European Commission continued until the summer of 2023, but, as became known to the media, they reached an impasse. Microsoft is now facing an antitrust investigation.
In June 2024, Microsoft Corp. faced a potential EU fine after regulators accused it of abusing market power by bundling its Teams video-conferencing app with its Office 365 and Microsoft 365 software. The European Commission issued a statement of objections, alleging Microsoft's practice since 2019 gave Teams an unfair market advantage and limited interoperability with competing software. Despite Microsoft's efforts to avoid deeper scrutiny, including unbundling Teams, regulators remained unconvinced. This action followed a 2019 complaint from Slack, which was later acquired by Salesforce. Microsoft's Teams usage soared during the pandemic, growing from 2 million daily users in 2017 to 300 million in 2023. The company has a history of antitrust battles in the U.S. and Europe, with over €2 billion in EU fines previously imposed for similar abuses.
In October 2024, Microsoft fired two employees who organized an unauthorized vigil at its Redmond headquarters to honor Palestinians killed in Gaza during the conflict with Hamas. The employees, part of the group "No Azure for Apartheid," sought to address the company's involvement in the Israeli government's use of its technology.
In November 2024, the Federal Trade Commission (FTC) launched an investigation into Microsoft, focusing on potential antitrust violations related to its cloud computing, AI, and cybersecurity businesses. The probe scrutinized Microsoft's bundling of cloud services with products like Office and security tools, as well as its growing AI presence through its partnership with OpenAI. This inquiry is part of broader efforts by the U.S. government to curb the power of major tech companies, especially under FTC chair Lina Khan. Concerns were raised about Microsoft's licensing practices potentially locking customers into its services and its AI investments possibly sidestepping regulatory oversight.
See also
List of mergers and acquisitions by Microsoft
Microsoft engineering groups
Microsoft Enterprise Agreement
Notes
References
Bundled references
External links
1975 establishments in New Mexico
1980s initial public offerings
American brands
American companies established in 1975
Artificial intelligence companies
Business software companies
Cloud computing providers
Companies based in Redmond, Washington
Companies in the Dow Jones Global Titans 50
Companies in the Dow Jones Industrial Average
Companies in the Nasdaq-100
Companies in the PRISM network
Companies listed on the Nasdaq
Computer companies established in 1975
Computer companies of the United States
Computer hardware companies
Computer systems companies
Customer relationship management software companies
Defense companies of the United States
Electronics companies established in 1975
Electronics companies of the United States
ERP software companies
Mobile phone manufacturers
Multinational companies headquartered in the United States
Software companies based in Washington (state)
Software companies established in 1975
Software companies of the United States
Supply chain software companies
Technology companies established in 1975
Technology companies of the United States
Web service providers | Microsoft | Technology | 13,656 |
18,468,924 | https://en.wikipedia.org/wiki/Zombie%20taxon | In paleontology, a zombie taxon (plural zombie taxa) or the zombie effect refers to a fossil that was washed out of its original sediments and re-deposited in rocks or sediments millions of years younger. The name reflects the basic mistake this invites in interpreting the age of the fossil: the discovered fossil was at some point mobile (or "walking") despite the original organism having been long dead. When that occurs, the fossil is described as a "reworked fossil".
See also
Convergent evolution
Dead clade walking
Extinction
Elvis taxon
Lazarus taxon
Living fossil
Further reading
Archibald, J. David. (1996). Dinosaur Extinction and the End of an Era. Columbia University Press, pp. 672–684. This work defined the terms zombie effect and zombie taxon/taxa.
Weishampel, David B. et al. (2004). The Dinosauria. University of California Press.
Lane, Abigail et al. "Estimating paleodiversities: a test of the taxic and phylogenetic methods".
References
Extinction
Phylogenetics | Zombie taxon | Biology | 212 |
21,599,073 | https://en.wikipedia.org/wiki/Gentamicin%20protection%20assay | The gentamicin protection assay, also called the survival assay or invasion assay, is a method used in microbiology to quantify the ability of pathogenic bacteria to invade eukaryotic cells.
The assay is based on several observations made in the 1970s reporting the ability of internalized bacteria to avoid being killed by antibiotics. The assay started to be used in biological research in the early 1980s.
Background and principle
Intracellular bacteria need to enter host cells (cells of the infected organism) in order to replicate and propagate infection. Many species of Shigella (causes bacillary dysentery), Salmonella (typhoid fever), Mycobacterium (leprosy and tuberculosis) and Listeria (listeriosis), to name but a few, are intracellular.
Several antibiotics cannot penetrate eukaryotic cells and therefore cannot harm intracellular bacteria that have already been internalized. Using such antibiotics makes it possible to differentiate between bacteria that succeed in penetrating eukaryotic cells and those that do not: applying such an antibiotic to a culture of eukaryotic cells infected with bacteria kills the bacteria that remain outside the cells while sparing the ones that penetrated. The antibiotic of choice for this assay is the aminoglycoside gentamicin.
Procedure
HeLa cells are commonly used as eukaryotic cells in the gentamicin protection assay, but other cells can be used as well. As for bacteria, only species susceptible to gentamicin can be assayed.
The assay is performed in plastic microtiter plates, which are commonly used in laboratories for culturing eukaryotic cells. The cells are allowed to grow in the wells overnight, creating a flat layer (monolayer). Bacteria are grown separately overnight. On the next day, the eukaryotic cells are inoculated with the bacteria and incubated together for an hour. Centrifuging the plates for a few minutes may help bring cells and bacteria into contact and initiate infection.
After infection gentamicin is added to the plates, and they are incubated for an hour, allowing the antibiotic to kill all bacteria that were not able to penetrate the cells and remained outside. The plates are then washed well to remove the dead bacteria. Next the eukaryotic cells are lysed using a detergent, most commonly Triton X-100.
The bacteria that penetrated the cells and remained alive are now released, and they are plated on solid medium plates. Counting the colonies formed on the plates on the next day, and knowing how many bacteria were used in the beginning of the assay, enables the researcher to calculate the percentage of bacteria that were able to invade the eukaryotic cells.
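That final calculation can be illustrated with a short script (a minimal sketch; the function name, the dilution handling, and the example numbers are my own assumptions, not part of any published protocol):

    # Estimate invasion efficiency from colony counts (CFU = colony-forming units).
    def invasion_percentage(colonies, plated_volume_ml, dilution_factor,
                            lysate_volume_ml, inoculum_cfu):
        # CFU per ml of lysate, corrected for the dilution that was plated
        cfu_per_ml = colonies * dilution_factor / plated_volume_ml
        surviving_cfu = cfu_per_ml * lysate_volume_ml  # total internalized bacteria
        return 100.0 * surviving_cfu / inoculum_cfu

    # Example: 150 colonies from 0.1 ml of a 1:100 dilution of 1 ml of lysate,
    # starting from an inoculum of 1e7 bacteria -> 1.5% invasion.
    print(invasion_percentage(150, 0.1, 100, 1.0, 1e7))

In practice, counts from replicate wells are averaged, and the inoculum CFU is itself determined by plating dilutions of the starting culture.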
Usage, advantages and caveats
The gentamicin protection assay is commonly used in pathogen research. The contribution of specific genes or proteins to the bacteria's ability to invade cells can be easily assayed using this method. The gene in question can be knocked out, and the bacteria's invasiveness compared with that of normal, wild type bacteria. Environmental conditions, such as pH level and temperature, can also be assayed for their effect on invasiveness.
The gentamicin protection assay is very sensitive, as it can detect the internalization of even single bacteria. It has several drawbacks:
Gentamicin can sometimes penetrate eukaryotic cells and kill the internalized bacteria. This may happen if the permeability of the cells somehow increased during the assay, sometimes due to poor handling of the cells.
Internalized bacteria may sometimes not be entirely protected from the outside environment, such as when the phagosome (the vacuole surrounding the bacterium inside the cell) is defective in some way. Gentamicin may kill those bacteria.
Gentamicin may fail to kill all the bacteria that remained outside the cells.
To help assess the accuracy of a particular assay, positive and negative controls should be performed. When performing the assay as described above, bacteria that are known to be entirely invasive (positive control) and bacteria that are known as non-invasive (negative control) should be included in the assay.
An alternative invasion assay is the differential immunostaining assay, based on the binding of antibodies to bacteria before and after invasion. The antibodies are labeled with fluorescent dyes, and the results of this assay are viewed under the microscope.
See also
Obligate intracellular parasite
References
Microbiology techniques | Gentamicin protection assay | Chemistry,Biology | 935 |
20,000,136 | https://en.wikipedia.org/wiki/Interstellar%20formaldehyde | Interstellar formaldehyde (a topic relevant to astrochemistry) was first discovered in 1969 by L. Snyder et al. using the National Radio Astronomy Observatory. Formaldehyde (H2CO) was detected by means of the 1₁₁–1₁₀ ground-state rotational transition at 4830 MHz. On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
Initial discovery
Formaldehyde was first discovered in interstellar space in 1969 by L. Snyder et al. using the National Radio Astronomy Observatory. H2CO was detected by means of the 1₁₁–1₁₀ ground-state rotational transition at 4830 MHz.
Formaldehyde was the first polyatomic organic molecule detected in the interstellar medium and, since its initial detection, has been observed in many regions of the galaxy. The isotopic ratio [¹²C]/[¹³C] was determined to be about or less than 50 in the galactic disk. Formaldehyde has been used to map out kinematic features of dark clouds located near Gould's Belt of local bright stars. In 2007, the first H2CO 6 cm maser flare was detected: a short-duration outburst in IRAS 18566+0408 that produced a line profile consistent with the superposition of two Gaussian components, which suggests that an event outside the maser gas triggered simultaneous flares at two different locations. Although this was the first maser flare detected, H2CO masers have been observed since 1974, by Downes and Wilson in NGC 7538. Unlike OH, H2O, and CH3OH masers, formaldehyde maser emission is associated with only five galactic star-forming regions, and it has only been observed through the 1₁₀ → 1₁₁ transition.
According to Araya et al., H2CO masers differ from other masers in that they are weaker than most (such as OH, CH3OH, and H2O) and have only been detected near very young massive stellar objects. Because of the widespread interest in interstellar formaldehyde, it has recently been extensively studied, yielding new extragalactic sources, including NGC 253, NGC 520, NGC 660, NGC 891, NGC 2903, NGC 3079, NGC 3628, NGC 6240, NGC 6946, IC 342, IC 860, Arp 55, Arp 220, M82, M83, IRAS 10173+0828, IRAS 15107+0724, and IRAS 17468+1320.
Interstellar reactions
The gas-phase reaction that produces formaldehyde possesses modest barriers and is too inefficient to produce the abundance of formaldehyde that has been observed. One proposed mechanism for the formation is the hydrogenation of CO ice, shown below.
H + CO → HCO; HCO + H → H2CO (rate constant k = 9.2×10⁻³ s⁻¹)
This is the basic production mechanism leading to H2CO; several side reactions take place at each step, depending on the nature of the ice on the grain, according to David Woon. The rate constant quoted is for the hydrogenation of CO. The rate constant for the hydrogenation of HCO was not provided, as it is much larger than that for the hydrogenation of CO, likely because HCO is a radical. Awad et al. note that this is a surface-level reaction only and that only the monolayer is considered in calculations; this includes the surface within cracks in the ice.
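A minimal numerical sketch of that two-step scheme (my own illustration, not from the cited studies): treating each hydrogenation step as pseudo-first-order, with the quoted rate constant for the first step and an assumed, much faster k2 for the radical step, CO decays exponentially and H2CO accumulates on a timescale of 1/k ≈ 109 s.

    import math

    k1 = 9.2e-3  # s^-1, hydrogenation of CO (quoted in the text)
    k2 = 1.0     # s^-1, hydrogenation of HCO (assumed: much faster, value illustrative)

    def abundances(t, co0=1.0):
        # Sequential first-order kinetics: CO -> HCO -> H2CO
        co = co0 * math.exp(-k1 * t)
        hco = co0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
        h2co = co0 - co - hco
        return co, hco, h2co

    for t in (10, 100, 1000):  # seconds
        print(t, abundances(t))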
Formaldehyde is relatively inactive in gas-phase chemistry in the interstellar medium. Its activity is predominantly in grain-surface chemistry on dust grains in interstellar clouds. Reactions involving formaldehyde have been observed to produce molecules containing C-H, C-O, O-H, and C-N bonds. While these products are not necessarily well characterized, Schutte et al. believe them to be typical products of formaldehyde reactions at higher temperatures, for example polyoxymethylene, methanolamine, methanediol, and methoxyethanol. Formaldehyde is believed to be the primary precursor for most of the complex organic material in the interstellar medium, including amino acids. Formaldehyde most often reacts with NH3, H2O, CH3OH, CO, and itself (H2CO). The three dominating reactions are shown below.
H2CO + NH3 → amines (when [NH3]:[H2CO] > 0.2)
H2CO + H2O → diols (always dominates, as [H2O] > [H2CO])
H2CO + H2CO → [-CH2-O-]n (catalyzed by NH3 when [NH3]:[H2CO] > 0.005)
There are no kinetic data available for these reactions, as the overall reactions are neither verified nor well understood. They are believed to take place during warm-up of the ice on grains, which releases the molecules to react. These reactions begin at temperatures as low as 40–80 K but may take place at even lower temperatures.
Note that many other reactions are listed on the UMIST RATE06 database.
Importance of observation
Formaldehyde appears to be a useful probe for astrochemists due to its low reactivity in the gas phase and to the fact that the 1₁₀–1₁₁ and 2₁₁–2₁₂ K-doublet transitions are rather clear. Formaldehyde has been used in many capacities and to investigate many systems, including:
Determination of the [¹²C]/[¹³C] ratio to be less than 50 in the galactic disc.
Mapping of the kinematic features of dark clouds located near Gould's Belt of local bright stars. The radial velocities determined for these clouds led Sandqvist et al. to believe that the clouds participate in the expansion of the local system of H gas and bright stars.
Determination of the temperature of molecular formation from the ratio of ortho-/para- H2CO. H2CO is a good candidate for this process because of the near zero probability of nuclear spin conversion in gas phase protostar environments.
Determination of the spatial density of H2 and dense gas mass in several galaxies of varying luminosity (see Subsequent Discoveries for a list of galaxies). The spatial densities calculated fell in the range 10^4.7 to 10^5.7 cm⁻³, and the dense gas masses fell in the range 0.6×10⁸ to 0.77×10⁹ solar masses. Mangum et al. noticed that the galaxies with lower infrared luminosity had lower dense gas masses, and that this seemed to be a real trend despite the small data set.
Rotational spectrum
The rotational spectrum of H2CO in the ground vibrational state at 30 K can be simulated using Pgopher with S-reduction rotational constants from Muller et al. The observed transitions are the 6.2 cm 1₁₁–1₁₀ and 2.1 cm 2₁₂–2₁₁ K-doublet transitions. In the rotational energy level diagram, the ortho/para splitting is determined by the parity of Ka: ortho if Ka is odd, para if Ka is even.
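The wavelength labels on those transitions follow directly from λ = c/ν (a small sketch; the 14.488 GHz frequency for the 2₁₂–2₁₁ line is the standard literature value, supplied here as an assumption since the text gives only the 4830 MHz line):

    C = 2.998e8  # speed of light, m/s

    def wavelength_cm(freq_hz):
        # lambda = c / nu, converted to centimeters
        return C / freq_hz * 100.0

    print(wavelength_cm(4.830e9))   # 1_11 - 1_10 transition: ~6.2 cm
    print(wavelength_cm(14.488e9))  # 2_12 - 2_11 transition: ~2.1 cm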
See also
List of interstellar and circumstellar molecules
References
Sources
Woon, D. E. 2002, Astrophysical Journal, 569, 541
Tudorie, M. et al. 2006, Astronomy and Astrophysics, 453, 755
Muller, H. S. P. et al. 2000, Journal of Molecular Spectroscopy, 200, 143
Brunken, S. et al. 2003, Physical Chemistry Chemical Physics, 5, 1515
Schutte, W. A. et al. 1993, Science, 259, 1143
Schutte, W. A. et al. 1993, Icarus, 104, 118
Astrochemistry
Interstellar media | Interstellar formaldehyde | Chemistry,Astronomy | 1,724 |
2,155,310 | https://en.wikipedia.org/wiki/Common%20ethanol%20fuel%20mixtures | Several common ethanol fuel mixtures are in use around the world. The use of pure hydrous or anhydrous ethanol in internal combustion engines (ICEs) is only possible if the engines are designed or modified for that purpose, and such use is limited to automobiles, light-duty trucks and motorcycles. Anhydrous ethanol can be blended with gasoline (petrol) for use in gasoline engines, but blends with high ethanol content require engine modifications to meter the increased fuel volume, since pure ethanol contains only about 2/3 of the BTUs of an equivalent volume of pure gasoline. High-percentage ethanol mixtures are used in some racing engine applications, as the very high octane rating of ethanol is compatible with very high compression ratios.
Ethanol fuel mixtures have "E" numbers which describe the percentage of ethanol fuel in the mixture by volume, for example, E85 is 85% anhydrous ethanol and 15% gasoline. Low-ethanol blends are typically from E5 to E25, although internationally the most common use of the term refers to the E10 blend.
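The arithmetic behind these labels is simple enough to state in code (a small sketch of my own; the helper names and the energy densities, roughly 32 MJ/L for gasoline and 21 MJ/L for ethanol, are illustrative assumptions, not values from this article):

    GASOLINE_MJ_PER_L = 32.0  # assumed volumetric energy content
    ETHANOL_MJ_PER_L = 21.2   # assumed; roughly 2/3 that of gasoline

    def ethanol_fraction(blend):
        # "E85" -> 0.85, "E10" -> 0.10 (ethanol percentage by volume)
        return float(blend.lstrip("Ee")) / 100.0

    def blend_energy_mj_per_l(blend):
        e = ethanol_fraction(blend)
        return e * ETHANOL_MJ_PER_L + (1.0 - e) * GASOLINE_MJ_PER_L

    # E10 retains about 97% of the energy per liter of pure gasoline:
    print(blend_energy_mj_per_l("E10") / GASOLINE_MJ_PER_L)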
Blends of E10 or less are used in more than 20 countries around the world, led by the United States, where ethanol represented 10% of the U.S. gasoline fuel supply in 2011. Blends from E20 to E25 have been used in Brazil since the late 1970s. E85 is commonly used in the U.S. and Europe for flexible-fuel vehicles. Hydrous ethanol or E100 is used in Brazilian neat ethanol vehicles and flex-fuel light vehicles and hydrous E15 called hE15 for modern petrol cars in the Netherlands.
E10 or less
E10, a fuel mixture of 10% anhydrous ethanol and 90% gasoline, sometimes called gasohol, can be used in the internal combustion engines of most modern automobiles and light-duty vehicles without any modification of the engine or fuel system. E10 blends are typically rated as being 2 to 3 octane numbers higher than regular gasoline and are approved for use in all new U.S. automobiles, and mandated in some areas for emissions and other reasons.
Other common blends include E5 and E7. These concentrations are generally safe for recent engines designed to run on pure gasoline. As of 2006, mandates for blending bioethanol into vehicle fuels had been enacted in at least 36 states/provinces and 17 countries at the national level, with most mandates requiring a blend of 10 to 15% ethanol with gasoline.
One measure of alternative fuels in the U.S. is the "gasoline-equivalent gallon" (GEG). In 2002, the U.S. used as motor fuel, ethanol equal to , the energy equivalent of of gasoline. This was less than 1% of the total fuel used that year.
E10 and other blends of ethanol are considered to be useful in decreasing U.S. dependence on foreign oil, and can reduce carbon monoxide (CO) emissions by 20 to 30% under the right conditions. Although E10 does decrease emissions of CO and greenhouse gases such as CO2 by an estimated 2% over regular gasoline, it can cause increases in evaporative emissions and some pollutants depending on factors such as the age of the vehicle and weather conditions. According to the Philippine Department of Energy, the use of up to a 10% ethanol-gasoline mixture is not harmful to cars' fuel systems. Generally, automobile gasoline containing alcohol (ethanol or methanol) is not recommended for use in aircraft.
Availability
E10 became the standard fuel at petrol stations in the United Kingdom as of September 2021.
E10 was introduced nationwide in Thailand in 2007, and replaced 91 octane pure gasoline in that country in 2013.
E10 is commonly available in the Midwestern United States. It was also mandated for use in all standard automobile fuel in the state of Florida by the end of 2010. Due to the phasing out of MTBE as a gasoline additive and mainly due to the mandates established in the Energy Policy Act of 2005 and the Energy Independence and Security Act of 2007, ethanol blends have increased throughout the United States, and by 2009, the ethanol market share in the U.S. gasoline supply reached almost 8% by volume.
Mandatory blending of ethanol was approved in Mozambique, but the percentage in the blend has not been specified.
South Africa approved a biofuel strategy in 2007, and mandated an 8% blend of ethanol by 2013.
A 2007 Uruguayan law mandates a minimum of 5% of ethanol blended with gasoline starting in January 2015. The monopolistic, state-owned fuel producer ANCAP started blending premium gasoline with 10% bioethanol in December 2009, which became available throughout the country by early January 2010.
The Dominican Republic has a mandate for blending 15% of ethanol by 2015.
Chile is considering the introduction of E5, and Panama, Bolivia and Venezuela of E10.
India achieved the target of 10 percent ethanol blending, 5 months ahead of schedule, in June 2022.
From January 2018, all 92-octane fuel in Vietnam is mandated to contain 5 percent ethanol (E5). No ethanol blending is required for 95-octane fuel.
From June 2021, Argentina approved an E12 minimum (Law 27640), and after October 2022 a waiver for a maximum of E15.
A 2011 study conducted by VTT Technical Research Centre of Finland found practically no difference in fuel consumption in normal driving conditions between the commercial gasoline grades 95E10 and 98E5 sold in Finland, despite the public perception that fuel consumption is significantly higher with 95E10. VTT performed the comparison test under controlled laboratory conditions, and its measurements showed that over a distance of , the cars tested used an average of of 95E10, as opposed to of 98E5. The difference was 0.07 in favor of 98E5 on average, meaning that using 95E10 gasoline, which has a higher ethanol content, increases consumption by 0.7%. When the measurements are normalized, the difference becomes 1.0%, a result highly consistent with an estimate based on calorific values and approximate fuel composition, which came out at 1.1% in favour of E5.
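That kind of calorific estimate can be reproduced roughly as follows (a sketch of my own; the energy densities and the assumption that each blend contains its full nominal ethanol content are illustrative, which is why the result lands near, rather than exactly on, the study's 1.1% figure):

    GASOLINE_MJ_PER_L = 32.0  # assumed volumetric energy density
    ETHANOL_MJ_PER_L = 21.2   # assumed volumetric energy density

    def energy(e_frac):
        # Energy per liter of a blend with the given ethanol volume fraction
        return e_frac * ETHANOL_MJ_PER_L + (1 - e_frac) * GASOLINE_MJ_PER_L

    # Consumption scales inversely with energy per liter:
    extra = energy(0.05) / energy(0.10) - 1.0
    print(f"Expected extra consumption with E10 vs E5: {extra:.1%}")  # ~1.7%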
Sweden
In Sweden, all 95-octane gasoline has been E10 (6 to 10 percent ethanol) since 1 August 2021, when the proportion of ethanol was increased from E5. In the early-to-mid-1990s, some fuel chains also sold E10. All newer and many older petrol cars bought in Sweden should handle this, since from January 2011 the Fuel Quality Directive (Directive 2009/30/EC) applied in Sweden, as in the other 27 EU member states, through its transposition into national law.
E15
E15 contains 15% ethanol and 85% gasoline. This is generally the highest ratio of ethanol to gasoline that can be used in U.S. vehicles recommended by some auto manufacturers to run on E10, owing to ethanol's hydrophilicity and solvent power.
As a result of the Energy Independence and Security Act of 2007, which mandates an increase in renewable fuels for the transport sector, the U.S. Department of Energy began assessing the feasibility of using intermediate ethanol blends in the existing vehicle fleet as a way to allow higher consumption of ethanol fuel. The National Renewable Energy Laboratory (NREL) conducted tests to evaluate the potential impacts of intermediate ethanol blends on legacy vehicles and other engines. In a preliminary report released in October 2008, the NREL presented the results of the first evaluations of the effects of E10, E15 and E20 gasoline blends on tailpipe and evaporative emissions, catalyst and engine durability, vehicle driveability, engine operability, and vehicle and engine materials. This preliminary report found that none of the vehicles displayed a malfunction indicator light as a result of the ethanol blend used; no fuel filter plugging symptoms were observed; no cold start problems were observed at and laboratory conditions; and, as expected, the computer technology available in newer model vehicles adapted to the higher octane, producing lower emissions with greater horsepower and, in some cases, greater fuel economy.
Other sources make the opposite claim about fuel economy. According to Consumer Reports, "ethanol isn’t as energy-dense as regular gasoline so you will see worse fuel economy with E15 gas.”
In March 2009, a lobbying group from the ethanol industry, Growth Energy, formally requested the U.S. Environmental Protection Agency (EPA) to allow the ethanol content in gasoline to be increased from 10% to 15%. Organizations doing such studies included the Energy Department, the State of Minnesota, the Renewable Fuels Association, the Rochester Institute of Technology, the Minnesota Center for Automotive Research, and Stockholm University in Sweden.
In October 2010, the EPA granted a waiver to allow up to 15% of ethanol blended with gasoline to be sold only for cars and light pickup trucks with a model year of 2007 or later, representing about 15% of vehicles on U.S. roads. In January 2011, the waiver was expanded to authorize use of E15 to include model year 2001 through 2006 passenger vehicles. The EPA also decided not to grant any waiver for E15 use in any motorcycles, heavy-duty vehicles, or nonroad engines because current testing data do not support such a waiver. According to the Renewable Fuels Association, the E15 waivers now cover 62% of vehicles on the road in the US, and the ethanol group estimates if all 2001 and newer cars and pickups were to use E15, the theoretical blend wall for ethanol use would be approximately 17.5 billion gallons (66.2 billion liters) per year. The EPA was still studying if older cars can withstand a 15% ethanol blend.
The EPA waiver authorizes the sale of E15 to conventional vehicles only from September 15 to May 31, dispensed from a black hose; from June 1 to September 14 it may be sold, from a yellow hose, only to flex-fuel vehicles. Retailers have shunned building infrastructure due to the costly regulatory requirements, which have created a practical barrier to the commercialization of the higher blend. Most fuel stations do not have enough pumps to offer the new blend, few existing pumps are certified to dispense E15, and no dedicated tanks are readily available to store E15. Also, some state and federal regulations would have to change before E15 can be legally sold. The National Association of Convenience Stores, which represents most gasoline retailers, considers the potential for actual E15 demand small, "because the auto industry is not embracing the fuel and is not adjusting their warranties or recommendations for the fuel type." One possible solution to the infrastructure barriers is the introduction of blender pumps that allow consumers to turn a dial to select the level of ethanol, which would also allow owners of flexible-fuel cars to buy E85 fuel.
In June 2011, the EPA, in cooperation with the Federal Trade Commission, issued its final ruling regarding the E15 warning label required to be displayed on all E15 fuel dispensers in the U.S. to inform consumers about which vehicles can, and which vehicles and equipment cannot, use the E15 blend. Both the Alliance of Automobile Manufacturers and the National Petrochemical and Refiners Association complained that relying solely on this warning label is not enough to protect consumers from misfueling. In July 2012, a fueling station in Lawrence, Kansas became the first in the U.S. to sell the E15 blend. The fuel is sold through a blender pump that allows customers to choose between E10, E15, E30 or E85, with the latter blends sold only to flexible-fuel vehicles. At the time, about 24 fueling stations sold E15 out of 180,000 stations across the U.S.
In December 2010, several groups, including the Alliance of Automobile Manufacturers, the American Petroleum Institute, the Association of International Automobile Manufacturers, the National Marine Manufacturers Association, the Outdoor Power Equipment Institute, and the Grocery Manufacturers Association, filed suit against the EPA in the United States Court of Appeals for the District of Columbia Circuit. The plaintiffs argued the EPA does not have the authority to issue a “partial waiver” that covers some cars and not others. Among other arguments, the groups argued that the higher ethanol blend is not only a problem for cars, but also for fuel pumps and underground tanks not designed for the E15 mixture. It was also argued that the rise in ethanol has contributed to the big jump in corn prices in recent years. In August 2012, the federal appeals court rejected the suit against the EPA. The case was thrown out on a technical reason, as the court ruled the groups did not have legal standing to challenge EPA's decision to issue the waiver for E15. In June 2013 the U.S. Supreme Court declined to hear an appeal from industry groups opposed to the EPA ruling about E15, and let the 2012 federal appeals court ruling stand.
Sales of E15 are not authorized in California, and according to the California Air Resources Board (CARB), the blend is still awaiting approval; in a public statement the agency said that "it would take several years to complete the vehicle testing and rule development necessary to introduce a new transportation fuel into California's market."
According to a survey conducted by the American Automobile Association (AAA) in 2012, only about 12 million of the more than 240 million light-duty vehicles on U.S. roads in 2012 were approved by manufacturers as fully compliant with E15 gasoline. According to the association, BMW, Chrysler, Nissan, Toyota, and Volkswagen warned that their warranties would not cover E15-related damage. Despite the controversy, in order to adjust to EPA regulations, 2012 and 2013 model year vehicles manufactured by General Motors can use fuel containing up to 15 percent ethanol, as indicated in the vehicle owners' manuals. However, the carmaker warned that for model year 2011 or earlier vehicles, it "strongly recommend[s] that GM customers refer to their owners manuals for the proper fuel designation for their vehicles." Ford Motor Company also made all of its 2013 vehicles E15 compatible, including hybrid electrics and vehicles with Ecoboost engines. Also, Porsches built since 2001 are approved by their manufacturer to use E15. Volkswagen announced that for the 2014 model year, its entire lineup would be E15 capable. Fiat Chrysler Automobiles announced in August 2015 that all 2016 model year Chrysler/Fiat, Jeep, Dodge and Ram vehicles would be E15 compatible.
In November 2013, the Environmental Protection Agency opened for public comment its proposal to reduce the amount of ethanol required in the U.S. gasoline supply as mandated by the Energy Independence and Security Act of 2007. The agency cited problems with increasing the blend of ethanol above 10%. This limit, known as the "blend wall," refers to the practical difficulty in incorporating increasing amounts of ethanol into the transportation fuel supply at volumes exceeding those achieved by the sale of nearly all gasoline as E10.
hE15
A 15% hydrous ethanol and 85% gasoline blend, hE15, has been sold at public gas stations in the Netherlands since 2008. Ethanol fuel specifications worldwide traditionally dictate the use of anhydrous ethanol (less than 1% water) for gasoline blending. This results in additional costs, energy usage, and environmental impacts associated with the extra processing step required to dehydrate the hydrous ethanol produced via distillation (3.5–4.9 vol.% water) to meet current anhydrous ethanol specifications. A patented discovery indicates that hydrous ethanol can be used effectively in most ethanol/gasoline blending applications.
According to the Brazilian Agência Nacional do Petróleo (ANP) specification, hydrous ethanol contains up to 4.9 vol.% water. In hE15, this corresponds to up to 0.74 vol.% water in the overall mixture. Japanese and German scientific evidence has revealed that the water acts as an inhibitor of corrosion by ethanol.
The experiments show that water in fuel ethanol inhibits dry corrosion. The alcoholate/alkoxide corrosion stopped at 10,000 ppm water in the E50 experiments by JARI and at 3,500 ppm water in the E20 experiments by TU Darmstadt. In the fuel ethanol, this corresponds to 20,000 ppm, or 2 volume%, in the case of JARI, and 5 × 3,500 = 17,500 ppm, or 1.75 volume%, in the case of TU Darmstadt. The observations are in line with the fact that hydrous ethanol is known to be less corrosive than anhydrous ethanol. The reaction mechanism will be the same at lower-mid blends. When enough water is present in the fuel, the aluminum reacts preferentially with the water to produce aluminum oxide, repairing the protective aluminum oxide layer, which is why the corrosion stops. The aluminum alcoholate/alkoxide does not form a tight oxide layer, which is why the corrosion continues. In other words, water is essential to repair the holes in the oxide layer. Based on the Japanese/German results, a minimum of 2 vol.% or 2.52% m/m water is currently proposed in the revision of the hydrous ethanol specification for blending in petrol at E10+ levels. Water injection has additional positive effects on engine performance (thermodynamic efficiency) and reduces overall CO2 emissions.
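The two unit conversions used above are easy to verify (a short sketch; the function names are mine):

    def water_in_blend(ethanol_frac, water_frac_in_ethanol):
        # Water fraction of the overall blend, e.g. hE15 with 4.9 vol.% water
        return ethanol_frac * water_frac_in_ethanol

    def water_in_ethanol(ppm_in_blend, ethanol_frac):
        # ppm of water relative to the ethanol fraction alone
        return ppm_in_blend / ethanol_frac

    print(water_in_blend(0.15, 0.049))     # hE15: ~0.0074, i.e. 0.74 vol.%
    print(water_in_ethanol(10_000, 0.50))  # JARI E50: 20,000 ppm
    print(water_in_ethanol(3_500, 0.20))   # TU Darmstadt E20: 17,500 ppm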
Overall, a transition from anhydrous to hydrous ethanol for gasoline blending is expected to make a significant contribution to ethanol's cost-competitiveness, fuel cycle net energy balance, air quality, and greenhouse gas emissions.
The blending level above 10% (V/V) was chosen both from a technical (safety) perspective and to distinguish the product in Europe from regular unleaded petrol for reasons of taxes and customer clarity. Small-scale tests have shown that many vehicles with modern engine types run smoothly on this hydrous ethanol blend. Mixed tanking scenarios with anhydrous ethanol blends at the 5% or 10% level do not induce phase separation. As mixing with E0 in logistic systems and engines is not recommended, in particular at extremely low temperatures, a separate specification for controlled usage is presented in a Netherlands Technical Agreement, NTA 8115. The NTA 8115 is written for worldwide application in trading and fuel blending.
E20, E25
E20 contains 20% ethanol and 80% gasoline, while E25 contains 25% ethanol. These blends have been widely used in Brazil since the late 1970s. As a response to the 1973 oil crisis, the Brazilian government made mandatory the blend of ethanol fuel with gasoline, fluctuating between 10% and 22% from 1976 until 1992. Due to this mandatory minimum gasoline blend, pure gasoline (E0) is no longer sold in Brazil. A federal law was passed in October 1993 establishing a mandatory blend of 22% anhydrous ethanol (E22) in the entire country. This law also authorized the Executive to set different percentages of ethanol within pre-established boundaries, and since 2003, these limits were fixed at a maximum of 25% (E25) and a minimum of 20% (E20) by volume. Since then, the government has set the percentage on the ethanol blend according to the results of the sugarcane harvest and ethanol production from sugarcane, resulting in blend variations even within the same year.
Since July 1, 2007, the mandatory blend was set at 25% of anhydrous ethanol (E25) by executive decree, and this has been the standard gasoline blend sold throughout Brazil most of the time as of 2011. However, as a result of a supply shortage and the resulting high ethanol fuel prices, in 2010, the government mandated a temporary 90-day blend reduction from E25 to E20 beginning February 1, 2010. As prices rose abruptly again due to supply shortages that took place again between the 2010 and 2011 harvest seasons, some ethanol had to be imported from the United States, and in April 2011, the government reduced the minimum mandatory blend to 18%, leaving the mandatory blend range between E18 and E25.
All Brazilian automakers have adapted their gasoline engines to run smoothly with this range of mixtures, thus, all gasoline vehicles are built to run with blends from E20 to E25, defined by local law as "common gasoline type C". Some vehicles might work properly with lower concentrations of ethanol, but with a few exceptions, they are unable to run smoothly with pure gasoline, which causes engine knocking, as vehicles traveling to neighboring South American countries have demonstrated. Flex-fuel vehicles, which can run on any type of gasoline E20-E25 up to 100% hydrous ethanol (E100 or hydrated ethanol) ratios, were first available in mid-2003. In July 2008, 86% of all new light vehicles sold in Brazil were flexible-fuel, and only two carmakers build models with a flex-fuel engine optimized to operate with pure gasoline (E0): Renault with the models Clio, Symbol, Logan, Sandero and Mégane, and Fiat with the Siena Tetrafuel.
Thailand introduced E20 in 2008, but shortages in ethanol supplies by mid-2008 caused a delay in the expansion of the E20 fueling station network in the country. By mid-2010, 161 fueling stations were selling E20, and sales have risen 80% since April 2009. The rapid growth in E20 demand is because most vehicle models launched since 2009 were E20-compatible, and sales of E20 are expected to grow faster once more local automakers start producing small, E20-compatible, fuel-efficient cars. The Thai government is promoting ethanol usage through subsidies, as ethanol costs four baht (about 12 US cents) a litre more than gasoline.
A state law approved in Minnesota in 2005 mandated that ethanol comprise 20% of all gasoline sold in this American state beginning in 2013. Successful tests have been conducted to determine the performance under E20 by current vehicles and fuel dispensing equipment designed for E10. However, this mandate was later delayed to 2015, and has never taken effect because the federal EPA has yet to authorize the use of E20 as a replacement for gasoline.
A study commissioned by BP and published in September 2013, concluded that the use of advanced biofuels in the UK, and particularly E20 cellulosic ethanol, is a more cost-effective way of reducing emissions than using plug-in electric vehicles (PEVs) in the timeframe to 2030. The study also found that the use of higher blends of biofuels is complementary to hybrid electric vehicles (HEVs) and plug-in hybrids (PHEVs). Battery electric vehicles (BEVs) can deliver strong CO2 savings with a decarbonised electric grid, but are expected to have significantly higher costs than internal combustion engine vehicles and hybrid cars to 2030, as the latter are expected to be the most popular models by 2030. According to the study, in 2030 an E20 blend in an HEV can achieve a 10% emission savings compared to an HEV running on E5, for an annual fuel cost premium of compared to an annual cost of for an all-electric car.
E70, E75
E70 contains 70% ethanol and 30% gasoline, while E75 contains 75% ethanol. These winter blends are used in the United States and Sweden for E85 flexible-fuel vehicles during the cold weather, but still sold at the pump labeled as E85. The seasonal reduction of the ethanol content to an E85 winter blend is mandated to avoid cold starting problems at low temperatures.
In the US, this seasonal reduction of the ethanol content to E70 applies only in cold regions, where temperatures fall below during the winter. In Wyoming for example, E70 is sold as E85 from October to May. In Sweden, all E85 flexible-fuel vehicles use an E75 winter blend. This blend was introduced since the winter 2006-07 and E75 is used from November until March.
For temperatures below , all E85 flex vehicles require an engine block heater to avoid cold starting problems. The use of this device is also recommended for gasoline vehicles when temperatures drop below . Another option when extreme cold weather is expected is to add more pure gasoline in the tank, thus reducing the ethanol content below the E70 winter blend, or simply not to use E85 during extreme low temperature spells.
E85
E85, a mixture of 85% ethanol and ~15% gasoline, is generally the highest ethanol fuel mixture found in the United States and several European countries, particularly in Sweden, as this blend is the standard fuel for flexible-fuel vehicles. This mixture has an octane rating of 108. The ethanol molecule also carries an oxygen atom, whereas gasoline does not, effectively requiring the internal combustion engine to ingest less air per unit volume, which reduces pumping losses and further intensifies the exothermic chemical reaction. Ethanol fuel is considered, although not widely known as, a form of "chemical supercharging", similar to nitrous oxide (N2O) and nitromethane (CH3NO2).
The 85% limit in the ethanol content was set to reduce ethanol emissions at low temperatures and to avoid cold starting problems during cold weather, at temperatures lower than . A further reduction in the ethanol content is used during the winter in regions where temperatures fall below and this blend is called Winter E85, as the fuel is still sold under the E85 label. A winter blend of E70 is mandated in some regions in the US, while Sweden mandates E75. Some regions in the United States now allow E51 (51% ethanol, 49% gasoline) to be sold as E85 in the winter months.
As of October 2010, nearly 3,000 E85 fuel pumps were in Europe, led by Sweden with 1,699 filling stations. The United States had 3,354 public E85 fuel pumps located in 2,154 cities by August 2014, mostly concentrated in the Midwest.
Thailand introduced E85 fuel by the end of 2008, and by mid-2010, only four E85 filling stations were available, with plans to expand to 15 stations by 2012.
A major restriction hampering sales of E85 flex vehicles, and fueling with E85, is the limited infrastructure available to sell E85 to the public: by 2014 only 2 percent of motor fuel stations offered E85, up from about 1 percent in 2011. At that time, there were only 3,218 gasoline fueling stations selling E85 to the public in the entire U.S., while about 156,000 retail motor fuel outlets did not offer the blend. The number of E85 stations grew from 1,229 in 2007 to 2,442 in 2011, but increased by only 7% from 2011 to 2013, when the total reached 2,625. There is a great concentration of E85 stations in the Corn Belt states; the leading state is Minnesota with 274 stations, followed by Michigan with 231, Illinois with 225, Iowa with 204, Indiana with 188, Texas with 181, Wisconsin with 152, and Ohio with 126. Only eight states do not have E85 available to the public: Alaska, Delaware, Hawaii, Montana, Maine, New Hampshire, Rhode Island, and Vermont. The main constraint on a more rapid expansion of E85 availability is that it requires dedicated storage tanks at filling stations, at an estimated cost of for each dedicated ethanol tank. A study conducted by the U.S. Department of Energy concluded that every service station in America could be converted to handle E85 at a cost of $3.4 billion to $10.1 billion.
ED95
ED95 designates a blend of 95% ethanol and 5% ignition improver; it is used in modified diesel engines where high compression is used to ignite the fuel, as opposed to the operation of gasoline engines, where spark plugs are used. This fuel was developed by Swedish ethanol producer SEKAB. Because of the high ignition temperatures of pure ethanol, the addition of ignition improver is necessary for successful diesel engine operation. A diesel engine running on ethanol also has a higher compression ratio and an adapted fuel system.
This fuel has been used successfully since 1985 in Swedish buses built by Scania, which has produced around 700 ethanol buses, more than 600 of them for Swedish cities, and has more recently also delivered ethanol buses for commercial service in Great Britain, Spain, Italy, Belgium, and Norway. As of June 2010, Stockholm had the largest ethanol ED95 bus fleet in the world.
As of 2010, the Swedish ED95 engine was in its third generation and already complied with Euro 5 emission standards, without any post-treatment of the exhaust gases. The ethanol-powered engine has also been certified as an enhanced environmentally friendly vehicle (EEV) in the Stockholm municipality. The EEV standard, which is stricter than Euro 5, has no set date to enter into force in Europe.
Nottingham became the first city in England to operate a regular bus service with ethanol-fuelled vehicles. Three ED95 single-deck buses entered regular service in the city in March 2008. Soon after, Reading also introduced ED95 double-deck buses.
Under the auspices of the BioEthanol for Sustainable Transport project, more than 138 bioethanol ED95 buses took part in a demonstration trial in four cities, three in Europe and one in Brazil, between 2006 and 2009: 127 ED95 buses operated in Stockholm, five in Madrid, three in La Spezia, and one in Brazil.
In Brazil, the first Scania ED95 bus with a modified diesel engine was introduced as a trial in São Paulo city in December 2007, and since November 2009 two ED95 buses have been in regular service. The Brazilian trial project ran for three years, with performance and emissions monitored by the National Reference Center on Biomass (CENBIO) at the Universidade de São Paulo.
In November 2010, the municipal government of São Paulo city signed an agreement with UNICA, Cosan, Scania and Viação Metropolitana, a local bus operator, to introduce a fleet of 50 ethanol-powered ED95 buses by May 2011. Scania manufactures the bus engine and chassis at its plant in São Bernardo do Campo, São Paulo, using the same technology and fuel as the ED95 buses already operating in Stockholm; the bus bodies are built by the Brazilian manufacturer CAIO. The first ethanol-powered buses were delivered in May 2011, and the 50 buses will start regular service in June 2011 in the southern region of São Paulo. The 50 ED95 buses cost R$ 20 million, and because of the higher cost of the ED95 fuel and the lower energy content of ethanol compared to diesel, one of the firms participating in the cooperation agreement, Raízen (a joint venture between Royal Dutch Shell and Cosan), supplies the fuel to the municipality at 70% of the market price of regular diesel.
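That 70% discount does not fully offset ethanol's lower energy density. A rough sketch follows; it is not from the source, and the lower heating values are typical literature figures only:

```python
# A sketch comparing the cost per unit of energy of ED95 bought at 70%
# of the diesel pump price. Heating values are approximate.

LHV_DIESEL = 35.8  # MJ per litre, typical
LHV_ED95 = 21.2    # MJ per litre, roughly that of hydrous ethanol

def cost_per_mj(price_per_litre: float, lhv: float) -> float:
    return price_per_litre / lhv

diesel_price = 1.00                 # arbitrary unit price for comparison
ed95_price = 0.70 * diesel_price    # the discount described above

ratio = cost_per_mj(ed95_price, LHV_ED95) / cost_per_mj(diesel_price, LHV_DIESEL)
print(f"ED95 energy cost vs diesel: {ratio:.2f}x")   # ~1.18x per MJ
```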
E100
E100 is pure ethanol fuel. Straight hydrous ethanol as an automotive fuel has been widely used in Brazil since the late 1970s for neat ethanol vehicles and more recently for flexible-fuel vehicles. The ethanol fuel used in Brazil is distilled close to the azeotrope mixture of 95.63% ethanol and 4.37% water (by weight) which is approximately 3.5% water by volume.
The azeotrope is the highest concentration of ethanol that can be achieved by simple fractional distillation. The maximum water concentration according to the Agência Nacional do Petróleo (ANP) specification is 4.9 vol.% (approximately 6.1 weight%). The E nomenclature is not adopted in Brazil, but hydrous ethanol can be tagged as E100, meaning it does not contain any gasoline: the water content is not an additive, but rather a residue from the distillation process. However, straight hydrous ethanol is also called E95 by some authors.
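These weight/volume conversions can be checked with pure-component densities. The sketch below is not from the source and ignores the small volume contraction that occurs when ethanol and water mix, so the figures are approximate:

```python
# Verify the weight-percent to volume-percent conversions quoted above,
# assuming ideal mixing of the pure components.

RHO_ETHANOL = 0.789  # g/mL at ~20 C
RHO_WATER = 0.998    # g/mL at ~20 C

def water_vol_percent(water_wt_percent: float) -> float:
    """Convert water content in weight percent to volume percent."""
    v_water = water_wt_percent / RHO_WATER
    v_ethanol = (100.0 - water_wt_percent) / RHO_ETHANOL
    return 100.0 * v_water / (v_water + v_ethanol)

print(f"{water_vol_percent(4.37):.1f} vol% water")  # azeotrope: ~3.5%
print(f"{water_vol_percent(6.1):.1f} vol% water")   # ANP limit:  ~4.9%
```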
The first commercial vehicle capable of running on pure ethanol was the Ford Model T, produced from 1908 through 1927. It was fitted with a carburetor with adjustable jetting, allowing the use of gasoline or ethanol, or a combination of both. At that time, other car manufacturers also provided engines suited to ethanol fuel. Later, in response to the 1973 and 1979 energy crises, the first modern vehicle capable of running on pure hydrous ethanol (E100), the Fiat 147, was launched in the Brazilian market after testing of several prototypes developed by the Brazilian subsidiaries of Fiat, Volkswagen, General Motors and Ford. , there were 1.1 million neat ethanol vehicles still in use in Brazil. Since 2003, newer Brazilian flex-fuel vehicles have been capable of running on pure hydrous ethanol (E100) or on any blend with E20 to E27.5 gasoline (a mixture made with anhydrous ethanol), the national mandatory blend. , there were 17.1 million flexible-fuel vehicles on Brazilian roads.
E100 imposes a limitation on normal vehicle operation, as ethanol's lower vapor pressure (compared to gasoline) causes problems when cold-starting the engine at temperatures below . For this reason, both pure ethanol and E100 flex-fuel vehicles are built with an additional small gasoline reservoir inside the engine compartment, which helps start a cold engine by initially injecting gasoline; once started, the engine switches back to ethanol. An improved flex-fuel engine generation was developed to eliminate the need for the secondary gasoline tank by warming the ethanol fuel during starting, allowing engines to start at temperatures as low as , the lowest temperature expected anywhere in the Brazilian territory. The Polo E-Flex, launched in March 2009, was the first flex-fuel model without an auxiliary tank for cold starts. The warming system, called Flex Start, was developed by Robert Bosch GmbH.
Swedish carmakers have developed engines capable of running on ethanol alone. The Saab Aero X BioPower 100 Concept features a V6 engine fuelled entirely by E100 bioethanol. The limited-edition Koenigsegg CCXR, a version of the CCX converted to use E85 or E100 as well as standard 98-octane gasoline, is currently the fastest and most powerful flex-fuel vehicle: its twin-supercharged V8 produces 1,018 hp when running on biofuel, compared to 806 hp on 91-octane unleaded gasoline.
The higher fuel efficiency of E100 (compared to methanol) in high performance race cars resulted in Indianapolis 500 races in 2007 and 2008 being run on 100% fuel-grade ethanol.
Use limitations
Modifications to engines
The use of ethanol blends in conventional gasoline vehicles is restricted to low-percentage mixtures, as ethanol-gasoline blends are corrosive and can degrade some of the materials in the engine and fuel system. The engine also has to be adjusted for a higher compression ratio than a pure gasoline engine to take advantage of ethanol's higher octane rating, allowing an improvement in fuel efficiency and a reduction of tailpipe emissions. The modifications required for gasoline engines to run smoothly without degrading any materials were established by the Brazilian automotive industry at the beginning of the ethanol program in that country in the late 1970s, and reflect the experience of Volkswagen do Brasil.
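The efficiency gain from a higher compression ratio can be illustrated with the idealized air-standard Otto cycle. This is a sketch, not a real engine model, and the compression ratios chosen are illustrative:

```python
# Idealized view of why a higher compression ratio improves fuel
# efficiency: air-standard Otto cycle efficiency, eta = 1 - r**(1 - gamma).

GAMMA = 1.4  # heat capacity ratio for air

def otto_efficiency(compression_ratio: float) -> float:
    return 1.0 - compression_ratio ** (1.0 - GAMMA)

# Representative ratios: a gasoline engine vs an ethanol engine that
# exploits ethanol's higher knock resistance (values are illustrative).
for r in (9.0, 12.0):
    print(f"r = {r:4.1f} -> ideal efficiency {otto_efficiency(r):.1%}")
# r =  9.0 -> ideal efficiency 58.5%
# r = 12.0 -> ideal efficiency 63.0%
```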
Disadvantages to ethanol fuel blends when used in engines designed exclusively for gasoline include lowered fuel mileage, metal corrosion, deterioration of plastic and rubber fuel system components, clogged fuel systems, fuel injectors, and carburetors, delamination of composite fuel tanks, varnish buildup on engine parts, damaged or destroyed internal engine components, water absorption, fuel phase separation, and shortened fuel storage life. Many major auto, marine, motorcycle, lawn equipment, generator, and other internal combustion engine manufacturers have issued warnings and precautions about the use of ethanol-blended gasolines of any type in their engines, and the Federal Aviation Administration and major aviation engine manufacturers have prohibited the use of automotive gasolines blended with ethanol in light aircraft due to safety issues from fuel system and engine damage.
See also
Butanol fuel
Ethanol fuel
Ethanol fuel energy balance
Ethanol fuel in Brazil
Biofuel in Sweden
Ethanol fuel in the United States
Food vs. fuel
Indirect land use change impacts of biofuels
List of flexible-fuel vehicles by car manufacturer
List of gasoline additives
Notes
References
External links
2011 NACS Annual Fuels Report
Ethanol fuel
Petroleum products