Dataset columns (type, observed range):
id: int64 (values 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values 3 to 51.8k)
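The column summary above matches the schema widget of a Hugging Face dataset viewer. As a minimal sketch of how such a dump could be read (the repository path "user/wikipedia-dump" below is a hypothetical placeholder, not the dataset's actual name):

```python
# Minimal sketch: stream records whose schema matches the columns above.
# NOTE: "user/wikipedia-dump" is a hypothetical placeholder path.
from datasets import load_dataset

ds = load_dataset("user/wikipedia-dump", split="train", streaming=True)

for record in ds:
    # Each record carries: id, url, text, source, categories, token_count.
    print(record["id"], record["source"], record["token_count"])
    break  # inspect just the first record
```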
76,010,471
https://en.wikipedia.org/wiki/Derrick%20Brown%20%28computer%20scientist%29
Derrick Brown was an American computer scientist. Brown helped create "the Black equivalent of the original Yahoo index", called Universal Black Pages. Brown was born in Elloree, South Carolina, in 1969. References Living people Computer scientists People from Elloree, South Carolina 1969 births
Derrick Brown (computer scientist)
Technology
57
810,183
https://en.wikipedia.org/wiki/Access%20network
An access network is a type of telecommunications network which connects subscribers to their immediate service provider. It is contrasted with the core network, which connects local providers to one another. The access network may be further divided between feeder plant or distribution network, and drop plant or edge network. Telephone heritage An access network, also referred to as an outside plant, refers to the series of wires, cables and equipment lying between a consumer/business telephone termination point (the point at which a telephone connection reaches the customer) and the local telephone exchange. The local exchange contains banks of automated switching equipment which direct a call or connection to the consumer. The access network is perhaps one of the oldest assets a telecoms operator would own. In 2007–2008 many telecommunication operators experienced increasing problems maintaining the quality of the records which describe the network. According to a 2006 independent Yankee Group report, operators globally experience profit leakage in excess of $17 billion each year. The access network is also perhaps the most valuable asset an operator owns, since this is what physically allows them to offer a service. Access networks consist largely of pairs of copper wires, each traveling in a direct path between the exchange and the customer. In some instances, these wires may even consist of aluminum, which was commonly used in the 1960s and 1970s following a massive increase in the cost of copper. The price increase was temporary, but the effects of this decision are still felt today, as electromigration within the aluminum wires can cause an increase in resistance. This resistance causes degradation which can eventually lead to the complete failure of the wire to transport data. Access is essential to the future profitability of operators, who are experiencing massive reductions in revenue from plain old telephone services, due in part to the opening of historically nationalized companies to competition, and in part to increased use of mobile phones and voice over IP (VoIP) services. Operators have offered additional services such as xDSL-based broadband and IPTV (Internet Protocol television) to protect profits. The access network is again the main barrier to achieving these profits, since operators worldwide have accurate records of only 40% to 60% of the network. Without understanding or even knowing the characteristics of these enormous copper spider webs, it is very difficult and expensive to 'provision' (connect) new customers and assure the data rates required for next-generation services. Access networks around the world are evolving to include more and more optical fiber technology. Optical fiber already makes up the majority of core networks and will creep closer and closer to the customer until a full transition is achieved, delivering value-added services over fiber to the home (FTTH). Access process The process of communicating with a network begins with an access attempt, in which one or more users interact with a communications system to enable initiation of user information transfer. An access attempt itself begins with issuance of an access request by an access originator.
An access attempt ends either in successful access or in access failure: an unsuccessful access that results in termination of the attempt in any manner other than initiation of user information transfer between the intended source and destination (sink) within the specified maximum access time. Access time is the time delay or latency between a requested access attempt and successful access being completed. In a telecommunications system, access time values are measured only on access attempts that result in successful access. Access failure can be the result of access outage, user blocking, incorrect access, or access denial. Access denial (system blocking) can include: Access failure caused by the issuing of a system blocking signal by a communications system that does not have a camp-on busy signal feature. Access failure caused by exceeding the maximum access time and nominal system access time fraction during an access attempt. Charging for access An access charge is a charge made by a local exchange carrier for use of its local exchange facilities for a purpose such as the origination or termination of network traffic that is carried to or from a distant exchange by an interexchange carrier. Although some access charges are billed directly to interexchange carriers, a significant percentage of all access charges are paid by the local end users. Mobile access networks GERAN UTRAN E-UTRAN CDMA2000 GSM UMTS 1xEVDO VoLTE Wi-Fi WiMAX Optical distribution network A passive optical network (PON) uses single-mode optical fiber in the outside plant, optical splitters and optical distribution frames, duplexed so that both upstream and downstream signals share the same fiber on separate wavelengths. Faster PON standards generally support a higher split ratio of users per PON, but may also use reach extenders/amplifiers where extra coverage is needed. Optical splitters creating a point-to-multipoint topology are also the same technology regardless of the type of PON system, making any PON network upgradable by changing the optical network terminals (ONT) and the optical line terminal (OLT) at each end, with minimal change to the physical network. Access networks usually must also support point-to-point technologies such as Ethernet, which bypasses any outside plant splitter to achieve a dedicated link to the telephone exchange. Some PON networks use a "home run" topology where roadside cabinets only contain patch panels so that all splitters are located centrally. While a 20% higher capital cost could be expected, home run networks may encourage a more competitive wholesale market since providers' equipment can achieve higher utilization. See also Edge device Hierarchical internetworking model Internet access IP connectivity access network Local loop Passive Optical Network References External links Interactive presentation introducing the technology and design of access networks Telecommunications infrastructure Network access Fiber to the premises
Access network
Engineering
1,139
31,694,592
https://en.wikipedia.org/wiki/Wozencraft%20ensemble
In coding theory, the Wozencraft ensemble is a set of linear codes in which most of the codes satisfy the Gilbert-Varshamov bound. It is named after John Wozencraft, who proved its existence. The ensemble is described by Massey (1963), who attributes it to Wozencraft. Justesen (1972) used the Wozencraft ensemble as the inner codes in his construction of a strongly explicit asymptotically good code. Existence theorem Theorem: Let $\varepsilon > 0$. For a large enough $k$, there exists an ensemble of inner codes $\{C_{in}^{\alpha}\}_{\alpha \in \mathbb{F}_{q^k} \setminus \{0\}}$ of rate $\frac{1}{2}$, where $N = q^k - 1$, such that for at least $(1 - \varepsilon)N$ values of $\alpha$, $C_{in}^{\alpha}$ has relative distance $\ge H_q^{-1}\left(\frac{1}{2} - \varepsilon\right)$. Here relative distance is the ratio of minimum distance to block length, and $H_q$ is the $q$-ary entropy function defined as follows: $H_q(x) = x \log_q(q-1) - x \log_q x - (1-x)\log_q(1-x)$. In fact, to show the existence of this set of linear codes, we will specify this ensemble explicitly as follows: for $\alpha \in \mathbb{F}_{q^k} \setminus \{0\}$, define the inner code $C_{in}^{\alpha} : \mathbb{F}_q^k \to \mathbb{F}_q^{2k}$, $C_{in}^{\alpha}(x) = (x, \alpha x)$. Here we can notice that $x \in \mathbb{F}_q^k$ and $\alpha \in \mathbb{F}_{q^k}$. We can do the multiplication $\alpha x$ since $\mathbb{F}_q^k$ is isomorphic to $\mathbb{F}_{q^k}$. This ensemble is due to Wozencraft and is called the Wozencraft ensemble. For all $x, y \in \mathbb{F}_q^k$, we have the following facts: $C_{in}^{\alpha}(x) + C_{in}^{\alpha}(y) = C_{in}^{\alpha}(x + y)$; for any $a \in \mathbb{F}_q$, $a C_{in}^{\alpha}(x) = C_{in}^{\alpha}(a x)$. So $C_{in}^{\alpha}$ is a linear code for every $\alpha \in \mathbb{F}_{q^k} \setminus \{0\}$. Now we know that the Wozencraft ensemble contains $N = q^k - 1$ linear codes with rate $\frac{1}{2}$. In the following proof, we will show that at least $(1 - \varepsilon)N$ of those linear codes have relative distance $\ge H_q^{-1}\left(\frac{1}{2} - \varepsilon\right)$, i.e. they meet the Gilbert-Varshamov bound. Proof To prove that there are at least $(1 - \varepsilon)N$ linear codes in the Wozencraft ensemble having relative distance $\ge H_q^{-1}\left(\frac{1}{2} - \varepsilon\right)$, we will prove that there are at most $\varepsilon N$ linear codes having relative distance $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right)$, i.e., having distance $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$. Notice that in a linear code, the distance is equal to the minimum weight of all non-zero codewords of that code. This fact is a property of linear codes. So if one non-zero codeword has weight $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$, then that code has distance $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$. Let $P$ be the set of linear codes having distance $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$. Then there are $|P|$ linear codes, each having some codeword of weight $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$. Lemma. Two linear codes $C_{in}^{\alpha_1}$ and $C_{in}^{\alpha_2}$ with $\alpha_1, \alpha_2 \in \mathbb{F}_{q^k}$ distinct and non-zero, do not share any non-zero codeword. Proof. Suppose there exist distinct non-zero elements $\alpha_1 \ne \alpha_2$ such that the linear codes $C_{in}^{\alpha_1}$ and $C_{in}^{\alpha_2}$ contain the same non-zero codeword $y$. Now since $y \in C_{in}^{\alpha_1}$, $y = (y_1, \alpha_1 y_1)$ for some $y_1 \in \mathbb{F}_q^k$, and similarly $y = (y_1, \alpha_2 y_1)$. Moreover, since $y$ is non-zero, we have $y_1 \ne 0$. Therefore $\alpha_1 y_1 = \alpha_2 y_1$, then $(\alpha_1 - \alpha_2) y_1 = 0$ and $y_1 \ne 0$. This implies $\alpha_1 = \alpha_2$, which is a contradiction. Any linear code having distance $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$ has some codeword of weight $< H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$. Now the Lemma implies that we have at least $|P|$ different non-zero codewords $y$ with $wt(y) < H_q^{-1}\left(\frac{1}{2} - \varepsilon\right) \cdot 2k$ (one such codeword for each linear code). Here $wt(y)$ denotes the weight of codeword $y$, which is the number of non-zero positions of $y$. Denote by $\mathrm{Vol}_q(r, n)$ the number of vectors in $\mathbb{F}_q^n$ of weight at most $r$ (the volume of a Hamming ball of radius $r$). Then: $|P| \le \mathrm{Vol}_q\left(H_q^{-1}\left(\tfrac{1}{2} - \varepsilon\right) \cdot 2k,\, 2k\right) \le q^{2k\left(\frac{1}{2} - \varepsilon\right)} = q^{k - 2\varepsilon k} \le \varepsilon (q^k - 1) = \varepsilon N$ for large enough $k$. So $|P| \le \varepsilon N$, and therefore the set of linear codes having relative distance $\ge H_q^{-1}\left(\frac{1}{2} - \varepsilon\right)$ has at least $N - \varepsilon N = (1 - \varepsilon)N$ elements. See also Hamming bound Justesen code Linear code References Massey, J. L. (1963), Threshold Decoding, MIT Press. Justesen, J. (1972), "A class of constructive asymptotically good algebraic codes", IEEE Transactions on Information Theory. External links Lecture 28: Justesen Code. Coding theory's course. Prof. Atri Rudra. Lecture 9: Bounds on the Volume of a Hamming Ball. Coding theory's course. Prof. Atri Rudra. Coding Theory's Notes: Gilbert-Varshamov Bound. Venkatesan Guruswami Error detection and correction
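Because the ensemble is fully explicit, its distance profile can be checked by brute force at toy parameters. The following Python sketch (an illustration only, with q = 2 and k = 4 chosen so the search is tiny) enumerates every code C_α(x) = (x, αx) over GF(2⁴) and prints its minimum and relative distance:

```python
# Brute-force check of the Wozencraft ensemble over F_2 with k = 4:
# each nonzero alpha in GF(2^4) gives a rate-1/2 code x -> (x, alpha*x).
K = 4                # codes map F_2^4 -> F_2^8
IRRED = 0b10011      # x^4 + x + 1, irreducible over F_2

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^K), represented as bitmasks."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> K:   # reduce modulo the irreducible polynomial
            a ^= IRRED
    return result

def min_distance(alpha: int) -> int:
    """Minimum weight over nonzero codewords (x, alpha*x); for a linear
    code this equals the minimum distance."""
    return min(
        bin(x).count("1") + bin(gf_mul(alpha, x)).count("1")
        for x in range(1, 2 ** K)
    )

for alpha in range(1, 2 ** K):
    d = min_distance(alpha)
    print(f"alpha={alpha:2d}  distance={d}  relative distance={d / (2 * K):.3f}")
```

At k = 4 the blocklength is far too short for the asymptotic bound to bite (for example, α = 1 yields codewords (x, x), which include weight-2 words), but rerunning the same loop with a larger K and a matching irreducible polynomial exhibits the behavior the theorem claims: all but a small fraction of the codes attain large relative distance.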
Wozencraft ensemble
Engineering
624
46,883,788
https://en.wikipedia.org/wiki/Penicillium%20neomiczynskii
Penicillium neomiczynskii is a species of fungus in the genus Penicillium. References neomiczynskii Fungi described in 2011 Fungus species
Penicillium neomiczynskii
Biology
36
20,244,070
https://en.wikipedia.org/wiki/Philips%20Intimate%20Massager
The Philips Intimate Massager is a range of electric personal massagers made by Philips, first introduced to the UK market in 2008. When the line was launched, commentators questioned whether Philips' movement into the sex-toy market was a sign that sex toys were gaining mainstream acceptance. After only two years, however, the line was discontinued due to "lack of demand". See also Hitachi Magic Wand References Sources and further reading Philips Intimate Massager official site Philips changes the mood with Warm Intimate Massager Electronics giant Philips to launch sex toy range Massage devices Intimate massager
Philips Intimate Massager
Biology
117
41,113
https://en.wikipedia.org/wiki/Epoch
In chronology and periodization, an epoch or reference epoch is an instant in time chosen as the origin of a particular calendar era. The "epoch" serves as a reference point from which time is measured. The moment of epoch is usually decided by congruity, or by following conventions understood from the epoch in question. The epoch moment or date is usually defined from a specific, clear event of change, an epoch event. In a more gradual change, a deciding moment is chosen when the epoch criterion was reached. Calendar eras Pre-modern eras The Yoruba calendar (Kọ́jọ́dá) uses 8042 BC as the epoch, regarded as the year in which the god Obatala created Ile-Ife, an event also regarded as the creation of the earth. Anno Mundi (years since the creation of the world) is used in the Byzantine calendar (5509 BC). Anno Mundi (years since the creation of the world) is also used in the Hebrew calendar (3761 BC). The Mesoamerican Long Count Calendar uses the creation of the fourth world in 3114 BC. Olympiads, the ancient Greek era of four-year periods between Olympic Games, beginning in 776 BC. Ab urbe condita ("from the foundation of the city"), used to some extent by Roman calendars of the Roman imperial period (753 BC). Buddhist calendars tend to use the epoch of 544 BC (date of Buddha's parinirvana). The term Hindu calendar may refer to a number of traditional Indian calendars. A notable example of a Hindu epoch is the Vikram Samvat (58 BC), also used in modern times as the national calendar of Nepal. The Julian and Gregorian calendars use as epoch the Incarnation of Jesus as calculated in the 6th century by Dionysius Exiguus. (Subsequent research has shown that this moment is about four years after the best estimate for the date of birth of Jesus.) This epoch was applied retrospectively to the Julian calendar, long after its original creation by Julius Caesar. The epoch of the Islamic calendar is the Hijra (AD 622). The year count in this calendar shifts relative to the solar year count, as the calendar is purely lunar: its year consists of 12 lunations and is thus ten or eleven days shorter than a solar year. This calendar denotes "lunar years" as Anno Hegiræ ([since] the year of the Hijra) or AH. This calendar is used in Sunni Islam and related sects. The epoch of the official Iranian calendar is also the Hijra, but it is a solar calendar; each year begins at the Northern spring equinox. This calendar is used in Shia Islam and related sects. Modern eras The Bahá'í calendar is dated from the vernal equinox of the year the Báb proclaimed his religion (AD 1844). Years are grouped in Váḥids of 19 years, and Kull-i-Shay of 361 (19×19) years. In Thailand in 1888 King Chulalongkorn decreed a National Thai Era dating from the founding of Bangkok on April 6, 1782. In 1912, New Year's Day was shifted to April 1. In 1941, Prime Minister Phibunsongkhram decided to count the years since 543 BC. This is the Thai solar calendar using the Thai Buddhist Era. Except for this era, it is the Gregorian calendar. In the French Republican Calendar, a calendar used by the French government for about twelve years from late 1793, the epoch was the beginning of the "Republican Era", September 22, 1792 (the day the French First Republic was proclaimed, one day after the Convention abolished the Ancien Regime). The Indian national calendar, introduced in 1957, follows the Saka era (AD 78).
The Minguo calendar used by officials of Taiwan and its predecessor dates from January 1, 1912, the first year after the Xinhai Revolution, which overthrew the Qing Empire. North Korea uses a system that starts in 1912 (= Juche 1), the year of the birth of its founder Kim Il-Sung. The Fascist Era dates to Mussolini's March on Rome in 1922, and was in use only in countries under hegemony of the Fascist regime of Benito Mussolini. It has been defunct since the fall of the Italian Social Republic in 1945. In the scientific Before Present system of numbering years for purposes of radiocarbon dating, the reference date is January 1, 1950 (though the specific date January 1 is quite unnecessary, as radiocarbon dating has limited precision). Different branches of Freemasonry have selected different years to date their documents according to a Masonic era, such as the Anno Lucis (A.L.). The Holocene calendar uses 10,000 BC as the epoch, the beginning of the Holocene epoch on the geological time scale. Regnal eras The official Japanese system numbers years from the accession of the current emperor, regarding the calendar year during which the accession occurred as the first year. A similar system existed in China before 1912, being based on the accession year of the emperor (1911 was thus the third year of the Xuantong period). With the establishment of the Republic of China in 1912, the republican era was introduced. It is still very common in Taiwan to date events via the republican era. The People's Republic of China adopted the common era calendar in 1949 (the 38th year of the Chinese Republic). Other applications An epoch in computing is the time at which the representation is zero. For example, Unix time is represented as the number of seconds since 00:00:00 UTC on 1 January 1970, not counting leap seconds. An epoch in astronomy is a reference time used for consistency in calculation of positions and orbits. A common astronomical epoch is J2000, which is noon on January 1, 2000, Terrestrial Time. An epoch in geochronology is a period of time, typically on the order of tens of millions of years. The current epoch is the Holocene. See also References Calendar eras Calendaring standards Chronology
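The computing sense of "epoch" above is easy to make concrete; here is a brief, illustrative Python sketch showing Unix time as elapsed seconds since the 1970 epoch:

```python
# Illustrative sketch: Unix time counts seconds elapsed since the epoch
# 1970-01-01 00:00:00 UTC (leap seconds are not counted).
from datetime import datetime, timezone

unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
now = datetime.now(timezone.utc)
print(f"Seconds since the Unix epoch: {(now - unix_epoch).total_seconds():.0f}")

# Converting back: a timestamp of zero is the epoch itself.
print(datetime.fromtimestamp(0, tz=timezone.utc))  # 1970-01-01 00:00:00+00:00
```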
Epoch
Physics
1,264
101,388
https://en.wikipedia.org/wiki/Lists%20of%20engineers
Types of engineer include: Chartered Engineer European Engineer Incorporated Engineer Professional Engineer Royal Engineer Lists of individual engineers by discipline include: List of aerospace engineers List of canal engineers List of chemical engineers List of civil engineers List of combat engineering corps List of electrical engineers List of environmental engineers List of genetic engineers List of industrial engineers List of mechanical engineers List of structural engineers List of systems engineers See also List of British engineers List of inventors List of architects List of urban planners Lists of scientists List of fictional scientists and engineers Lists of people in STEM fields Lists of engineering lists
Lists of engineers
Technology
110
4,295,487
https://en.wikipedia.org/wiki/One-electron%20universe
The one-electron universe postulate, proposed by theoretical physicist John Wheeler in a telephone call to Richard Feynman in the spring of 1940, is the hypothesis that all electrons and positrons are actually manifestations of a single entity moving backwards and forwards in time. According to Feynman, Wheeler announced the idea by saying: "Feynman, I know why all electrons have the same charge and the same mass." "Why?" "Because, they are all the same electron!" A similar "zigzag world line description of pair annihilation" was independently devised by E. C. G. Stueckelberg at the same time. Overview The idea is based on the world lines traced out across spacetime by every electron. Rather than have myriad such lines, Wheeler suggested that they could all be parts of one single line like a huge tangled knot, traced out by the one electron. Any given moment in time is represented by a slice across spacetime, and would meet the knotted line a great many times. Each such meeting point represents a real electron at that moment. At those points, half the lines will be directed forward in time and half will have looped round and be directed backwards. Wheeler suggested that these backwards sections appeared as the antiparticle to the electron, the positron. Many more electrons have been observed than positrons, and electrons are thought to comfortably outnumber them. According to Feynman, he raised this issue with Wheeler, who speculated that the missing positrons might be hidden within protons. Feynman was struck by Wheeler's insight that antiparticles could be represented by reversed world lines, and credits this to Wheeler, saying in his Nobel speech: "I did not take the idea that all the electrons were the same one from him as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole!" Feynman later proposed this interpretation of the positron as an electron moving backward in time in his 1949 paper "The Theory of Positrons". Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then, is no creation nor annihilation, but only a change of directions of moving particles, from past to future, or from future to past." See also Eddington number Identical particles Retrocausality T-symmetry References External links Thought experiments in quantum mechanics Quantum electrodynamics 1940 in science Physical cosmology Conceptual models Richard Feynman Electron
One-electron universe
Physics,Chemistry,Astronomy
461
22,928,014
https://en.wikipedia.org/wiki/Elementary%20cellular%20automaton
In mathematics and computability theory, an elementary cellular automaton is a one-dimensional cellular automaton where there are two possible states (labeled 0 and 1) and the rule to determine the state of a cell in the next generation depends only on the current state of the cell and its two immediate neighbors. There is an elementary cellular automaton (rule 110, defined below) which is capable of universal computation, and as such it is one of the simplest possible models of computation. The numbering system There are 8 = 2³ possible configurations for a cell and its two immediate neighbors. The rule defining the cellular automaton must specify the resulting state for each of these possibilities, so there are 256 = 2^(2³) possible elementary cellular automata. Stephen Wolfram proposed a scheme, known as the Wolfram code, to assign each rule a number from 0 to 255, which has become standard. Each possible current configuration is written in order, 111, 110, ..., 001, 000, and the resulting state for each of these configurations is written in the same order and interpreted as the binary representation of an integer. This number is taken to be the rule number of the automaton. For example, 110₁₀ = 01101110₂. So rule 110 is defined by the transition rule: Reflections and complements Although there are 256 possible rules, many of these are trivially equivalent to each other up to a simple transformation of the underlying geometry. The first such transformation is reflection through a vertical axis and the result of applying this transformation to a given rule is called the mirrored rule. These rules will exhibit the same behavior up to reflection through a vertical axis, and so are equivalent in a computational sense. For example, if the definition of rule 110 is reflected through a vertical line, the following rule (rule 124) is obtained: Rules which are the same as their mirrored rule are called amphichiral. Of the 256 elementary cellular automata, 64 are amphichiral. The second such transformation is to exchange the roles of 0 and 1 in the definition. The result of applying this transformation to a given rule is called the complementary rule. For example, if this transformation is applied to rule 110, we get the following rule and, after reordering, we discover that this is rule 137: There are 16 rules which are the same as their complementary rules. Finally, the previous two transformations can be applied successively to a rule to obtain the mirrored complementary rule. For example, the mirrored complementary rule of rule 110 is rule 193. There are 16 rules which are the same as their mirrored complementary rules. Of the 256 elementary cellular automata, there are 88 which are inequivalent under these transformations. It turns out that reflection and complementation are automorphisms of the monoid of one-dimensional cellular automata, as they both preserve composition. Single 1 histories One method used to study these automata is to follow an automaton's history with an initial state of all 0s except for a single cell with a 1. When the rule number is even (so that an input of 000 does not compute to a 1) it makes sense to interpret the state at each time t as an integer expressed in binary, producing a sequence a(t) of integers. In many cases these sequences have simple, closed-form expressions or have a generating function with a simple form. The following rules are notable: Rule 28 The sequence generated is 1, 3, 5, 11, 21, 43, 85, 171, ... .
This is the sequence of Jacobsthal numbers and has generating function (1 + 2x)/((1 − 2x)(1 + x)). It has the closed form expression a(t) = (2^(t+2) − (−1)^t)/3. Rule 156 generates the same sequence. Rule 50 The sequence generated is 1, 5, 21, 85, 341, 1365, 5461, 21845, ... . This has generating function 1/((1 − x)(1 − 4x)). It has the closed form expression a(t) = (4^(t+1) − 1)/3. Note that rules 58, 114, 122, 178, 186, 242 and 250 generate the same sequence. Rule 54 The sequence generated is 1, 7, 17, 119, 273, 1911, 4369, 30583, ... . This has generating function (1 + 7x)/((1 − x²)(1 − 16x²)); the even- and odd-indexed terms satisfy a(2m) = (16^(m+1) − 1)/15 and a(2m+1) = 7·a(2m). Rule 60 The sequence generated is 1, 3, 5, 15, 17, 51, 85, 255, .... This can be obtained by taking successive rows of Pascal's triangle modulo 2 and interpreting them as integers in binary, which can be graphically represented by a Sierpinski triangle. Rule 90 The sequence generated is 1, 5, 17, 85, 257, 1285, 4369, 21845, ... . This can be obtained by taking successive rows of Pascal's triangle modulo 2 and interpreting them as integers in base 4. Note that rules 18, 26, 82, 146, 154, 210 and 218 generate the same sequence. Rule 94 The sequence generated is 1, 7, 27, 119, 427, 1879, 6827, 30039, ... . Rule 102 The sequence generated is 1, 6, 20, 120, 272, 1632, 5440, 32640, ... . This is simply the sequence generated by rule 60 (which is its mirror rule) multiplied by successive powers of 2. Rule 110 The sequence generated is 1, 6, 28, 104, 496, 1568, 7360, 27520, 130304, 396800, ... . Rule 110 has the perhaps surprising property that it is Turing complete, and thus capable of universal computation. Rule 150 The sequence generated is 1, 7, 21, 107, 273, 1911, 5189, 28123, ... . This can be obtained by taking the coefficients of the successive powers of (1 + x + x²) modulo 2 and interpreting them as integers in binary. Rule 158 The sequence generated is 1, 7, 29, 115, 477, 1843, 7645, 29491, ... . This has generating function (1 + 7x + 12x² − 4x³)/((1 − x²)(1 − 16x²)). Rule 188 The sequence generated is 1, 3, 5, 15, 29, 55, 93, 247, ... . Rule 190 The sequence generated is 1, 7, 29, 119, 477, 1911, 7645, 30583, ... . This has generating function (1 + 7x + 12x²)/((1 − x²)(1 − 16x²)). Rule 220 The sequence generated is 1, 3, 7, 15, 31, 63, 127, 255, ... . This is the sequence of Mersenne numbers and has generating function 1/((1 − x)(1 − 2x)). It has the closed form expression a(t) = 2^(t+1) − 1. Note: rule 252 generates the same sequence. Rule 222 The sequence generated is 1, 7, 31, 127, 511, 2047, 8191, 32767, ... . This is every other entry in the sequence of Mersenne numbers and has generating function (1 + 2x)/((1 − x)(1 − 4x)). It has the closed form expression a(t) = 2^(2t+1) − 1. Note that rule 254 generates the same sequence. Images for rules 0-99 These images depict space-time diagrams, in which each row of pixels shows the cells of the automaton at a single point in time, with time increasing downwards. They start with an initial automaton state in which a single cell, the pixel in the center of the top row of pixels, is in state 1 and all other cells are 0. Random initial state A second way to investigate the behavior of these automata is to examine their histories starting from a random state. This behavior can be better understood in terms of Wolfram classes. Wolfram gives the following examples as typical rules of each class. Class 1: Cellular automata which rapidly converge to a uniform state. Examples are rules 0, 32, 160 and 232. Class 2: Cellular automata which rapidly converge to a repetitive or stable state. Examples are rules 4, 108, 218 and 250. Class 3: Cellular automata which appear to remain in a random state.
Examples are rules 22, 30, 126, 150, 182. Class 4: Cellular automata which form areas of repetitive or stable states, but also form structures that interact with each other in complicated ways. An example is rule 110. Rule 110 has been shown to be capable of universal computation. Each computed result is placed under that result's source, creating a two-dimensional representation of the system's evolution. In the following gallery, this evolution from random initial conditions is shown for each of the 88 inequivalent rules. Below each image is the rule number used to produce the image, and in brackets the rule numbers of equivalent rules produced by reflection or complementing are included, if they exist. As mentioned above, the reflected rule would produce a reflected image, while the complementary rule would produce an image with black and white swapped. Unusual cases In some cases the behavior of a cellular automaton is not immediately obvious. For example, for Rule 62, interacting structures develop as in Class 4. But in these interactions at least one of the structures is annihilated, so the automaton eventually enters a repetitive state and the cellular automaton is Class 2. Rule 73 is Class 2 because any time there are two consecutive 1s surrounded by 0s, this feature is preserved in succeeding generations. This effectively creates walls which block the flow of information between different parts of the array. There are a finite number of possible configurations in the section between two walls, so the automaton must eventually start repeating inside each section, though the period may be very long if the section is wide enough. These walls will form with probability 1 for completely random initial conditions. However, if the condition is added that the lengths of runs of consecutive 0s or 1s must always be odd, then the automaton displays Class 3 behavior, since the walls can never form. Rule 54 is Class 4 and also appears to be capable of universal computation, but has not been studied as thoroughly as Rule 110. Many interacting structures have been cataloged which collectively are expected to be sufficient for universality. References External links "Elementary Cellular Automata" at the Wolfram Atlas of Simple Programs 32 bytes long MS-DOS executable drawing by cellular automaton (Rule 110 by default) A showcase of all the rules picked at random Minimal CA emulation with Wolfram rule parser online in vanilla Javascript Cellular automata
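The numbering scheme and the single-1 histories above are easy to reproduce; the following Python sketch (an illustration, not tied to any particular implementation) decodes a Wolfram rule number into its transition rule and regenerates the integer sequences quoted earlier:

```python
# Evolve an elementary cellular automaton from a single 1, reading each
# generation as a binary numeral (valid when the rule number is even).
def step(cells: list[int], rule: int) -> list[int]:
    """One generation: bit n of the rule number gives the output for the
    neighborhood whose binary value (left, center, right) is n."""
    padded = [0, 0] + cells + [0, 0]   # zero background; pattern may grow
    return [
        (rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def history(rule: int, generations: int) -> list[int]:
    """The sequence a(t) for an initial state with a single 1."""
    cells, values = [1], []
    for _ in range(generations):
        values.append(int("".join(map(str, cells)), 2))
        cells = step(cells, rule)
    return values

print(history(90, 8))   # [1, 5, 17, 85, 257, 1285, 4369, 21845]
print(history(28, 8))   # [1, 3, 5, 11, 21, 43, 85, 171]
```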
Elementary cellular automaton
Mathematics
2,100
32,544,581
https://en.wikipedia.org/wiki/Uniformity%20tape
Uniformity tape is a microstructured thin-film mechanism for mixing and diffusing the light generated by light-emitting diodes (LEDs) in edge-lit digital displays, including computer monitors, televisions and signage. Purpose Compared to other sources of illumination, such as fluorescent and incandescent bulbs, LEDs are energy efficient and increasingly inexpensive. As hard-point light sources, however, LEDs have several significant limitations in edge-lit digital displays. First, the light generated by LEDs must be spread evenly to all parts of the display by a light guide (typically a plate of poly(methyl methacrylate)), which transports light by total internal reflection. Extraction patterns on the surface of the light guide help to distribute the light evenly. However, even with a light guide, dark zones can be noticeable along the injection edge closest to the LEDs. The more widely spaced the LEDs are, the more pronounced the dark zones, but closely packed LEDs are less energy-efficient and can create thermal management issues. Dark zones can be camouflaged by a border or bezel, but this limits design options and the space available for displaying information. Uniformity tape is designed to be applied directly to the injection edge of a light guide for the purpose of diffusing the light generated by LEDs. The microstructure consists of 12-50 μm linear aspheric prisms aligned perpendicular to the plane of the guide. Since the microstructure is small compared to the spacing of the LEDs, no registration of the tape with the LEDs is required. Research conducted by 3M indicates that the tape enables a 50 percent reduction in LED count with little effect on overall system efficiency. Injection losses can be as low as 1-2 percent with a properly designed light guide. Additionally, when applied to the input edge of the light guide, the optically clear adhesive will wet out and conform to the surface roughness of the light guide edge, meaning that the PMMA plate does not require the level of polishing required in a conventional light guide. See also Backlight Poly(methyl methacrylate) Total internal reflection References Optical materials Light-emitting diodes
Uniformity tape
Physics
454
10,228,607
https://en.wikipedia.org/wiki/Strychnine%20poisoning
Strychnine poisoning is poisoning induced by strychnine. It can be fatal to humans and other animals and can occur by inhalation, swallowing or absorption through eyes or mouth. It produces some of the most dramatic and painful symptoms of any known toxic reaction, making it quite noticeable and a common choice for assassinations and poison attacks. For this reason, strychnine poisoning is often portrayed in literature and film, such as the murder mysteries written by Agatha Christie. The probable lethal oral dose in humans is 1.5 to 2 mg/kg. Similarly, the median lethal dose for dogs, cats, and rats ranges from 0.5 to 2.35 mg/kg. Presentation in humans Ten to twenty minutes after exposure, the body's muscles begin to spasm, starting with the head and neck in the form of trismus and risus sardonicus. The spasms then spread to every muscle in the body, with nearly continuous convulsions, and get worse at the slightest stimulus. The convulsions progress, increasing in intensity and frequency until the backbone arches continually. Convulsions lead to lactic acidosis, hyperthermia and rhabdomyolysis. These are followed by postictal depression. Death comes from asphyxiation caused by paralysis of the neural pathways that control breathing, or by exhaustion from the convulsions. The subject usually dies within two to three hours after exposure. One medical student in 1896 described the experience in a letter to The Lancet: Three years ago I was reading for an examination, and feeling "run down". I took 10 minims of strychnia solution (B.P.) with the same quantity of dilute phosphoric acid well diluted twice a day. On the second day of taking it, towards the evening, I felt a tightness in the "facial muscles" and a peculiar metallic taste in the mouth. There was great uneasiness and restlessness, and I felt a desire to walk about and do something rather than sit still and read. I lay on the bed and the calf muscles began to stiffen and jerk. My toes drew up under my feet, and as I moved or turned my head flashes of light kept darting across my eyes. I then knew something serious was developing, so I crawled off the bed and scrambled to a case in my room and got out (fortunately) the bromide of potassium and the chloral. I had no confidence or courage to weigh them, so I guessed the quantity—about 30 gr. [30 grains, about 2 grams] bromide of potassium and 10 gr. chloral—put them in a tumbler with some water, and drank it off. My whole body was in a cold sweat, with anginous attacks in the precordial region, and a feeling of "going off." I did not call for medical aid, as I thought that the symptoms were declining. I felt better, but my lower limbs were as cold as ice, and the calf muscles kept tense and were jerking. There was no opisthotonos, only a slight stiffness at the back of the neck. Half an hour later, as I could judge, I took the same quantity of bromide, potassium and chloral—and a little time after I lost consciousness and fell into a "profound sleep," awaking in the morning with no unpleasant symptoms, no headache, &c., but a desire "to be on the move" and a slight feeling of stiffness in the jaw. These worked off during the day. Treatment There is no antidote for strychnine poisoning. 
Strychnine poisoning demands aggressive management with early control of muscle spasms, intubation for loss of airway control, toxin removal (decontamination), intravenous hydration and potentially active cooling efforts in the context of hyperthermia as well as hemodialysis in kidney failure (strychnine has not been shown to be removed by hemodialysis). Treatment involves oral administration of activated charcoal, which adsorbs strychnine within the digestive tract; unabsorbed strychnine is removed from the stomach by gastric lavage, along with tannic acid or potassium permanganate solutions to oxidize strychnine. Activated charcoal Activated charcoal is a substance that can bind to certain toxins in the digestive tract and prevent their absorption into the bloodstream. The effectiveness of this treatment, as well as how long it remains effective after ingestion, is subject to debate. According to one source, activated charcoal is only effective within one hour of poison being ingested, although that source does not address strychnine specifically. Other sources specific to strychnine state that activated charcoal may be used after one hour of ingestion, depending on dose and type of strychnine-containing product. Therefore, other treatment options are generally favoured over activated charcoal. The use of activated charcoal is considered dangerous in patients with tenuous airways or altered mental states. Other treatments Most other treatment options focus on controlling the convulsions that arise from strychnine poisoning. These treatments involve keeping the patient in a quiet and darkened room, anticonvulsants such as phenobarbital or diazepam, muscle relaxants such as dantrolene, barbiturates and propofol, and chloroform or heavy doses of chloral, bromide, urethane or amyl nitrite. If a poisoned person is able to survive for 6 to 12 hours subsequent to the initial dose, they have a good prognosis. The sine qua non of strychnine toxicity is the "awake" seizure, in which tonic-clonic activity occurs but the patient is alert and oriented throughout and afterwards. George Harley (1829–1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning. Detection in biological specimens Strychnine is easily quantitated in body fluids and tissues using instrumental methods in order to confirm a diagnosis of poisoning in hospitalized victims or to assist in the forensic investigation of a case of fatal overdosage. The concentrations in blood or urine of those with symptoms are often in the 1–30 mg/L range. Strychnine toxicity in animals Strychnine poisoning in animals occurs usually from ingestion of baits designed for use against rodents (especially gophers and moles) and coyotes. Rodent baits are commonly available over-the-counter, but coyote baits are illegal in the United States. However, since 1990 in the United States most baits containing strychnine have been replaced with zinc phosphide baits. The most common domestic animal to be affected is the dog, either through accidental ingestion or intentional poisoning. The onset of symptoms is 10 to 120 minutes after ingestion. Symptoms include seizures, a "sawhorse" stance, and opisthotonus (rigid extension of all four limbs). Death is usually secondary to respiratory paralysis. Treatment is by detoxification using activated charcoal, pentobarbital for the symptoms, and artificial respiration for apnea.
In most western nations a special license is needed to possess and use strychnine for agricultural purposes. Notable instances The most notable incidents which probably involved strychnine poisoning are listed here. Alexander the Great may have been poisoned by strychnine in contaminated wine in 323 BC. Christiana Edmunds, the "Chocolate Cream Poisoner", laced chocolates with strychnine. She poisoned a number of people and murdered a four-year-old boy in Brighton in the 1870s. Emeline Meaker murdered her husband's eight-year-old niece Alice by lacing her drink with strychnine. As Alice convulsed from the effects of the poison, Meaker held her hand over Alice's mouth to muffle her cries until the girl was dead. Emeline Meaker was executed for Alice's murder in 1883. Margot Begemann, a friend of Vincent van Gogh, attempted suicide by ingesting strychnine in 1884. In the late 19th century, serial killer Thomas Neill Cream used strychnine to murder several prostitutes on the streets of London. Walter Horsford was hanged in 1898 for murdering his cousin with strychnine, which he had sent to her on the pretence that it was a harmless abortifacient. He was implicated in two other murders which also involved mailing it to women who suspected they were pregnant by him. Belle Gunness of La Porte, Indiana, also known as "Lady Bluebeard", allegedly used strychnine to murder some of her victims at the turn of the 20th century. Jane Stanford, co-founder of Stanford University and wife of California governor Leland Stanford, died from strychnine poisoning in 1905. Her last recorded words were "My jaws are stiff. This is a horrible death to die." Her murderer was never identified. Early 20th-century Portuguese poet and novelist Mário de Sá-Carneiro committed suicide via strychnine poisoning in 1916, aged 25. French inventor Jean-Pierre Vaquier poisoned Alfred Jones, the husband of his lover Mabel Jones, by putting strychnine in his hangover cure in Byfleet, Surrey, in 1924. Vaquier was hanged for the crime. Hubert Chevis, a lieutenant in the British Army, died in suspicious circumstances after eating partridge laced with strychnine at Blackdown Camp, Surrey, in 1931. The poisoner was never identified. Yoshio Nishimura, a prominent Japanese expatriate and president of the Japanese Association, died of strychnine poisoning shortly after arriving at police headquarters in Singapore for questioning by Special Branch in 1934. The coroner rendered an open verdict. The incident was speculated to be connected to espionage. In 1938, Delta Blues legend Robert Johnson died after drinking a bottle of whiskey which was allegedly laced with strychnine. This account of Johnson's death is disputed, as he died several days after the alleged poisoning. Oskar Dirlewanger, the leader of the SS Sturmbrigade Dirlewanger in the Second World War, was known to have given death sentences to several Jewish women by stripping them naked and having them injected with strychnine. He and his officers then watched them convulse until they died. Irene Bates, mother of possible Zodiac Killer victim Cheri Jo Bates, died of strychnine poisoning in early July 1969. She had been living in the city of Riverside, California. On April 9, 1973, Rev. Jimmy Ray Williams and Buford Pack ingested strychnine during a "signs following" religious service in the Holiness Church of God in Jesus Name in Carson Springs, Tennessee. They both refused medical treatment and died as a result of strychnine poisoning.
Carolyn Nadine Davis died of strychnine poisoning in mid-July 1973. She is included among the Santa Rosa Hitchhiker Murder victims. In October 1987, successful wax museum owner Patsy Wright died from taking cold medicine laced with strychnine. The story was featured on a segment of Unsolved Mysteries, and it is suggested that someone very close to Wright knew her habit of taking nighttime cold medicine when she had trouble sleeping and laced her cold medicine with strychnine. The case remains unsolved. A woman in San Diego, California, was poisoned with strychnine by her husband in 1990. Though she dialed 911, she did not mention her name or address, and rescue workers had difficulty locating the victim. Persistence on the part of the dispatcher and the rescue workers allowed them to locate and extract the victim, but she eventually died in the hospital. Turgut Özal, 8th president of the Republic of Turkey, was said to have been assassinated in 1993 by strychnine poisoning. A special investigation into the former president's death was commissioned. His body was exhumed for testing in 2012, but the results were inconclusive. In 2008, Hannes Hirtzberger, the Mayor of Spitz in Lower Austria, was reported to have been poisoned by local wine producer Helmut Osberger using strychnine. Hirtzberger barely survived and suffered permanent disability. The body of David Lytton was found on Saddleworth Moor, northwest England, in December 2015 after he consumed a lethal dose of strychnine. His identity remained a mystery until January 2017. In folklore Mount Chocorua in the White Mountains of New Hampshire is named for a Native American chief who reputedly died near the summit after being hunted by a posse in response to a killing spree. One account says that the cause of his attacks was the death of his young son from an accidental dose of strychnine while in the care of a friendly white settler. Some Pentecostal snake handlers in the United States claim to have drunk strychnine in order to demonstrate their faith, following a Biblical passage: "They shall take up serpents; and if they drink any deadly thing, it shall not hurt them..." (Mark 16:18) In music In "Cyanide Sweet Tooth Suicide", Shinedown mentions a woman addicted to substances taking strychnine. In his song "I'm Gonna Kill You", Hank Green sings about wanting to put someone on a strychnine diet. In "The End of All Things To Come", Mudvayne sings about killing the entire world with strychnine. The Sonics' song "Strychnine" (later covered by The Cramps and The Fuzztones) is about the consumption of strychnine. In the song "You Love Us" by Manic Street Preachers, strychnine is mentioned. Strychnine is mentioned in Hannah Fury's song "The Necklace of Marie Antoinette". Tom Lehrer's song "Poisoning Pigeons in the Park" mentions feeding strychnine to a pigeon. In "Composing" from Boys Night Out's concept album Trainwreck, The Patient poisons his entire family at the dinner table with strychnine. In "Visions", Twisted Insane mentions strychnine twice. In "The Bomb Song", Darwin Deez sings about people being sick from strychnine in the water. Strychnos nux-vomica, a natural source of strychnine, is mentioned in "Hill of the Poison Tree" by death metal band Miseration. Strychnine.213, the sixth studio album by Belgian death metal band Aborted, takes its title from strychnine. "I Killed Robert Johnson" by The Stone Foxes mentions killing a man with strychnine. Immortal Technique mentions strychnine in the song "That's What It Is".
Yeasayer mentions "deadly quaker buttons" in the song "I Am Chemistry"; these are the seeds of the strychnine tree (Strychnos nux-vomica L.). Graham Parker's song "Harridan of Yore" contains the lyric "A tiny vial of strychnine hung around her neck". Brazilian artist Elis Regina, in the song "Tiro Ao Álvaro", sings that "teu olhar mata mais do...que veneno estriquinina", literally "your gaze kills more than strychnine poison". In "Coyote, My Little Brother," American folksinger Peter La Farge sings how the environment has been "strychnined" to kill off coyote populations. In The Mountain Goats' song "An Antidote for Strychnine", the narrator sings about trying to find an antidote after being poisoned by strychnine. The Jellyfish song "Too Much, Too Little, Too Late" features the lyrics "Remember when murder was only killing time and an axe to grind was a bitter gulp of strychnine?" Fictional instances Strychnine has also served as an inspiration in several books, movies and TV series. In literature In William S. Burroughs' novel Naked Lunch, strychnine is described as a "hot shot", a poisonous shot of heroin sold to informants. In Anne of Green Gables, Miss Cuthbert is warned against adopting an orphan girl with a story about a girl who poisoned her entire adopted family by putting strychnine in the well. In Agatha Christie's novel The Mysterious Affair at Styles, Mrs. Emily Inglethorp was killed by strychnine poisoning. In Agatha Christie's short story The Coming of Mr Quin, Mr Appleton died of strychnine poisoning. In Agatha Christie's story How Does Your Garden Grow?, Miss Amelia Barrowby was killed by strychnine poisoning. The Joker makes a cameo appearance in the DC Comics Elseworlds graphic novel Gotham by Gaslight as a serial killer who tries to kill himself with strychnine; the poison causes muscle contractions that leave him with a permanent grin. Additionally, a derivative of strychnine is cited as a key ingredient in the Joker's deadly toxic gas in the main continuity. In the James Herriot novels All Creatures Great and Small (1972) and All Things Wise and Wonderful (1977), the main character/local veterinarian deals with several victims of strychnine poisoning when a dog-killer attacks the neighborhood dogs. In "The Fox Hunter" chapter of William Le Queux's Secrets of the Foreign Office, a strychnine derivative is suspected in the murder of Beatrice Graham and the attempted murder of the protagonist Duckworth Drew. The poison was applied to pins concealed in Graham's fur shawl and Drew's hotel towel. In Gabriel García Márquez's novel One Hundred Years of Solitude, Colonel Aureliano Buendía survived strychnine poisoning. Herb in Die Softly by Christopher Pike. In Peter Robinson's novel Cold Is the Grave, Chief Constable Riddle's daughter, Emily, is accidentally killed by cocaine laced with a lethal dose of strychnine. In Hans Scherfig's novel Stolen Spring, a high school student kills his teacher with a strychnine-tainted malt drop. In the manga Spiral: Suiri no Kizuna (by Kyou Shirodaira and illustrated by Eita Mizuno), main character Ayumu Narumi takes strychnine after he is threatened by Rio Takeuchi to test his luck in a game. In The Sign of the Four by Sir Arthur Conan Doyle, Bartholomew Sholto is killed by a poison dart. Dr. Watson confirms it was strychnine poisoning, causing tetanus, thus the devilish grin on the dead Sholto's face. In The Invisible Man by H. G. Wells, the Invisible Man relates that he took strychnine as a sleeping aid.
"Strychnine," he says, "is a grand tonic...to take the flabbiness out of a man." In The Count of Monte Cristo by Alexandre Dumas, the Saint-Mérans and the servant Barrois are consecutively poisoned to death having ingested beverages containing strychnine. The death of Barrois is depicted with symptoms of acute convulsions, asphyxia, severe pain, ringing in the ears and visual glares that are precipitated by touch. In The Anubis Gates the protagonist combats strychnine poisoning by eating ash and cinder of a fireplace, remembering that carbon neutralizes strychnine from stomach. In "Ghoul" (1987), a serial killer police procedural by Michael Slade, a woman is essentially tortured to death by strychnine poisoning. She is tied spread-eagle on a waterbed by ropes as she suffers escalating muscle spasms. The undulations of the fluid mattress encourages more and more agonizing spasms until death ensues. Police detectives examining the crime scene later note how rope loops tied to the bedposts were flattened by the force put upon them by the victim's contortions. In Stephen King's novel Mr. Mercedes, Brady Hartsfield plans to poison a dog using hamburger laced with strychnine-based gopher poison. His mother finds and eats the hamburger herself, and Brady comes home to find her suffering agonizing convulsions. When she dies, her mouth is twisted into a grin. In Jack London's short story "The Story of Jees Uck", Neil Bonner is poisoned by eating biscuits laced with strychnine by Amos Pentley. Neil survives and sends Amos into the frozen wilderness to his death. In Jack London's short story "Just Meat", partners-in-crime Matt and Jim successfully steal $500,000 of diamonds and pearls from an unscrupulous jewel merchant. Overcome by greed, both characters want to eliminate the other and unknowingly poison each other with strychnine. In Jack London's short story "Moon-Face", the unnamed protagonist/narrator develops a deep and obsessive hate for his neighbor who is always cheerful even under the most dire situations. He poisons the neighbor's dog with strychnine and beefsteak in an effort to make him even the least bit unhappy. The neighbor, despite the death of his dog, continues to be unreasonably merry and joyful, forcing the protagonist to create a devious plan. Onscreen, in film A Blueprint for Murder (1953) is about how a stepmother is stopped after beginning to kill her family members for insurance money. Norman Bates' mother and her lover were killed with strychnine in Alfred Hitchcock's Psycho (1960). The sheriff comments: "Ugly way to die." The source book by Robert Bloch provides additional details about the strychnine murders. In J. Lee Thompsons's movie Cape Fear (1962), Max Cady poisons Sam Bowden's dog with strychnine. At the end of the movie Office Space (1999), Milton mentions to a waiter: "And yes, I won't be leaving a tip, 'cause I could... I could shut this whole resort down. Sir? I'll take my traveler's checks to a competing resort. I could write a letter to your board of tourism and I could have this place condemned. I could put... I could put... strychnine in the guacamole. There was salt on the glass, BIG grains of salt." In Wes Anderson's The Grand Budapest Hotel (2014), Madame Desgoffe-und-Taxis is found dead by strychnine poisoning. Later, a bottle labeled "strychnine poison" is seen on the desk of an assassin in her son Dmitri's employ. 
In Rituparno Ghosh's Bengali film Shubho Mahurat (2003) (an adaptation of Agatha Christie's The Mirror Crack'd from Side to Side), veteran actress Padmini Chowdhury (played by Sharmila Tagore) commits a series of murders by variously administering strychnine to the victims. Upon being exposed by Ranga Pishima (played by Rakhee Gulzar), Padmini also commits suicide using strychnine. In the Bollywood film Detective Byomkesh Bakshy! (2015), council member Gajanand Sikdaar is killed by adding strychnine to his breakfast just before he can reveal the murderer's name to the protagonist, Bakshy. A bottle of strychnine is found in his nephew Sukumar's room. It is later revealed that his mistress, Angoori Devi, had poisoned him and framed Sukumar on orders of her beloved Yang Guang. In the film The Wild Geese (1978), Roger Moore's character Shawn Flynn poisons the son of a crime lord by making him eat the drugs he had him transport, having laced them with strychnine. In the film Red Dog (2011), the red kelpie was believed to have been deliberately poisoned with strychnine in 1979. In Steven Spielberg's Jaws (1975), Mr. Hooper planned to kill the shark with an injection of strychnine nitrate administered through a shark dart. In the film Tracks (2013), the main character's dog Diggity has to be put down after it is implied that she ate strychnine-laced bait intended to kill wild dingos. Onscreen, in television The murder in the Monk episode "Mr. Monk and the Secret Santa" is carried out by poisoning a bottle of port with strychnine. In New York Undercover season 4, episode 10, "Sign o' the Times", a serial killer kills young men at raves by giving them strychnine-laced Ecstasy. Inmates in the popular TV series The Wire were given cocaine and heroin doses laced with strychnine. In season 9 of The Office, Dwight tells Angela that his aunt had poisoned her nurse with strychnine. In season 4 of The Glades episode "Glade-iators!", the victim is poisoned with moisturizer laced with strychnine-based rat poison. In season 3 of Father Brown episode "The Time Machine", the murderer uses strychnine to kill two people and make it look like suicide. In season 6 of ER episode "Humpty Dumpty", a patient comes in with strychnine poisoning, as diagnosed by Dr. Gabriel Lawrence. In season 4 of Game of Thrones episode "The Lion and the Rose", King Joffrey dies from poison; the symptoms resemble those of strychnine poisoning. In the tenth episode of The Haunting of Hill House, Luke Crain nearly dies after injecting himself with strychnine rat poison while under the spell of a malevolent ghost. In season 8B of the popular Australian prison series Wentworth, inmate Sheila Bausch (Marta Dusseldorp) is given one final choice by fellow inmate Lou Kelly (Kate Box): ingest a vial of strychnine, or have her throat slit. Bausch opts for the former. Bausch is subsequently euthanised by Marie Winter (Susie Porter) to end the pain and suffering caused by the poisoning. References External links CDC Emergency Preparedness and Response: Facts About Strychnine The Merck Veterinary Manual: Strychnine Poisoning: Introduction Poisons
Strychnine poisoning
Environmental_science
5,531
7,376
https://en.wikipedia.org/wiki/Cosmic%20microwave%20background
The cosmic microwave background (CMB, CMBR), or relic radiation, is microwave radiation that fills all space in the observable universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the electromagnetic spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s. The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods, the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space. However, the photons have grown less energetic due to the cosmological redshift associated with the expansion of the universe. The surface of last scattering refers to a shell at the right distance in space so photons are now received that were originally emitted at the time of decoupling. The CMB is not completely smooth and uniform, showing a faint anisotropy that can be mapped by sensitive detectors. Ground and space-based experiments such as COBE, WMAP and Planck have been used to measure these temperature inhomogeneities. The anisotropy structure is determined by various interactions of matter and photons up to the point of decoupling, which results in a characteristic lumpy pattern that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peak detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters. Features The cosmic microwave background radiation is an emission of uniform black body thermal energy coming from all directions. Intensity of the CMB is expressed in kelvin (K), the SI unit of temperature. The CMB has a thermal black body spectrum at a temperature of 2.725 K. Variations in intensity are expressed as variations in temperature. The blackbody temperature uniquely characterizes the intensity of the radiation at all wavelengths; a measured brightness temperature at any wavelength can be converted to a blackbody temperature. The radiation is remarkably uniform across the sky, very unlike the almost point-like structure of stars or clumps of stars in galaxies. The radiation is isotropic to roughly one part in 25,000: the root mean square variations are just over 100 μK, after subtracting a dipole anisotropy from the Doppler shift of the background radiation.
The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at 369.82 ± 0.11 km/s towards the constellation Crater near its boundary with the constellation Leo. The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion. Despite the very small degree of anisotropy in the CMB, many aspects can be measured with high precision, and such measurements are critical for cosmological theories. In addition to temperature anisotropy, the CMB should have an angular variation in polarization. The polarization at each direction in the sky has an orientation described in terms of E-mode and B-mode polarization. The E-mode signal is a factor of 10 weaker than the temperature anisotropy; it supplements the temperature data, as the two are correlated. The B-mode signal is even weaker but may contain additional cosmological data. The anisotropy is related to the physical origin of the polarization. Excitation of an electron by linearly polarized light generates polarized light at 90 degrees to the incident direction. If the incoming radiation is isotropic, different incoming directions create polarizations that cancel out. If the incoming radiation has quadrupole anisotropy, residual polarization will be seen. Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort, with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late times. The CMB contains the vast majority of photons in the universe by a factor of 400 to 1; the number density of photons in the CMB is one billion times (10⁹) the number density of matter in the universe. Without the expansion of the universe to cause the cooling of the CMB, the night sky would shine as brightly as the Sun. The energy density of the CMB is 0.260 eV/cm³, corresponding to about 411 photons/cm³.
History
Early speculations
In 1931, Georges Lemaître speculated that remnants of the early universe may be observable as radiation, but his candidate was cosmic rays. Richard C. Tolman showed in 1934 that expansion of the universe would cool blackbody radiation while maintaining a thermal spectrum. The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in a correction they prepared for a paper by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K.
Discovery
The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Robert H. Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments.
The antenna was constructed in 1959 to support Project Echo—the National Aeronautics and Space Administration's passive communications satellites, which used large earth-orbiting aluminized plastic balloons as reflectors to bounce radio signals from one point on the Earth to another. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2 K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.
Cosmic origin
The interpretation of the cosmic microwave background was a controversial issue in the late 1960s. Alternative explanations included energy from within the solar system, from galaxies, from intergalactic plasma and from multiple extragalactic radio sources. Two requirements would show that the microwave radiation was truly "cosmic". First, the intensity versus frequency (the spectrum) needed to be shown to match a thermal or blackbody source. This was accomplished by 1968 in a series of measurements of the radiation temperature at higher and lower wavelengths. Second, the radiation needed to be shown to be isotropic, the same from all directions. This was also accomplished by 1970, demonstrating that this radiation was truly cosmic in origin.
Progress on theory
In the 1970s numerous studies showed that tiny deviations from isotropy in the CMB could result from events in the early universe. Harrison, Peebles and Yu, and Zel'dovich realized that the early universe would require quantum inhomogeneities that would result in temperature anisotropy at the level of 10−4 or 10−5. Rashid Sunyaev, using the alternative name relic radiation, calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background.
COBE
After a lull in the 1970s caused in part by the many experimental difficulties in measuring the CMB at high precision, increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave the first upper limits on the large-scale anisotropy. The other key event in the 1980s was the proposal by Alan Guth for cosmic inflation. This theory of rapid spatial expansion gave an explanation for large-scale isotropy by allowing causal connection just before the epoch of last scattering. With this and similar theories, detailed prediction encouraged larger and more ambitious experiments. The NASA Cosmic Background Explorer (COBE) satellite, which orbited Earth in 1989–1996, detected and quantified the large-scale anisotropies at the limit of its detection capabilities. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in Physics for 2006 for this discovery.
Precision cosmology
Inspired by the COBE results, a series of ground- and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next two decades.
The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large-scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustic oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the MAT/TOCO experiment, and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.
Observations after COBE
Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity, and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, the Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB, and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum.
Wilkinson Microwave Anisotropy Probe
In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large-scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid-switching radiometers at five frequencies to minimize non-sky signal noise. The data from the mission was released in five installments, the last being the nine-year summary. The results are broadly consistent with Lambda-CDM models based on 6 free parameters, fitting into Big Bang cosmology with cosmic inflation.
Degree Angular Scale Interferometer
Atacama Cosmology Telescope
Planck Surveyor
A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background.
The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 380,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth (10−30) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years and the Hubble constant was measured to be 67.74 ± 0.46 (km/s)/Mpc.
South Pole Telescope
Theoretical models
The cosmic microwave background radiation and the cosmological redshift–distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was more compact, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K, when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.726 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred, at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6 × 10−5 of the total density of the universe. Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.
Predictions based on the Big Bang model
In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there. According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms. This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen. This epoch is generally known as the "time of last scattering" or the period of recombination or decoupling. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,089 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale factor. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV): Tr = 2.725 K × (1 + z). The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.
Primary anisotropy
The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures.
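As a quick worked check of the redshift–temperature relation quoted above, evaluating it at the redshift of last scattering recovers the decoupling temperature (all figures here are the ones given earlier in the text):

```latex
T_r(z) = 2.725\,\mathrm{K}\,(1+z)
\quad\Longrightarrow\quad
T_r(1089) = 2.725\,\mathrm{K} \times 1090 \approx 2.97 \times 10^{3}\,\mathrm{K} \approx 3000\,\mathrm{K},
```

in agreement with the roughly 3,000 K recombination temperature cited above.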
The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—the ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations, called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures.
Adiabatic density perturbations
In an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons, etc.) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.
Isocurvature density perturbations
In an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings. Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down: the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe, and the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring. These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t)dt. The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed.
However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and thus when it was complete, the universe was roughly 487,000 years old.
Late time anisotropy
Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: small-scale anisotropies are erased (just as when looking at an object through fog, details of the object appear fuzzy), and the physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad-angle polarization is correlated with the broad-angle temperature perturbation. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift around 10. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (Population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes. The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zeldovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the cosmic microwave background to be gravitationally redshifted or blueshifted due to changing gravitational fields.
Alternative theories
The standard cosmology that includes the Big Bang "enjoys considerable popularity among the practicing cosmologists". However, there are challenges to the standard Big Bang framework for explaining CMB data. In particular, standard cosmology requires fine-tuning of some free parameters, with different values supported by different experimental data.
As an example of the fine-tuning issue, standard cosmology cannot predict the present temperature of the relic radiation, T0. This value of T0 is one of the best results of experimental cosmology, and the steady state model can predict it. However, alternative models have their own set of problems, and they have only made post-facto explanations of existing observations. Nevertheless, these alternatives have played an important historic role in providing ideas for, and challenges to, the standard explanation.
Polarization
The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-mode (or gradient-mode) and B-mode (or curl-mode). This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence.
E-modes
The E-modes arise from Thomson scattering in a heterogeneous plasma. E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI).
B-modes
B-modes are expected to be an order of magnitude weaker than the E-modes. The former are not produced by standard scalar-type perturbations, but are generated by gravitational waves during cosmic inflation shortly after the Big Bang. However, gravitational lensing of the stronger E-modes can also produce B-mode polarization. Detecting the original B-mode signal requires analysis of the contamination caused by lensing of the relatively strong E-mode signal.
Primordial gravitational waves
Models of "slow-roll" cosmic inflation in the early universe predict primordial gravitational waves that would impact the polarisation of the cosmic microwave background, creating a specific pattern of B-mode polarization. Detection of this pattern would support the theory of inflation, and its strength can confirm or exclude different models of inflation. Claims that this characteristic pattern of B-mode polarization had been measured by the BICEP2 instrument were later attributed to cosmic dust, due to new results of the Planck experiment.
Gravitational lensing
The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level.
Multipole analysis
The CMB angular anisotropies are usually presented in terms of power per multipole. The map of temperature across the sky, T(θ, φ), is written as coefficients of spherical harmonics: T(θ, φ) = Σℓ,m aℓm Yℓm(θ, φ), where the term aℓm measures the strength of the angular oscillation in Yℓm(θ, φ), and ℓ is the multipole number while m is the azimuthal number. The azimuthal variation is not significant and is removed by applying the angular correlation function, giving the power spectrum term Cℓ = ⟨|aℓm|²⟩ = (1/(2ℓ + 1)) Σm |aℓm|². Increasing values of ℓ correspond to higher multipole moments of the CMB, meaning more rapid variation with angle.
CMBR monopole term (ℓ = 0)
The monopole term, T = 2.7255 ± 0.0006 K (the uncertainty is one standard deviation), is the constant isotropic mean temperature of the CMB. This term must be measured with absolute temperature devices, such as the FIRAS instrument on the COBE satellite.
CMBR dipole anisotropy (ℓ = 1)
The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1), a cosine function.
The amplitude of the CMB dipole is around 3.36 mK. The CMB dipole moment is interpreted as the peculiar motion of the Earth relative to the CMB. Its amplitude depends on the time of year due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression; the modulation of this term has a period of one year, which fits the observations made by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at 369.82 ± 0.11 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at about 620 km/s in the direction of galactic longitude ℓ ≈ 272°, b ≈ 30°. The dipole is now used to calibrate mapping studies.
Multipole (ℓ ≥ 2)
The temperature variation in the CMB temperature maps at higher multipoles, ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch at a redshift of around 1,100. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot, dense environment, electrons and protons could not form any neutral atoms. The baryons in the early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon–baryon plasma. Soon after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down, and these fluctuations were "frozen into" the CMB maps we observe today.
Data analysis challenges
Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short-scale structure of the CMB power spectrum. Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques.
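To make the multipole decomposition above concrete, here is a minimal numerical sketch of how a power spectrum term Cℓ is obtained by averaging |aℓm|² over m. This is a toy illustration, not the WMAP or Planck pipeline: the coefficient values are invented stand-ins, and a real analysis would first estimate the aℓm from a foreground-cleaned sky map.

```c
/* Minimal sketch: averaging |a_lm|^2 over m to estimate C_l.
 * The a_lm values below are invented for illustration only. */
#include <stdio.h>
#include <complex.h>

#define LMAX 3

int main(void) {
    /* Store a_lm for m = 0..l. Because the temperature map is real,
     * a_{l,-m} = (-1)^m conj(a_{l,m}), so |a_{l,-m}|^2 = |a_{l,m}|^2. */
    double complex a[LMAX + 1][LMAX + 1] = {{0}};
    a[2][0] = 12.0;
    a[2][1] = 3.0 - 4.0 * I;
    a[2][2] = 1.0 + 2.0 * I;

    for (int l = 2; l <= LMAX; l++) {
        double sum = cabs(a[l][0]) * cabs(a[l][0]);
        for (int m = 1; m <= l; m++)
            sum += 2.0 * cabs(a[l][m]) * cabs(a[l][m]); /* +m and -m */
        double C_l = sum / (2.0 * l + 1.0); /* average over 2l+1 modes */
        printf("l = %d  C_l = %.3f\n", l, C_l);
    }
    return 0;
}
```

In a real pipeline this averaging step comes after map-making and foreground removal, and the measured Cℓ values are then compared with model predictions when fitting cosmological parameters.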
Anomalies
With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (the ℓ = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by about 5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a higher angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: the chief scientist of WMAP, Charles L. Bennett, suggested coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things." Measurements of the density of quasars based on Wide-field Infrared Survey Explorer data find a dipole significantly different from the one extracted from the CMB anisotropy. This difference is in conflict with the cosmological principle.
Future evolution
Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it will no longer be detectable, and will be superseded first by the one produced by starlight, and perhaps later by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay.
Timeline of prediction, discovery and interpretation
Thermal (non-microwave background) temperature predictions
1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6 K.
1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula the effective temperature corresponding to this density is 3.18° absolute ... black body".
1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K.
1931 – The term microwave is first used in print: "When trials with wavelengths as low as 18 cm.
were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1.
1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal.
1946 – Robert Dicke predicts "... radiation from cosmic matter" at < 20 K, but did not refer to background radiation.
1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion-year-old universe), commenting that it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation.
1953 – Erwin Finlay-Freundlich, in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3 K, and in the following year values of 1.9 K and 6.0 K.
Microwave background radiation predictions and measurements
1941 – Andrew McKellar detected a "rotational" temperature of 2.3 K for the interstellar medium by comparing the population of CN doublet lines measured by W. S. Adams in a B star.
1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred.
1953 – George Gamow estimates 7 K based on a model that does not rely on a free parameter.
1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, initially reported a near-isotropic background radiation of 3 kelvins, plus or minus 2; he did not recognize the cosmological significance and later revised the error bars to 20 K.
1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K", with the radiation intensity independent of either time or direction of observation. Although Shmaonov did not recognize it at the time, it is now clear that he did observe the cosmic microwave background at a wavelength of 3.2 cm.
1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, where they name the CMB radiation phenomenon as detectable.
1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the Big Bang.
1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs–Wolfe effect).
1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent wells of potential.
1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect).
1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies.
1983 – The RELIKT-1 Soviet CMB anisotropy experiment is launched.
1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black body form of the CMB spectrum with exquisite precision, showing that the microwave background has a nearly perfect black-body spectrum with T = 2.73 K, and thereby strongly constrains the density of the intergalactic medium.
January 1992 – Scientists who analysed data from RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar.
1992 – Scientists who analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background.
1995 – The Cosmic Anisotropy Telescope performs the first high-resolution observations of the cosmic microwave background.
1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the MAT/TOCO, BOOMERanG, and MAXIMA experiments. The BOOMERanG experiment makes higher-quality maps at intermediate resolution, and confirms that the universe is "flat".
2002 – Polarization discovered by DASI.
2003 – The CBI and the Very Small Array produce yet higher-quality maps at high resolution (covering small areas of the sky).
2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher-quality map at low and intermediate resolution of the whole sky (WMAP provides high-resolution data, but improves on the intermediate-resolution maps from BOOMERanG).
2004 – E-mode polarization spectrum obtained by the CBI.
2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher-quality map of the high-resolution structure not mapped by WMAP.
2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very-high-redshift clusters of galaxies using the Sunyaev–Zel'dovich effect.
2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and prediction that the universe expansion leaves behind background radiation, thus providing a model for the Big Bang theory.
2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data.
2006 – Two of COBE's principal investigators, George Smoot and John Mather, receive the Nobel Prize in Physics for their work on precision measurement of the CMBR.
2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ continue to be consistent with the standard Lambda-CDM model.
2010 – The first all-sky map from the Planck telescope is released.
2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales.
2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announce the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported.
2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales.
2019 – Planck telescope analyses of their final 2018 data continue to be released.
In popular culture
In the Stargate Universe TV series (2009–2011), an ancient spaceship, Destiny, was built to study patterns in the CMBR, which turns out to be a sentient message left over from the beginning of time.
In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently observed age of the universe.
In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself.
The 2017 issue of the Swiss 20-franc bill lists several astronomical objects with their distances – the CMB is mentioned with 430 · 10¹⁵ light-seconds.
In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the cosmic microwave background.
See also
Notes
References
Further reading
External links
Student Friendly Intro to the CMB – A pedagogic, step-by-step introduction to the cosmic microwave background power spectrum analysis, suitable for those with an undergraduate physics background. More in depth than typical online sites; less dense than cosmology texts.
CMBR Theme on arxiv.org
Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast. The Big Bang and Cosmic Microwave Background – October 2006
Visualization of the CMB data from the Planck mission
Astronomical radio sources Astrophysics Cosmic background radiation B-modes Inflation (cosmology) Observational astronomy Physical cosmological concepts Radio astronomy
Cosmic microwave background
Physics,Astronomy
9,697
19,159,493
https://en.wikipedia.org/wiki/Microsoft%20POSIX%20subsystem
Microsoft POSIX subsystem is one of four subsystems shipped with the first versions of Windows NT, the other three being the Win32 subsystem, which provided the primary API for Windows NT, plus the OS/2 and security subsystems. This subsystem implements only the POSIX.1 standard (also known as IEEE Std 1003.1-1990 or ISO/IEC 9945-1:1990), primarily covering the kernel and C library programming interfaces, which allowed a program written for other POSIX.1-compliant operating systems to be compiled and run under Windows NT. The Windows NT POSIX subsystem did not provide the interactive user environment parts of POSIX, originally standardized as POSIX.2. That is, Windows NT did not provide a POSIX shell nor any Unix commands out of the box, except for pax. The NT POSIX subsystem also did not provide any of the POSIX extensions that postdated the creation of Windows NT 3.1, such as those for POSIX Threads or POSIX IPC. The NT POSIX subsystem was included with the first versions of Windows NT because of 1980s US federal government requirements listed in Federal Information Processing Standard (FIPS) 151-2. Briefly, these documents required that certain types of government purchases be POSIX-compliant, so that if Windows NT had not included this subsystem, computing systems based on it would not have been eligible for some government contracts. Windows NT versions 3.5, 3.51 and 4.0 were certified as compliant with FIPS 151-2. The runtime environment of the subsystem is provided by two files: psxss.exe and psxdll.dll. A POSIX application uses psxdll.dll to communicate with the subsystem while communicating with posix.exe to provide display capabilities on the Windows desktop. The POSIX subsystem was replaced in Windows XP and Windows Server 2003 by Windows Services for UNIX (SFU), which is based in part on OpenBSD code and other technology developed by Interix, a company later purchased by Microsoft. SFU was removed in later versions of Windows, beginning with Windows 8 and Windows Server 2012. SFU is logically, though not formally, replaced by the Windows Subsystem for Linux (WSL), introduced in the Windows 10 Anniversary Update and in Windows Server 2016 version 1709.
See also
MKS Toolkit
UWIN
Cygwin
UnxUtils
Windows Subsystem for Linux
References
Further reading
Compiling Executables for the Classic POSIX Subsystem on Windows, a guide by Markus Gaasedelen
Windows components POSIX Compatibility layers
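As an illustration of the API surface described above (POSIX.1 kernel and C library interfaces, with no shell utilities, threads, or later extensions), the following is a minimal sketch of the kind of program such a subsystem could compile and run. It is a generic POSIX.1-1990 example, not code drawn from Microsoft's implementation.

```c
/* A strictly POSIX.1-1990 program: only kernel/C-library interfaces
 * (open, write, close) of the sort the NT POSIX subsystem covered;
 * no Win32 calls, no POSIX threads, no post-1990 extensions. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from a POSIX.1 program\n";
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return 1;                      /* open(2) failed */
    if (write(fd, msg, sizeof msg - 1) == -1) {
        close(fd);
        return 1;                      /* write(2) failed */
    }
    return close(fd) == -1 ? 1 : 0;
}
```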
Microsoft POSIX subsystem
Technology
552
1,980,807
https://en.wikipedia.org/wiki/Boogie
Boogie is a repetitive, swung-note or shuffle rhythm, "groove" or pattern used in blues which was originally played on the piano in boogie-woogie music. The characteristic rhythm and feel of the boogie was then adapted to guitar, double bass, and other instruments. The earliest recorded boogie-woogie song dates from 1916. By the 1930s, swing bands led by Benny Goodman, Glenn Miller, Tommy Dorsey and Louis Jordan all had boogie hits. By the 1950s, boogie was incorporated into the emerging rockabilly and rock and roll styles. In the late 1980s and the early 1990s country bands released country boogies. Today, the term "boogie" usually refers to dancing to pop, disco, or rock music.
History
The boogie was originally played on the piano in boogie-woogie music and adapted to guitar. Boogie-woogie is a style of blues piano playing characterized by an up-tempo rhythm, a repeated melodic pattern in the bass, and a series of improvised variations in the treble. Boogie-woogie developed from a piano style that grew up in the rough barrelhouse bars of the Southern states, where a piano player performed for the hard-drinking patrons. The origin of the term boogie-woogie is unknown, according to Webster's Third New International Dictionary. The Oxford English Dictionary states that the word is a redoubling of boogie, which was used for rent parties as early as 1913. The term may be derived from Black West African English, from the Sierra Leone term "bogi", which means "to dance"; as well, it may be akin to the phrase "hausa buga", which means "to beat drums". In the late 1920s and early 1930s, the term "could mean anything from a racy style of dance to a raucous party or to a sexually transmitted disease." In his book on boogie-woogie, Left Hand Like God – the Story of Boogie Woogie, Peter Silvester states that, in 1929, "boogie-woogie is used to mean either dancing or music in the city of Detroit". Boogie hit the charts with Pine Top Smith's Pine Top's Boogie in 1929, which garnered the number 20 spot. In the late 1930s, boogie became part of the then-popular swing style, as big bands such as "Glenn Miller, Tommy Dorsey, and Louis Jordan...all had boogie hits." Swing big band audiences expected to hear boogie tunes, because the beat could be used for the then-popular dances such as the jitterbug and the Lindy Hop. As well, country artists began playing boogie-woogie in the late 1930s, when Johnny Barfield recorded "Boogie Woogie". The Delmore Brothers' "Freight Train Boogie" shows how country music and blues were being blended to form the genre which would become known as rockabilly. The Sun Records-era rockabilly sound used "wild country boogie piano" as part of its sound. By the early 1950s, boogie became less popular, but the new rock and roll sometimes incorporated its patterns. In the 1960s, a new form, boogie rock, emerged. However, it did not rely on the same patterns as the earlier styles. By the mid-1970s, the meaning of the term returned to its roots, in a certain sense, as during the disco era "to boogie" meant "to dance in a disco style", with one hit song in particular sung by the Euro disco group Silver Convention, "Get Up and Boogie".
Usage
The boogie groove is often used in rock and roll and country music. In a simple rhythm guitar or accompaniment boogie pattern, sometimes called country boogie, the "B" and "C" notes are played by stretching the fourth finger from the "A" two and three frets up to "B" and "C" respectively on the same string.
This pattern is an elaboration or decoration of the chord or level, and is the same on all the primary triads (I, IV, V), although the dominant, or any chord, may include the seventh on the third beat (see also degree (music)). Boogie patterns are played with a swing or shuffle rhythm and generally follow the "one finger per fret" rule, where the third finger always covers the notes on the third fret, the second finger goes only on the second fret, and so on. Swung notes, or shuffle notes, are a rhythmic device in which the duration of the initial note in a pair is augmented and that of the second is diminished. Also known as "notes inégales", swung notes are widely used in jazz music and other jazz-influenced music such as blues and Western swing. A swing or shuffle rhythm is the rhythm produced by playing repeated pairs of notes in this way.
See also
Boogie-woogie
Boogie rock
References
Blues 1970s slang Rhythm and meter Slang
Boogie
Physics
1,012
1,597,970
https://en.wikipedia.org/wiki/An%20Essay%20on%20the%20Inequality%20of%20the%20Human%20Races
An Essay on the Inequality of the Human Races (originally: Essai sur l'inégalité des races humaines), published between 1853 and 1855, is a racialist work by the French diplomat and writer Arthur de Gobineau. It argues that there are intellectual differences between human races, that civilizations decline and fall when the races are mixed, and that the white race is superior. It is today considered one of the earliest examples, if not the earliest, of scientific racism. Expanding upon Boulainvilliers' use of ethnography to defend the Ancien Régime against the claims of the Third Estate, Gobineau aimed for an explanatory system universal in scope: namely, that race is the primary force determining world events. Using scientific disciplines as varied as linguistics and anthropology, Gobineau divides the human species into three major groupings, white, yellow and black, claiming to demonstrate that "history springs only from contact with the white races." Among the white races, he distinguishes the Aryan race, specifically the Nordic race and Germanic peoples, as the pinnacle of human development, comprising the basis of all European aristocracies. However, inevitable miscegenation led, in his account, to the "downfall of civilizations".
Background
Gobineau was a Legitimist who despaired at France's decline into republicanism and centralization. The book was written after the 1848 revolution, when Gobineau began studying the works of the physiologists Xavier Bichat and Johann Blumenbach. The book was dedicated to King George V of Hanover (1851–66), the last king of Hanover. In the dedication, Gobineau writes that he presents to His Majesty the fruits of his speculations and studies into the hidden causes of the "revolutions, bloody wars, and lawlessness" ("révolutions, guerres sanglantes, renversements de lois") of the age. In a letter to Count Anton von Prokesch-Osten in 1856 he describes the book as based upon "a hatred for democracy and its weapon, the Revolution, which I satisfied by showing, in a variety of ways, where revolution and democracy come from and where they are going."
Gobineau and the Bible
In Vol. I, chapter 11, "Les différences ethniques sont permanentes" ("The ethnic differences are permanent"), Gobineau writes that "Adam is the originator of our white species" ("Adam soit l'auteur de notre espèce blanche"), and creatures not part of the white race are not part of that species. By this Gobineau refers to his division of humans into three main races: white, black, and yellow. The biblical division into Hamites, Semites, and Japhetites is for Gobineau a division within the white race. In general, Gobineau considers the Bible to be a reliable source of actual history, and he was not a supporter of the idea of polygenesis.
Influence
Steven Kale argues that Gobineau's "influence on the development of racial theory has been exaggerated and his ideas have been routinely misconstrued". Gobineau's ideas found an audience in the United States and in German-speaking areas more so than in France, becoming the inspiration for a host of racial theories, for example those of Houston Stewart Chamberlain. "Gobineau was the first to theorize that race was the deciding factor in history and the precursors of Nazism repeated some of his ideas, but his principal arguments were either ignored, deformed, or taken out of context in German racial thought". German historian Joachim C.
Fest, who wrote a biography of Hitler, describes Gobineau, and in particular his negative views on race-mixing as expressed in the essay, as an eminent influence on Adolf Hitler and Nazism. Fest writes that the influence of Gobineau on Hitler can be easily seen and that Gobineau's ideas were used by Hitler in simplified form for demagogic purposes: "Significantly, Hitler simplified Gobineau's elaborate doctrine until it became demagogically usable and offered a set of plausible explanations for all the discontents, anxieties, and crises of the contemporary scene." However, Professor Steven Kale has cautioned that "Gobineau's influence on German racism has been repeatedly overstated". Although cited by groups such as the Nazi Party, the text implicitly criticizes antisemitism and describes Jews in positive terms, presenting them as a superbly forged race whose cohesion had an "ancient Greek-like strength". Implicitly, the people of Judah merely represented a wandering, southern variation of the original Aryan stock. Gobineau stated, "Jews... became a people that succeeded in everything it undertook, a free, strong, and intelligent people, and one which, before it lost, sword in hand, the name of an independent nation, had given as many learned men to the world as it had merchants." This philo-Judaic sentiment was intermixed with the ethnological theories of the period concerning a supposedly Indo-Iranian or Indo-Aryan matrix from which the Jews sprang. In these lines of speculative anthropology, the Jews were interpreted as originally of atypical Indo-European ethnicity: Judaic racial typology was held to have emerged from Iranid–Nordid founders, the details being considered inessential so long as the founders possessed compatibly "white" "Aryan" blood. The latter-day "Hamiticized" Jewish people, the "consensus science" of the time asserted, came into existence from non-Afro-Asiatic Hurrian (or Horite), Jebusite, Amorite or early-Hittite, Mitanni-affiliated racial nuclei. Gobineau's blatant, almost aggressively pro-Jewish attitude, akin to Nietzsche in its sheer admiration and lionization of the Jews as one of the "highest races", proved ideologically awkward for the Nazi propagandists: here Gobineau unmistakably contradicted perhaps the main pillar of Nazi political ideology, which has been described as a neo-Gnostic dualism of "Jewish demonology". Incompatible as it was with Nazi ideology, the Count's fervent philo-Judaism and total lack of antisemitism could only be ignored or minimized away in hypocritical silence. The book continued to influence the white supremacist movement in the United States in the early 21st century.
Translations
Josiah Clark Nott hired Henry Hotze to translate the work into English. Hotze's translation was published in 1856 as The Moral and Intellectual Diversity of Races, with an added essay from Hotze and an appendix from Nott. However, it "omitted the laws of repulsion and attraction, which were at the heart of Gobineau's account of the role of race-mixing in the rise and fall of civilizations". Gobineau was not pleased with the version; he was "particularly concerned that Hotze had ignored his comments on 'American decay generally and upon slaveholding in particular'." The German translation, Versuch über die Ungleichheit der Menschenrassen, first appeared in 1897 and was prepared by Ludwig Schemann, a member of the Bayreuth Circle and "one of the most important racial theorists of imperial and Weimar Germany".
A new English-language version, The Inequality of Human Races, translated by Adrian Collins, was published in Britain and the US in 1915 and remains the standard English-language version. It continues to be republished in the US.

See also

IQ and Global Inequality

References

Bibliography

Gobineau, Arthur (Count Joseph Arthur de Gobineau). The Inequality of Human Races. Translated by Adrian Collins.
Gobineau, Arthur (Count Joseph Arthur de Gobineau). The Moral and Intellectual Diversity of Races, with Particular Reference to Their Respective Influence in the Civil and Political History of Mankind. Translated by Henry Hotze.
Gobineau, Arthur (Count Joseph Arthur de Gobineau). Versuch über die Ungleichheit der Menschenracen. Translated by Ludwig Schemann.

External links

Essai sur l'inégalité des races humaines, in French, at Google Books: Vol. 1, Vol. 2, Vol. 4
Versuch über die Ungleichheit der Menschenracen, trans. by Ludwig Schemann, at Google Books: Vol. 1, Vol. 2, Vol. 3, Vol. 4
The Moral and Intellectual Diversity of Races: With Particular Reference to Their Respective, trans. by H. Hotz, with an appendix by J. C. Nott

1855 books 1855 essays Ethnography Pseudoscience literature Race and intelligence controversy Scientific racism Sociology books White supremacy Works about the theory of history
An Essay on the Inequality of the Human Races
Biology
1,833
8,312,093
https://en.wikipedia.org/wiki/History%20of%20wind%20power
Wind power has been used as long as humans have put sails into the wind. Wind-powered machines used to grind grain and pump water, the windmill and the wind pump, were developed in what is now Iran, Afghanistan, and Pakistan by the 9th century. Wind power was widely available and not confined to the banks of fast-flowing streams, nor did it later require sources of fuel. Wind-powered pumps drained the polders of the Netherlands, and in arid regions such as the American midwest or the Australian outback, wind pumps provided water for livestock and steam engines.

With the development of electric power, wind power found new applications in lighting buildings remote from centrally generated power. Throughout the 20th century, parallel paths developed small wind plants suitable for farms or residences and larger utility-scale wind generators that could be connected to electricity grids for remote use of power.

The first electricity-generating wind turbine was installed by the Austrian Josef Friedländer at the Vienna International Electrical Exhibition in 1883. It was followed in July 1887 by the wind generator of Prof James Blyth of Anderson's College, Glasgow (the precursor of Strathclyde University), whose 10 m high, cloth-sailed turbine at his holiday cottage at Marykirk in Kincardineshire charged accumulators developed by the Frenchman Camille Alphonse Faure to power the cottage's lighting, making it the first house in the world to have its electric power supplied by wind; and, across the Atlantic in Cleveland, Ohio, by a larger and heavily engineered machine designed and constructed in the winter of 1887–1888 by Charles F. Brush, which operated from 1888 until 1900. Both machines are described further in the 19th-century section below.

From 1932, many isolated properties in Australia ran their lighting and electric fans from batteries, charged by a "Freelite" wind-driven generator, producing 100 watts of electrical power from as little wind speed as . The 1973 oil crisis triggered investigations in Denmark and the United States that led to larger utility-scale wind generators that could be connected to electric power grids for remote use of power. By 2008, the U.S.
installed capacity had reached 25.4 gigawatts, and by 2012, the installed capacity was 60 gigawatts. Today, wind-powered generators operate in every size range, from tiny stations for battery charging at isolated residences up to gigawatt-sized offshore wind farms that provide electric power to national electrical networks. By the early 2020s, wind produced 3% of global total primary energy and generated 7% of electricity.

Antiquity

Sailboats and sailing ships have been using wind power for at least 5,500 years, and architects have used wind-driven natural ventilation in buildings since similarly ancient times. The use of wind to provide mechanical power came somewhat later in antiquity. The Babylonian king Hammurabi planned to use wind power for his ambitious irrigation project in the 17th century BC. Hero of Alexandria (Heron) in first-century Roman Egypt described what appears to be a wind-driven wheel to power a machine. His description of a wind-powered organ is not a practical windmill, but was either an early wind-powered toy or a design concept for a wind-powered machine that may or may not have been a working device, as there is ambiguity in the text and issues with the design. Another early example of a wind-driven wheel was the prayer wheel, believed to have been first used in Tibet and China, though there is uncertainty over the date of its first appearance, which could have been circa 400, the 7th century, or later.

Early Middle Ages

Wind-powered machines used to grind grain and pump water, the windmill and wind pump, were developed in what are now Iran, Afghanistan and Pakistan by the 9th century. The first practical windmills were in use in Sistan, a region of Iran bordering Afghanistan, at least by the 9th century and possibly as early as the mid-to-late 7th century. These Panemone windmills were horizontal windmills, with long vertical driveshafts carrying six to twelve rectangular sails covered in reed matting or cloth. They were used to pump water and in the gristmilling and sugarcane industries. The use of windmills became widespread across the Middle East and Central Asia, and later spread to China and India. Vertical windmills were later used extensively in Northwestern Europe to grind flour beginning in the 1180s, and many examples still exist. By 500 AD, windmills were used to pump seawater for salt-making in China and Sicily. Wind-powered automata are known from the mid-8th century: wind-powered statues that "turned with the wind over the domes of the four gates and the palace complex of the Round City of Baghdad". The "Green Dome of the palace was surmounted by the statue of a horseman carrying a lance that was believed to point toward the enemy. This public spectacle of wind-powered statues had its private counterpart in the 'Abbasid palaces where automata of various types were predominantly displayed."

Late Middle Ages

The first windmills in Europe appear in sources dating to the twelfth century. These early European windmills were sunk post mills. The earliest certain reference to a windmill dates from 1185, in Weedley, Yorkshire, although a number of earlier but less certainly dated twelfth-century European sources referring to windmills have also been adduced. While it is sometimes argued that crusaders may have been inspired by windmills in the Middle East, this is unlikely, since the European vertical windmills were of significantly different design than the horizontal windmills of Afghanistan.
Lynn White Jr., a specialist in medieval European technology, asserts that the European windmill was an "independent invention"; he argues that it is unlikely that the Afghanistan-style horizontal windmill had spread as far west as the Levant during the Crusader period. In medieval England, rights to waterpower sites were often confined to the nobility and clergy, so wind power was an important resource for a new middle class. In addition, windmills, unlike water mills, were not rendered inoperable by the freezing of water in winter. By the 14th century, Dutch windmills were in use to drain areas of the Rhine River delta.

18th century

Windmills were used to pump water for salt making on the island of Bermuda, and on Cape Cod during the American Revolution. In Mykonos and other Greek islands, windmills were used to mill flour and remained in use until the early 20th century. Many of them have now been refurbished as dwellings.

19th century

The first wind turbine used for the production of electricity was built in Scotland in July 1887 by Prof James Blyth of Anderson's College, Glasgow (the precursor of the University of Strathclyde). Blyth's 10 m high, cloth-sailed wind turbine was installed in the garden of his holiday cottage at Marykirk in Kincardineshire and was used to charge accumulators developed by the Frenchman Camille Alphonse Faure, to power the lighting in the cottage, thus making it the first house in the world to have its electricity supplied by wind power. Blyth offered the surplus electricity to the people of Marykirk for lighting the main street; however, they turned down the offer, as they thought electricity was "the work of the devil." Although he later built a wind turbine to supply emergency power to the local Lunatic Asylum, Infirmary and Dispensary of Montrose, the invention never really caught on, as the technology was not considered to be economically viable.

Across the Atlantic, in Cleveland, Ohio, a larger and heavily engineered machine was designed and constructed between 1887 and 1888 by Charles F. Brush. It was built by his engineering company at his home and operated from 1888 until 1900. The Brush wind turbine had a rotor 17 m (56 ft) in diameter and was mounted on an 18 m (60 ft) tower. Although large by today's standards, the machine was rated at only 12 kW; it turned relatively slowly, since it had 144 blades. The connected dynamo was used either to charge a bank of batteries or to operate up to 100 incandescent light bulbs, three arc lamps, and various motors in Brush's laboratory. The machine fell into disuse after 1900, when electricity became available from Cleveland's central stations, and was abandoned in 1908.

In 1891 the Danish scientist Poul la Cour constructed a wind turbine to generate electricity, which was used to produce hydrogen by electrolysis, to be stored for use in experiments and to light the Askov Folk High School. He later solved the problem of producing a steady supply of power by inventing a regulator, the Kratostate, and in 1895 converted his windmill into a prototype electrical power plant that was used to light the village of Askov. In Denmark there were about 2,500 windmills by 1900, used for mechanical loads such as pumps and mills and producing an estimated combined peak power of about 30 MW. In the American midwest between 1850 and 1900, a large number of small windmills, perhaps six million, were installed on farms to operate irrigation pumps.
Firms such as Star, Eclipse, Fairbanks-Morse, and Aeromotor became famed suppliers in North and South America.

20th century

Development in the 20th century can usefully be divided into two periods: 1900–1973, when widespread use of individual wind generators competed against fossil fuel plants and centrally generated electricity, and 1973 onward, when the oil price crisis spurred investigation of non-petroleum energy sources.

1900–1973

Danish development

In Denmark wind power was an important part of decentralized electrification in the first quarter of the 20th century, partly because of Poul la Cour's pioneering work at Askov from 1891. By 1908 there were 72 wind-driven electric generators of 5 kW to 25 kW. The largest machines were on 24 m (79 ft) towers with four-bladed 23 m (75 ft) diameter rotors. In 1957 Johannes Juul installed a 24 m diameter wind turbine at Gedser, which ran from 1957 until 1967. This was a three-bladed, horizontal-axis, upwind, stall-regulated turbine similar to those now used for commercial wind power development.

Farm power and isolated plants

In 1927 the brothers Joe and Marcellus Jacobs opened a factory, Jacobs Wind, in Minneapolis to produce wind turbine generators for farm use. These would typically be used for lighting or battery charging on farms beyond the reach of central-station electricity and distribution lines. In 30 years the firm produced about 30,000 small wind turbines, some of which ran for many years in remote locations in Africa and on the Richard Evelyn Byrd expedition to Antarctica. Many other manufacturers produced small wind turbine sets for the same market, including companies called Wincharger, Miller Airlite, Universal Aeroelectric, Paris-Dunn, Airline and Winpower.

In 1931 the Darrieus wind turbine was invented, its vertical axis providing a different mix of design tradeoffs from the conventional horizontal-axis wind turbine. The vertical orientation accepts wind from any direction with no need for adjustment, and the heavy generator and gearbox equipment can rest on the ground instead of atop a tower.

By the 1930s, windmills were widely used to generate electricity on farms in the United States where distribution systems had not yet been installed. Used to replenish battery storage banks, these machines typically had generating capacities of a few hundred watts to several kilowatts. Besides providing farm power, they were also used for isolated applications such as electrifying bridge structures to prevent corrosion. In this period, high-tensile steel was cheap, and windmills were placed atop prefabricated open steel lattice towers. The most widely used small wind generator produced for American farms in the 1930s was a two-bladed horizontal-axis machine manufactured by the Wincharger Corporation. It had a peak output of 200 watts. Blade speed was regulated by curved air brakes near the hub that deployed at excessive rotational velocities. These machines were still being manufactured in the United States during the 1980s. In 1936, the U.S. started a rural electrification project that killed the natural market for wind-generated power, since network power distribution provided a farm with more dependable usable energy for a given amount of capital investment. In Australia, the Dunlite Corporation built hundreds of small wind generators to provide power at isolated postal service stations and farms. These machines were manufactured from 1936 until 1970.
Utility-scale turbines

A forerunner of modern horizontal-axis utility-scale wind generators was the WIME D-30, in service in Balaklava, near Yalta, USSR from 1931 until 1942. This was a 100 kW generator with a three-bladed 30 m (100 ft) diameter rotor on a steel lattice tower of the same height, connected to the local 6.3 kV distribution system. It was reported to have an annual load factor of 32 per cent, not much different from current wind machines.

In 1941 the world's first megawatt-size wind turbine was connected to the local electrical distribution system on the mountain known as Grandpa's Knob in Castleton, Vermont, United States. It was designed by Palmer Cosslett Putnam and manufactured by the S. Morgan Smith Company. This 1.25 MW Smith–Putnam turbine operated for 1,100 hours before a blade failed at a known weak point, which had not been reinforced owing to wartime material shortages. No similar-sized unit was to repeat this "bold experiment" for about forty years.

Fuel-saving turbines

During the Second World War, small wind generators were used on German U-boats to recharge submarine batteries as a fuel-conserving measure. In 1946 the lighthouse and residences on the island of Neuwerk were partly powered by an 18 kW wind turbine 15 metres in diameter, to economize on diesel fuel. This installation ran for around 20 years before being replaced by a submarine cable to the mainland. The Station d'Etude de l'Energie du Vent at Nogent-le-Roi in France operated an experimental 800 kVA wind turbine from 1956 to 1966.

1973–2000

US development

From 1974 through the mid-1980s the United States government worked with industry to advance the technology and enable large commercial wind turbines. The NASA wind turbines were developed under a program to create a utility-scale wind turbine industry in the U.S. With funding from the National Science Foundation and later the United States Department of Energy (DOE), a total of 13 experimental wind turbines were put into operation, in four major wind turbine designs. This research and development program pioneered many of the multi-megawatt turbine technologies in use today, including steel tube towers, variable-speed generators, composite blade materials and partial-span pitch control, as well as aerodynamic, structural, and acoustic engineering design capabilities. The large wind turbines developed under this effort set several world records for diameter and power output. The MOD-2 wind turbine cluster of three turbines produced 7.5 megawatts of power in 1981. In 1987, the MOD-5B was the largest single wind turbine operating in the world, with a rotor diameter of nearly 100 meters and a rated power of 3.2 megawatts. It demonstrated an availability of 95 percent, an unparalleled level for a new first-unit wind turbine. The MOD-5B had the first large-scale variable-speed drive train and a sectioned, two-blade rotor that enabled easy transport of the blades. The 4 megawatt WTS-4 held the world record for power output for over 20 years. Although the later units were sold commercially, none of these two-bladed machines were ever put into mass production. When oil prices declined by a factor of three from 1980 through the early 1990s, many turbine manufacturers, both large and small, left the business. The commercial sales of the NASA/Boeing Mod-5B, for example, came to an end in 1987 when Boeing Engineering and Construction announced they were "planning to leave the market because low oil prices are keeping windmills for electricity generation uneconomical."
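The 32 per cent annual load factor quoted above for the Balaklava machine is simply delivered energy divided by the energy the turbine would have produced running continuously at its rating. A minimal sketch of the arithmetic follows; the annual energy figure is a hypothetical illustration chosen to reproduce the quoted factor, not a recorded value:

```c
#include <stdio.h>

int main(void)
{
    double rated_kw = 100.0;        /* WIME D-30 rating, from the text */
    double hours_per_year = 8760.0;
    double energy_kwh = 280000.0;   /* hypothetical annual output */

    /* load (capacity) factor = actual energy / (rated power * time) */
    double load_factor = energy_kwh / (rated_kw * hours_per_year);
    printf("load factor = %.0f%%\n", load_factor * 100.0); /* prints ~32% */
    return 0;
}
```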
Later, in the 1980s, California provided tax rebates for wind power. These rebates funded the first major use of wind power for utility electricity. These machines, gathered in large wind parks such as at Altamont Pass, would be considered small and uneconomic by modern wind power development standards.

Danish development

A major change took place in 1978, when the world's first multi-megawatt wind turbine was constructed. It pioneered many technologies used in modern wind turbines and allowed Vestas, Siemens and others to get the parts they needed. Especially important was the novel blade construction, developed with help from German aeronautics specialists. The power plant was capable of delivering 2 MW and had a tubular tower and three pitch-controlled blades. It was built by the teachers and students of the Tvind school. Before completion these "amateurs" were much ridiculed. The turbine still runs today and looks almost identical to the most modern machines. Danish commercial wind power development stressed incremental improvements in capacity and efficiency based on extensive serial production of turbines, in contrast with development models requiring large steps in unit size based primarily on theoretical extrapolation. A practical consequence is that all commercial wind turbines resemble the Danish model: a lightweight three-blade upwind design.

All major horizontal-axis turbines today rotate the same way (clockwise) to present a coherent view. Early turbines, however, rotated counter-clockwise like the old windmills, but a shift occurred from 1978 onward. The individualist-minded blade supplier Økær decided to change direction in order to be distinguished from the collective Tvind and their small wind turbines. Some of the blade customers were companies that later evolved into Vestas, Siemens, Enercon and Nordex. Public demand required that all turbines rotate the same way, and the success of these companies made clockwise the new standard.

Self-sufficiency and back-to-the-land

In the 1970s many people began to desire a self-sufficient lifestyle. Solar cells were too expensive for small-scale electrical generation, so some turned to windmills. At first they built ad hoc designs using wood and automobile parts. Most people discovered that a reliable wind generator is a moderately complex engineering project, well beyond the ability of most amateurs. Some began to search for and rebuild farm wind generators from the 1930s, of which Jacobs Wind Electric Company machines were especially sought after. Hundreds of Jacobs machines were reconditioned and sold during the 1970s. Following experience with reconditioned 1930s wind turbines, a new generation of American manufacturers started building and selling small wind turbines not only for battery charging but also for interconnection to electricity networks. An early example was Enertech Corporation of Norwich, Vermont, which began building 1.8 kW models in the early 1980s. In the 1990s, as aesthetics and durability became more important, turbines were placed atop tubular steel or reinforced concrete towers. Small generators are connected to the tower on the ground, then the tower is raised into position. Larger generators are hoisted into position atop the tower, and there is a ladder or staircase inside the tower to allow technicians to reach and maintain the generator while protected from the weather.
21st century

As the 21st century began, fossil fuel was still relatively cheap, but rising concerns over energy security, global warming, and eventual fossil fuel depletion led to an expansion of interest in all available forms of renewable energy. The fledgling commercial wind power industry began expanding at a robust growth rate of about 25% per year, driven by the ready availability of large wind resources and by falling costs due to improved technology and wind farm management. The steady run-up in oil prices after 2003 led to increasing fears that peak oil was imminent, further increasing interest in commercial wind power. Even though wind power generates electricity rather than liquid fuels, and thus is not an immediate substitute for petroleum in most applications (especially transport), fears over petroleum shortages only added to the urgency to expand wind power. Earlier oil crises had already caused many utility and industrial users of petroleum to shift to coal or natural gas, and wind power showed potential for replacing natural gas in electricity generation on a cost basis. By 2021 wind energy produced 4,872 terawatt-hours, 2.8% of total primary energy production and 6.6% of total electricity production.

Technological innovations continue to drive new developments in the application of wind power. By 2015, the largest wind turbines were 8 MW Vestas V164 machines for offshore use. By 2014, over 240,000 commercial-sized wind turbines were operating in the world, producing 4% of the world's electricity. Total installed capacity exceeded 336 GW in 2014, with China, the U.S., Germany, Spain and Italy leading in installations.

In the United States, wind energy received a boost from the government's production tax credit (PTC). The PTC pays a benefit, ranging from 1 cent to 1.9 cents per kWh, for a period of 10 years from the date of construction; the credit was intended as temporary but was renewed 13 times, though it has since expired and has not been renewed as of 2022. Partly as a result of such support, wind came to supply over 8% of the United States' power, and reached a record peak of a 24.5% share of power. From a pricing standpoint, General Electric (a producer of wind-turbine technology) noted that rising steel prices, a result of inflation, were detrimentally impacting the supply of wind equipment. In some states, such as Nebraska, there has been local pushback, with community groups rejecting wind energy projects. Large transmission projects are required to deliver wind energy to markets where the power is needed; in Colorado, Xcel Energy approved a $1.7 billion project for 560 miles of power transmission lines. In Europe, wind has faced similar pressure from global steel prices, in addition to pressure resulting from Russia's war in Ukraine; as a result, European wind original equipment manufacturers (OEMs) have faced profitability problems, with market share moving to China. In the United States, the Department of Energy estimates that 60% to 75% of towers and 30% to 50% of blades and hubs are produced domestically.

Floating wind-turbine technology

Offshore wind power began to expand beyond fixed-bottom, shallow-water turbines late in the first decade of the 2000s. The world's first operational deep-water large-capacity floating wind turbine, Hywind, became operational in the North Sea off Norway in late 2009, at a cost of some 400 million kroner (around US$62 million) to build and deploy.
These floating turbines represent a very different construction technology, closer to floating oil rigs than to the traditional fixed-bottom, shallow-water monopile foundations used in the other large offshore wind farms to date. By late 2011, Japan announced plans to build a multiple-unit floating wind farm, with six 2-megawatt turbines, off the Fukushima coast of northeast Japan, where the 2011 tsunami and nuclear disaster had created a scarcity of electric power. The initial evaluation phase was due to be completed in 2016, and Japan planned at the time to build as many as 80 floating wind turbines off Fukushima by 2020, at a cost of some 10–20 billion yen. Ultimately, however, approximately 60 billion yen was spent by the Japanese government on test wind projects at Fukushima between November 2013 and December 2020, when it was decided that a combination of technical issues and lack of commercial viability justified closing and decommissioning the structures as of April 2021.

Airborne turbines

Airborne wind energy systems use airfoils or turbines supported in the air by buoyancy or by aerodynamic lift. The purpose is to eliminate the expense of tower construction and to allow extraction of wind energy from the steadier, faster winds higher in the atmosphere. As yet no grid-scale plants have been constructed, though many design concepts have been demonstrated.

See also

Wind power in Ohio – History
Growian – 1980s experimental turbine, at the time the largest ever built
Timeline of solar cells
Energy development
Outline of energy
Smart grid
Timeline of sustainable energy research 2020–present#Wind power
List of years in science

Notes

The terms "horizontal" and "vertical" refer to the plane of rotation of the sails. Modern wind turbines are generally referred to by the plane of rotation of the main axle (windshaft). Thus a horizontal mill may also be described as a "vertical-axis windmill" and a vertical mill may also be described as a "horizontal-axis windmill".

References

External links

FAO guide on wind power in history

Wind power
History of wind power
Technology
5,279
57,594,622
https://en.wikipedia.org/wiki/ESSA-7
ESSA-7 (or TOS-E) was a spin-stabilized operational meteorological satellite. Its name was derived from that of its oversight agency, the Environmental Science Services Administration (ESSA).

Launch

ESSA-7 was launched on August 16, 1968, at 11:31 UTC, atop a Delta rocket from Vandenberg Air Force Base, California, USA. The spacecraft had a mass of at the time of launch. ESSA-7 had an inclination of 101.72° and orbited the Earth once every 114.9 minutes. Its perigee was and its apogee was .

References

Spacecraft launched in 1968 Weather satellites of the United States Television Infrared Observation Satellites
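The quoted period pins down the size of the orbit through Kepler's third law. A minimal sketch of that calculation, using the standard value of Earth's gravitational parameter; the altitude estimate assumes a near-circular orbit, since the perigee and apogee figures are missing above:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double mu = 398600.4418;     /* Earth's GM, km^3/s^2 */
    const double T  = 114.9 * 60.0;    /* orbital period in seconds, from the article */
    const double r_earth = 6371.0;     /* mean Earth radius, km */

    /* Kepler's third law: T = 2*pi*sqrt(a^3/mu), so a = cbrt(mu*(T/(2*pi))^2) */
    double a = cbrt(mu * pow(T / (2.0 * PI), 2.0));
    printf("semi-major axis: %.0f km\n", a);           /* ~7830 km */
    printf("mean altitude:   %.0f km\n", a - r_earth); /* ~1460 km */
    return 0;
}
```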
ESSA-7
Astronomy
146
17,176,241
https://en.wikipedia.org/wiki/Pratt%20%26%20Whitney%20X-1800
The Pratt & Whitney X-1800 (later enlarged as the XH-2600) was an H-block aircraft engine project developed between 1938 and 1940, which was cancelled with only one example being built.

Design and development

The X-1800 was a water-cooled 24-cylinder H-block of 2,240 in³ displacement; this was later expanded to 2,600 in³. It was intended to be used in the Vultee XP-54, Curtiss-Wright XP-55 Ascender, Northrop XP-56, Lockheed XP-49, and Lockheed XP-58 Chain Lightning. Projected performance was 1,800 to 2,200 hp (1,340 to 1,640 kW), with a turbocharger to secure high-altitude performance. The designation came from the intended power rating rather than the more usual cubic-inch engine displacement figure. The target date for series production was 1942. In 1940, however, performance on the test bench did not continue to improve, demonstrating a need for considerable additional development effort. Pratt & Whitney subsequently ended development of the X-1800 in October 1940, with only one example built, to concentrate on radial engines.

Intended applications

Curtiss-Wright XP-55 Ascender
Lockheed XP-49
Lockheed XP-58 Chain Lightning
Northrop N-1
Northrop XP-56
Vultee XP-54

Specifications (X-1800)

See also

References

Notes

Bibliography

External links

Photo of the XH-2600 at enginehistory.org

X-1800 Sleeve valve engines 1940s aircraft piston engines Abandoned military aircraft engine projects of the United States
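As a check on the figures quoted above, the unit conversions are straightforward; a minimal sketch of the arithmetic:

```c
#include <stdio.h>

int main(void)
{
    const double KW_PER_HP  = 0.7457;    /* mechanical horsepower to kilowatts */
    const double L_PER_CUIN = 0.0163871; /* cubic inches to litres */

    printf("1,800 hp    = %.0f kW\n", 1800.0 * KW_PER_HP);    /* ~1,342 kW */
    printf("2,200 hp    = %.0f kW\n", 2200.0 * KW_PER_HP);    /* ~1,641 kW */
    printf("2,240 cu in = %.1f L\n",  2240.0 * L_PER_CUIN);   /* ~36.7 L */
    printf("2,600 cu in = %.1f L\n",  2600.0 * L_PER_CUIN);   /* ~42.6 L */
    return 0;
}
```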
Pratt & Whitney X-1800
Technology
322
31,554,613
https://en.wikipedia.org/wiki/ENI%20number
An ENI number (European Number of Identification or European Vessel Identification Number) is a registration for ships capable of navigating on inland European waters. It is a unique, eight-digit identifier that is attached to a hull for its entire lifetime, independent of the vessel's current name or flag. ENI was introduced by the Inland Transport Committee of the United Nations Economic Commission for Europe at its meeting of 11–13 October 2006 in Geneva. It is based on the Rhine vessel certification system previously used for ships navigating the Rhine, and is comparable to the IMO ship identification number.

Format

The ENI number consists of eight Arabic numerals. The first three digits identify the competent authority that assigned the number (see "List of prefixes" below) and the last five digits are a serial number. Ships which already have a vessel number in accordance with the Rhine Inspection Rules receive an ENI beginning with "0" followed by the seven-digit Rhine number. A vessel which has been issued an IMO number may only receive an ENI number if it has the appropriate certifications for inland water travel; its ENI will begin with "9" followed by its seven-digit IMO number. The ENI number is transmitted by Inland Automatic Identification System transponders.

Requirements

Not all European vessels are required to carry an ENI number. As of April 2007, a vessel must have an ENI if it operates on inland waterways and meets any of the following criteria:

it is over in length;
it is greater than in volume;
it is a tug or push boat that operates with a qualifying vessel;
it is a passenger ship; or
it is a floating installation/equipment.

If a vessel is issued an ENI, this number must be displayed on the sides and stern of the vessel.

List of prefixes

References

External links

Ship identification numbers Country codes Ship registration Water transport in Europe
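A minimal sketch of the format rules described above, written in C for illustration; the function name, error handling, and the example number are invented for this sketch and are not part of any standard:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Split an eight-digit ENI into its three-digit authority prefix and
 * five-digit serial, flagging the special leading digits described above:
 * "0" for Rhine-certificate carry-overs, "9" for IMO-derived numbers. */
static int parse_eni(const char *eni)
{
    size_t i, len = strlen(eni);

    if (len != 8)
        return -1;                        /* an ENI is exactly eight digits */
    for (i = 0; i < len; i++)
        if (!isdigit((unsigned char)eni[i]))
            return -1;

    printf("authority prefix: %.3s, serial: %s\n", eni, eni + 3);
    if (eni[0] == '0')
        printf("carried over from a Rhine vessel certificate number\n");
    else if (eni[0] == '9')
        printf("derived from the vessel's seven-digit IMO number\n");
    return 0;
}

int main(void)
{
    parse_eni("02315467");  /* hypothetical Rhine-derived example */
    return 0;
}
```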
ENI number
Mathematics
381
31,257
https://en.wikipedia.org/wiki/Tadoma
Tadoma is a method of communication utilized by deafblind individuals, in which the listener places their thumb on the speaker's lips and their fingers along the jawline. The middle three fingers often fall along the speaker's cheeks, with the little finger picking up the vibrations of the speaker's throat. It is sometimes referred to as tactile lipreading, as the listener feels the movement of the lips, the vibrations of the vocal cords, the expansion of the cheeks and the warm air produced by nasal phonemes such as 'N' and 'M'. Hand positioning can vary, and the method is sometimes also used by hard-of-hearing people to supplement their remaining hearing. In some cases, especially if the speaker knows sign language, the deafblind listener may use the Tadoma method with one hand on the speaker's face and their other hand on the speaker's signing hand to hear the words. In this way, the two methods reinforce each other, increasing the chances of the listener understanding the speaker. The Tadoma method can also help a deafblind person retain speech skills they would otherwise lose, and can, in special cases, allow deafblind people to acquire entirely new words. It is a difficult method to learn and use, and is rarely used nowadays. However, a small number of deafblind people still use the Tadoma method in everyday communication.

History

The Tadoma method was invented by American teacher Sophie Alcorn and developed at the Perkins School for the Blind in Massachusetts. It is named after the first two children to whom it was taught: Winthrop "Tad" Chapman and Oma Simpson. It was hoped that the students would learn to speak by trying to reproduce what they felt on the speaker's face and throat while touching their own face. Helen Keller was a famous user of the method.

See also

Tactile signing

References

External links

Deafblindness Deaf education Human communication
Tadoma
Biology
395
434,512
https://en.wikipedia.org/wiki/Land%20Transport%20Authority
The Land Transport Authority (LTA) is a statutory board under the Ministry of Transport of the Government of Singapore.

History

Incorporation

The Land Transport Authority (LTA) was established on 1 September 1995, formed by the merger of various public-sector entities: the Registry of Vehicles, the Mass Rapid Transit Corporation, the Roads & Transportation Division of the Public Works Department and the Land Transportation Division of the former Ministry of Communications.

1996 Land Transport White Paper

On 2 January 1996, the Land Transport Authority published the 1996 Land Transport White Paper, titled "A World Class Land Transport System". It outlined the government's plans: changes to existing schemes were proposed, and new schemes were introduced across various transport sectors. This included the Electronic Road Pricing (ERP) scheme, which eventually became ubiquitous in the city state.

1996 Rail Financing Framework

The 1996 Rail Financing Framework was a scheme that set out the financing framework of the rail transport system. The white paper stated that the rail transport system would eventually be run on the basis of a partnership, in which the government and its regulatory authority would provide the assets and infrastructure (which remain fully owned by the regulatory authority), with commuters paying the operating costs and operators extracting efficiency dividends within standards and fares set by the regulatory authority. The framework allowed for an opening up of the rail transport market, with the operational aspects of the industry no longer tied to the authorities, giving more autonomy to the incumbent operator and allowing new operators to enter the market. This also laid the foundation for the restructuring and flotation of SMRT Corporation, previously a state-owned incumbent operator under the name Mass Rapid Transit Corporation, in 2000. The framework was revised in 2008 as the New Rail Financing Framework (NRFF), under which the regulatory authority re-assumed full ownership of all rail assets, whose ownership and maintenance had previously been the responsibility of the individual operators.

Changes to public transport

To cope with the increasing number of commuters in Singapore, the Land Transport Authority introduces changes over time.

Rail

LTA is responsible for the development of the rapid transit system and the expansion of the rail network. It aims to double the rail network by 2030. Since 2008, LTA has increased the length of Singapore's rail network from 138 km to about 180 km with the opening of the Boon Lay Extension in 2009, the Circle Line from 2009 to 2011 and the Circle Line Extension in 2012. The Downtown Line and Thomson–East Coast Line are underway towards completion, with the Cross Island Line and Jurong Region Line under construction. Half-height platform screen doors were installed in all 36 elevated stations in 2012 for the safety of passengers and to reduce delays in train service from track intrusions. High-volume low-speed (HVLS) fans were also installed at all elevated stations between 1 June 2012 and 6 January 2013.
Bus

LTA took on the role of central bus network planner from 2009, working with communities and the bus operators, SBS Transit and SMRT Buses, to identify areas for bus improvements, shifting the focus to placing the commuter at the centre and taking a holistic approach to planning the bus network, taking into consideration developments in the Rapid Transit System (RTS) network and other transport infrastructure. Commuter feedback is taken into account, and changes are announced in monthly updates; this has been carried out through the Bus Services Enhancement Programme (BSEP). Under the BSEP, about 80 new services are being introduced and 1,000 buses are being added over five years. Quality of Service (QoS) standards have also been tightened to reduce waiting times and crowding: since 2015, services with increased loads have run every 10 minutes or less during weekday peak hours. Feeder bus services have become more frequent too, with 95% of bus services now running at intervals of 10 minutes or less during weekday peak periods, tightened from 85%. Announced in 2014, the Bus Contracting Model (BCM), which took effect on 1 September 2016, saw LTA assume full ownership of all bus assets in Singapore.

Road projects

Investment in road projects ensures that the economy will be ably supported by a strong and ever-improving transport infrastructure and a coordinated system. One such project is the introduction of the Parking Guidance System (PGS) in the city and HarbourFront area to guide drivers to the nearest parking facility with available spaces, reducing the need for vehicles to cruise around looking for empty parking spaces. As part of its investment in road projects, LTA will also be expanding the EMAS signage and upgrading the oldest EMAS signage on the expressways. To improve road safety, LTA implemented a variety of road engineering measures, such as adding pedestrian crossing lines with enhanced dash markings, traffic-calming markings and "pedestrian crossing ahead" road markings in more locations in 2009. "Your Speed Signs", electronic signs displaying the speed of a passing vehicle, were also introduced so that motorists could be more aware of their speed and would be more likely to keep to the speed limit. Road studs which flash in tandem with the green-man signal at traffic junctions were also installed at more locations to alert motorists to stop for crossing pedestrians. LTA is not responsible for reminding vehicle owners to scan their Autopass card when exiting Singapore.

References

External links

Land Transport Masterplan 2013
ONE.MOTORING – information portal for Singapore motorists
MyTransport.SG – a portal providing information and eServices for all land transport users
Pay ERP charges via credit card
Singapore Public Transport portal
LTA latest press releases

Transport in Singapore Statutory boards of the Singapore Government Intermodal transport authorities 1995 establishments in Singapore Rail accident investigators Government agencies established in 1995 Regulation in Singapore
Land Transport Authority
Technology
1,142
14,744,934
https://en.wikipedia.org/wiki/Nokia%202110
The Nokia 2110 is a cellular phone made by the Finnish telecommunications firm Nokia, first announced and released in January 1994. It is the first Nokia phone with the famous Nokia tune ringtone. The phone can send and receive SMS messages, and lists the last ten dialed calls, ten received calls and ten missed calls. At the time of the phone's release, it was smaller than others at its price and had a bigger display, so it became very popular. It also features a "revolutionary" new user interface with two dynamic softkeys, which would later lead to the development of the Navi-key on its successor, the Nokia 6110, as well as the Series 20 interface. A later version, the Nokia 2110i, released in 1996, comes with more memory and a protruding antenna knob. A variant model, the Nokia 2140 (more popularly called the Nokia Orange), was the launch handset on the Orange network (now EE). It differed in that it was designed to work on the 1800 MHz frequency then utilised by Orange, and had a slightly less bulbous design. A North American model, the Nokia 2190, was also available; it is one of the earlier phones available on the Pacific Bell Mobile Services and Powertel GSM 1900 networks, newly launched in 1995. A version for Digital AMPS was produced as the Nokia 2120. Another variant, the Nokia C6, was introduced in 1997 for Germany's analogue C-Netz.

See also

HP OmniGo 700LX, a palmtop PC with built-in Nokia 2110

References

External links

Full phone specifications
A Nokia 2110 User Manual

2110 Mobile phones introduced in 1994
Nokia 2110
Technology
343
1,039,736
https://en.wikipedia.org/wiki/Isotone
Two nuclides are isotones if they have the same neutron number N but different proton numbers Z. For example, boron-12 and carbon-13 nuclei both contain 7 neutrons, and so are isotones. Similarly, 36S, 37Cl, 38Ar, 39K, and 40Ca nuclei are all isotones with N = 20, because they all contain 20 neutrons. Despite its similarity to the Greek for "same stretching", the term was formed by the German physicist K. Guggenheimer by changing the "p" of "isotope" (for "proton") to "n" (for "neutron"). The largest numbers of observationally stable nuclides exist for N = 50 (five: 86Kr, 88Sr, 89Y, 90Zr, 92Mo – noting also the primordial radionuclide 87Rb) and N = 82 (six: 138Ba, 139La, 140Ce, 141Pr, 142Nd, 144Sm – noting also the primordial radionuclide 136Xe). Neutron numbers for which there are no stable isotones are 19, 21, 35, 39, 45, 61, 89, 115, 123, and 127 or more (though 21, 142, 143, 146, and perhaps 150 have primordial radionuclides). In contrast, the proton numbers for which there are no stable isotopes are 43, 61, and 83 or more (83, 90, 92, and perhaps 94 have primordial radionuclides). This is related to nuclear magic numbers, the numbers of nucleons forming complete shells within the nucleus, e.g. 2, 8, 20, 28, 50, 82, and 126. No more than one observationally stable nuclide has the same odd neutron number, except for 1 (2H and 3He), 5 (9Be and 10B), 7 (13C and 14N), 55 (97Mo and 99Ru), and 107 (179Hf and 180mTa). In contrast, all even neutron numbers from 6 to 124, except 84 and 86, have at least two observationally stable nuclides. Neutron numbers for which there is a stable nuclide and a primordial radionuclide are 27 (50V), 65 (113Cd), 81 (138La), 84 (144Nd), 85 (147Sm), 86 (148Sm), 105 (176Lu), and 126 (209Bi). Neutron numbers for which there are two primordial radionuclides are 88 (151Eu and 152Gd) and 112 (187Re and 190Pt). The neutron numbers which have only one stable nuclide (compare: monoisotopic element for the proton numbers) are: 0, 2, 3, 4, 9, 11, 13, 15, 17, 23, 25, 27, 29, 31, 33, 37, 41, 43, 47, 49, 51, 53, 57, 59, 63, 65, 67, 69, 71, 73, 75, 77, 79, 81, 83, 84, 85, 86, 87, 91, 93, 95, 97, 99, 101, 103, 105, 109, 111, 113, 117, 119, 121, 125, 126, and the neutron numbers which have only one significant naturally-abundant nuclide (compare: mononuclidic element for the proton numbers) are: 0, 2, 3, 4, 9, 11, 13, 15, 17, 21, 23, 25, 29, 31, 33, 37, 41, 43, 47, 49, 51, 53, 57, 59, 63, 67, 69, 71, 73, 75, 77, 79, 83, 87, 91, 93, 95, 97, 99, 101, 103, 109, 111, 113, 117, 119, 121, 125, 142, 143, 146.

See also

Isotopes are nuclides having the same number of protons: e.g. carbon-12 and carbon-13.
Isobars are nuclides having the same mass number (i.e. sum of protons plus neutrons): e.g. carbon-12 and boron-12.
Nuclear isomers are different excited states of the same type of nucleus. A transition from one isomer to another is accompanied by emission or absorption of a gamma ray, or by the process of internal conversion. (Not to be confused with chemical isomers.)

Notes

Nuclear physics
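Since N = A − Z, grouping nuclides into isotones is a one-line computation. A minimal sketch using the article's own examples; the struct layout is invented purely for illustration:

```c
#include <stdio.h>

struct nuclide { const char *name; int Z, A; };

int main(void)
{
    /* Examples from the article: the first five all have N = A - Z = 20;
     * 12B and 13C form the N = 7 isotone pair mentioned in the lead. */
    struct nuclide n[] = {
        {"36S", 16, 36}, {"37Cl", 17, 37}, {"38Ar", 18, 38},
        {"39K", 19, 39}, {"40Ca", 20, 40}, {"12B", 5, 12}, {"13C", 6, 13},
    };
    int i, target = 20;

    printf("isotones with N = %d:", target);
    for (i = 0; i < (int)(sizeof n / sizeof n[0]); i++)
        if (n[i].A - n[i].Z == target)   /* neutron number N = A - Z */
            printf(" %s", n[i].name);
    printf("\n");                        /* prints: 36S 37Cl 38Ar 39K 40Ca */
    return 0;
}
```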
Isotone
Physics
944
2,589,664
https://en.wikipedia.org/wiki/KTF
KT Freetel Co., Ltd. (Korea Telecom Freetel) was a South Korean telecommunications firm, now merged into Korea Telecom, specializing in cellular (mobile) phones. Since 1999, it had also developed extensive overseas operations. The company is credited with developing customized ring-back tones. On 1 June 2009, KTF was merged with KT. In 2003, KTF received an order from PT Mobile-8 Telecom of Indonesia for a comprehensive consulting service. KTF also signed a contract for the export of its CDMA network management system and invested $10 million in the Indonesian mobile provider. KTF commercialized the world's first nationwide HSDPA service, under the brand "SHOW", on 1 March 2007. In India, the firm completed the first stage of its contract with Reliance, worth $2.65 million, for CDMA network construction. KTF also held a 25% stake in CEC Mobile of China, after investing a sum of 4.5 billion won in 2002. The two major shareholders of KTF were KT (52.99%) and NTT DoCoMo (10.03%). KTF sponsored a professional StarCraft team.

Merged into KT

KT officially declared the merger on January 14, 2009. The KFTC approved the merger on February 25, 2009. The Korea Communications Commission finally approved the merger on March 18, 2009. A special meeting of shareholders was held on March 27, 2009. KT completed the merger on May 31, 2009.

See also

List of South Korean companies
List of telephone operating companies
Economy of South Korea
SK Telecom
LG Telecom
KT
U Mobile
Members of the Conexus Mobile Alliance

KT Group
KTF
Technology
339
38,621,100
https://en.wikipedia.org/wiki/Nihamanchi
Nihamanchï is a beer brewed from cassava (Manihot esculenta) by indigenous peoples of South America. It is also known as nihamanci, nijimanche, or nijiamanchi, and is related to chicha. Jívaro women make it by chewing manioc tubers, placing them in large jars, and allowing them to ferment in their saliva. Nijimanche is nutritious, and adults drink 4–5 quarts a day. The same beverage is made by the Jivaro in Ecuador and Peru (the Shuara, Achuara, Aguaruna and Mayna people); they call it nijimanche. As Michael Harner describes it:

The sweet manioc beer (nihamanci or nijiamanchi), is prepared by first peeling and washing the tubers in the stream near the garden. Then the water and manioc are brought to the house, where the tubers are cut up and put in a pot to boil. ... The manioc is then mashed and stirred to a soft consistency with the aid of a special wooden paddle. While the woman stirs the mash, she chews handfuls of [it] and spits them back into the pot, a process that may take half an hour or longer. After the mash has been prepared, it is transferred to a beer storage jar and left to ferment. ... The resultant liquid tastes somewhat like a pleasingly alcoholic buttermilk and is most refreshing. The Jivaros consider it to be far superior to plain water, which they drink only in emergencies.

The Tiriós and Erwarhoyanas, Indian tribes from northern Brazil and Suriname, make a beverage called sakurá with the sweet variety of cassava. The Yagua people brew a similar beverage, which they call masato.

See also

List of saliva-fermented beverages

Notes

References

Arnalot, José. Lo que los Achuar me han enseñado. Quito: Abya-Yala, 1996.
Howell, Edward. Enzyme Nutrition: The Food Enzyme Concept. Avery Publishing Group, 1995.

Alcoholic drinks Amylase induced fermentation Indigenous cuisine of the Americas Indigenous topics of the Amazon Brazilian alcoholic drinks Fermented drinks
Nihamanchi
Chemistry,Biology
470
5,409,892
https://en.wikipedia.org/wiki/Woodboring%20beetle
The term woodboring beetle encompasses many species and families of beetles whose larval or adult forms eat and destroy wood (i.e., are xylophagous). In the woodworking industry, the larval stages of some are sometimes referred to as woodworms. The three most species-rich families of woodboring beetles are the longhorn beetles, the bark beetles and weevils, and the metallic flat-headed borers. Woodboring is thought to be the ancestral ecology of beetles, and bores made by beetles in fossil wood extend back to the earliest fossil record of beetles in the Early Permian (Asselian), around 295–300 million years ago.

Ecology

Woodboring beetles most often attack dying or dead trees. In forest settings, they are important in the turnover of trees by culling weak trees, thus allowing new growth to occur. They are also important as primary decomposers of trees within forest systems, allowing for the recycling of nutrients locked away in the relatively decay-resilient woody material of trees. To develop and reach maturity, woodboring beetles need nutrients provided by fungi from outside the inhabited wood. These nutrients are not only assimilated into the beetles' bodies but are also concentrated in their frass, contributing to soil nutrient cycles. Though the vast majority of woodboring beetles are ecologically important and economically benign, some species can become economic pests by attacking relatively healthy trees (e.g. the Asian longhorn beetle and the emerald ash borer) or by infesting downed trees in lumber yards. Species such as the Asian longhorn beetle and the emerald ash borer are examples of invasive species that threaten natural forest ecosystems.

Invasion and control

Woodboring beetles are commonly detected a few years after new construction. The lumber supply may have contained wood infected with beetle eggs or larvae, and since beetle life cycles can be one or more years, several years may pass before the presence of beetles becomes noticeable. In many cases, the beetles will be of a type that only attacks living wood, and thus incapable of "infesting" any other pieces of wood, or doing any further damage. Genuine infestations are far more likely in areas with high humidity, such as poorly ventilated crawl spaces. Housing with central heating/air-conditioning tends to cut the humidity of wood in the living areas to less than half of natural humidity, thus strongly reducing the likelihood of an infestation. Some species will infest furniture. Some beetles invade wood used in construction and furniture making; others limit their activity to forests or the roots of living trees. The following are some of the beetles that are house pests:

Ambrosia beetle
Common furniture beetle
Deathwatch beetle
Flat-headed wood-borer
Powderpost beetle (Ptinidae, Bostrichidae)
Old-house borer

See also

Bark beetles and weevils
Carpenter ants
Longhorn beetles
Metallic flat-headed borers
Termites
Wood ants

References

External links

Building defects Insect ecology
Woodboring beetle
Materials_science
608
59,492,054
https://en.wikipedia.org/wiki/Sala%20Senkayi
Sala Nanyanzi Senkayi is a Ugandan-born environmental scientist at the United States Environmental Protection Agency. She was the first Ugandan-born woman to win the Presidential Early Career Award for Scientists and Engineers.

Early life and education

Senkayi is the daughter of Abu Senkayi and Sunajeh Senkayi. Her family are from Butambala District in Uganda. Her father was an environmental scientist and worked at Texas A&M University as a research scientist from 1977. Senkayi obtained a bachelor's degree in biomedical sciences from Texas A&M University in College Station, Texas. She then joined the University of Texas at Arlington, earning two more bachelor's degrees, in microbiology and biology. Later, she earned a master's degree (2010) and a PhD (2012) in environmental and earth sciences from the same university. Her PhD thesis considered the association between childhood leukaemia and proximity to airports in Texas; she found that benzene emissions were a predictor of childhood leukaemia. During her graduate studies, Muwenda Mutebi II of Buganda and Sylvia Nagginda visited her in Texas.

Career

Senkayi joined the United States Environmental Protection Agency in 2007. She works with local children in schools and colleges, talking about the environment. She initiated the EPA Converses with Students webcast, an opportunity for children to speak on Earth Day to scientists working on environmental protection. Her research focuses on water quality protection, and she is the Water Quality Division Quality Assurance Officer. In 2017 Senkayi was awarded the Presidential Early Career Award for Scientists and Engineers for her "transformative" community outreach and research.

References

Texas A&M University alumni University of Texas at Arlington alumni Environmental scientists 21st-century Ugandan women scientists 21st-century Ugandan scientists Year of birth missing (living people) Living people Recipients of the Presidential Early Career Award for Scientists and Engineers
Sala Senkayi
Environmental_science
382
3,157,929
https://en.wikipedia.org/wiki/Original%20design%20manufacturer
An original design manufacturer (ODM) is a company that designs and manufactures a product, in contrast to an original equipment manufacturer (OEM), which only manufactures a product. Post-2016 Nokia phones (HMD) are an example of a product line that relies on original design manufacturers; in late 2019, HMD switched from relying on a single original design manufacturer to multiple original design manufacturers.

Examples

Foxconn is one example of an ODM; it helps companies such as Dell and Lenovo manufacture laptops, and has also manufactured products for Apple, Nintendo, Sony, Microsoft, and many other companies. ZOTAC, a Hong Kong graphics card manufacturer with its own factories, designs and manufactures some special Nvidia graphics cards and then rebrands and supplies them to companies like Lenovo.

Intellectual property

Original design manufacturers create their own intellectual property and are very proactive in patenting it. Most of their patents are filed in the US, China, and Taiwan.

See also

Electronics manufacturing services
Original equipment manufacturer
Contract manufacturer

References

Brands Design companies
Original design manufacturer
Engineering
207
10,127,300
https://en.wikipedia.org/wiki/Creative%20Wave%20Blaster
The Wave Blaster was an add-on MIDI synthesizer for the Creative Sound Blaster 16 and Sound Blaster AWE32 families of PC sound cards. It was a sample-based, General MIDI-compliant synthesizer. For General MIDI scores, the Wave Blaster's wavetable engine produced more realistic instrumental music than the SB16's onboard Yamaha OPL3. The Wave Blaster attached to a SB16 through a 26-pin expansion header, eliminating the need for extra cabling between the SB16 and the Wave Blaster. The SB16 emulated an MPU-401 UART, giving existing MIDI software the option to send MIDI sequences directly to the attached Wave Blaster instead of driving an external MIDI device. The Wave Blaster's analog stereo output fed into a dedicated line-in on the SB16, where the onboard mixer allowed equalization, mixing, and volume adjustment.

The Wave Blaster port was adopted by other sound card manufacturers who produced both daughterboards and sound cards with the expansion header: Diamond, Ensoniq, Guillemot, Oberheim, Orchid, Roland, TerraTec, Turtle Beach, and Yamaha. The header also appeared on devices such as the Korg NX5R MIDI sound module, the Oberheim MC-1000/MC-2000 keyboards, and the TerraTec Axon AX-100 Guitar-to-MIDI converter. Since 2000, Wave Blaster-capable sound cards for computers have become rare. In 2005, TerraTec released a new Wave Blaster daughterboard called the Wave XTable, with 16 MB of onboard sample memory comprising 500 instruments and 10 drum kits. In 2014, a new compatible card called the Dreamblaster S1 was produced by the Belgian company Serdaco. In 2015 the same company released a high-end card named Dreamblaster X1, comparable to Yamaha and Roland cards. In 2016 the DreamBlaster X2 was released, a board with both a Wave Blaster interface and a USB interface.

Wave Blaster II

Creative released the Wave Blaster II (CT1910) shortly after the original Wave Blaster. The Wave Blaster II used a newer E-mu EMU8000 synthesis engine (which later appeared in the AWE32). By the time the SB16 reached the height of its popularity, competing MIDI daughterboards had already pushed aside the Wave Blaster. In particular, Roland's Sound Canvas daughterboards (SCD-10/15), priced higher than Creative's offering, were highly regarded for their unrivalled musical reproduction in MIDI-scored game titles. (This was due to Roland's dominance in the production side of MIDI game soundtracks; Roland's daughterboards shared the same synthesis engine and instrument sound set as the popular Sound Canvas 55, a commercial MIDI module favored by game composers.) By comparison, the Wave Blaster's instruments were improperly balanced, with many instruments striking at different volume levels (relative to the de facto standard, the Sound Canvas).

Reception

Computer Gaming World in 1993 praised the Wave Blaster's audio quality and stated that the card was the best wave-table synthesis device for those with a compatible sound card.

Wave Blaster connector pinout

AGnd = analog ground; DGnd = digital ground. Some Wave Blaster cards offer audio inputs (Yamaha DB50XG), and some offer TTL-MIDI output. The Reset line is active low.

References

External links

Wave Blaster pin-out information
Wave Blaster card photos (text in Japanese)
Wave Blaster Card Collection
2014 Dreamblaster Module
2015 dreamblaster X1 review
dreamblaster X1 vs Yamaha vs Roland

IBM PC compatibles Computer peripherals Creative Technology products Sound cards
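As described above, software addressed the daughterboard through the SB16's emulated MPU-401 UART. A minimal DOS-era sketch of that path is shown below; 0x330/0x331 are the usual factory-default ports, but real code should honor the BLASTER environment variable, and this is an illustrative sketch under those assumptions, not Creative's driver code:

```c
/* Send a MIDI note-on to a Wave Blaster daughterboard through the
 * SB16's emulated MPU-401 UART (Borland-style <dos.h> port I/O). */
#include <dos.h>   /* outportb / inportb */

#define MPU_DATA    0x330   /* assumed factory-default base port */
#define MPU_STATUS  0x331   /* reads status, writes commands */
#define DRR_BUSY    0x40    /* bit set => not ready to accept a byte */

static void mpu_write(unsigned char b, int is_command)
{
    while (inportb(MPU_STATUS) & DRR_BUSY)
        ;                   /* spin until Data Send Ready */
    outportb(is_command ? MPU_STATUS : MPU_DATA, b);
}

int main(void)
{
    mpu_write(0x3F, 1);     /* enter UART ("dumb") mode */
    mpu_write(0x90, 0);     /* note-on, MIDI channel 1 */
    mpu_write(60,   0);     /* middle C */
    mpu_write(100,  0);     /* velocity */
    return 0;
}
```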
Creative Wave Blaster
Technology
767
15,144
https://en.wikipedia.org/wiki/International%20Electrotechnical%20Commission
The International Electrotechnical Commission (IEC; French: Commission électrotechnique internationale) is an international standards organization that prepares and publishes international standards for all electrical, electronic and related technologies – collectively known as "electrotechnology". IEC standards cover a vast range of technologies from power generation, transmission and distribution to home appliances and office equipment, semiconductors, fibre optics, batteries, solar energy, nanotechnology, and marine energy, as well as many others. The IEC also manages four global conformity assessment systems that certify whether equipment, systems or components conform to its international standards. All electrotechnologies are covered by IEC Standards, including energy production and distribution, electronics, magnetics and electromagnetics, electroacoustics, multimedia, telecommunications and medical technology, as well as associated general disciplines such as terminology and symbols, electromagnetic compatibility, measurement and performance, dependability, design and development, safety and the environment. History The first International Electrical Congress took place in 1881 at the International Exposition of Electricity, held in Paris. At that time the International System of Electrical and Magnetic Units was agreed to. The International Electrotechnical Commission held its inaugural meeting on 26 June 1906, following discussions among the British Institution of Electrical Engineers, the American Institute of Electrical Engineers, and others, which began at the 1900 Paris International Electrical Congress, with British engineer R. E. B. Crompton playing a key role. In 1906, Lord Kelvin was elected as the first President of the International Electrotechnical Commission. The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss, hertz, and weber. It was also first to promote the Giorgi System of standards, later developed into the SI, or Système International d'unités (in English, the International System of Units). In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical, electronic and related technologies. This effort continues, and the International Electrotechnical Vocabulary is published online as the Electropedia. The CISPR (Comité International Spécial des Perturbations Radioélectriques) – in English, the International Special Committee on Radio Interference – is one of the groups founded by the IEC. Currently, 89 countries are IEC members while another 85 participate in the Affiliate Country Programme, which is not a form of membership but is designed to help industrializing countries get involved with the IEC. Originally located in London, United Kingdom, the IEC moved to its current headquarters in Geneva, Switzerland, in 1948. It has regional centres in Africa (Nairobi, Kenya), Asia (Singapore), Oceania (Sydney, Australia), Latin America (São Paulo, Brazil) and North America (Worcester, Massachusetts, United States). The work is done by some 10,000 electrical and electronics experts from industry, government, academia, test labs and others with an interest in the subject. IEC Standards are often adopted as national standards by its members. IEC Standards The IEC cooperates closely with the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU).
In addition, it works with several major standards development organizations, including the IEEE with which it signed a cooperation agreement in 2002, which was amended in 2008 to include joint development work. IEC Standards that are not jointly developed with ISO have numbers in the range 60000–79999 and their titles take a form such as IEC 60417: Graphical symbols for use on equipment. Following the Dresden Agreement with CENELEC the numbers of older IEC standards were converted in 1997 by adding 60000, for example IEC 27 became IEC 60027. Standards of the 60000 series are also found preceded by EN to indicate that the IEC standard is also adopted by CENELEC as a European standard; for example IEC 60034 is also available as EN 60034. Standards developed jointly with ISO, such as ISO/IEC 26300 (Open Document Format for Office Applications (OpenDocument) v1.0), ISO/IEC 27001 (Information technology, Security techniques, Information security management systems, Requirements), and ISO/IEC 17000 series, carry the acronym of both organizations. The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO (Committee on conformity assessment) and IEC CAB (Conformity Assessment Board). Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045–1. IEC Standards are also being adopted by other certifying bodies such as BSI (United Kingdom), CSA (Canada), UL & ANSI/INCITS (United States), SABS (South Africa), Standards Australia, SPC/GB (China) and DIN (Germany). IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard. Membership and participation The IEC is made up of members, called national committees, and each NC represents its nation's electrotechnical interests in the IEC. This includes manufacturers, providers, distributors and vendors, consumers and users, all levels of governmental agencies, professional societies and trade associations as well as standards developers from national standards bodies. National committees are constituted in different ways. Some NCs are public sector only, some are a combination of public and private sector, and some are private sector only. About 90% of those who prepare IEC standards work in industry. IEC Member countries include: Full members Associate members (limited voting and managerial rights) Affiliates In 2001 and in response to calls from the WTO to open itself to more developing nations, the IEC launched the Affiliate Country Programme to encourage developing nations to become involved in the commission's work or to use its International Standards. Countries signing a pledge to participate in the work and to encourage the use of IEC Standards in national standards and regulations are granted access to a limited number of technical committee documents for the purposes of commenting. In addition, they can select a limited number of IEC Standards for their national standards' library. Countries participating in the Affiliate Country Programme are: Afghanistan Angola Antigua and Barbuda Armenia Azerbaijan Barbados Belize Benin Bhutan Bolivia Botswana Brunei Burkina Faso Burundi Cabo Verde Cambodia Cameroon Central African Republic Chad Comoros Congo (Rep. of) Congo (Democratic Rep. 
of) Costa Rica Côte d'Ivoire Dominica Dominican Republic Ecuador El Salvador Eritrea Eswatini Fiji Gabon Grenada Guatemala Guinea Guinea Bissau Guyana Haiti Honduras Jamaica Kyrgyzstan Laos Lebanon Lesotho Madagascar Malawi Mali Mauritania Mauritius Mongolia Mozambique Myanmar Namibia Nepal Niger Palestine Panama Papua New Guinea Paraguay Rwanda Saint Lucia Saint Vincent and the Grenadines São Tomé and Príncipe Senegal Seychelles Sierra Leone South Sudan Sudan Suriname Syrian Arab Republic Tanzania The Gambia Togo Trinidad and Tobago Turkmenistan Uruguay Uzbekistan Venezuela Yemen Zambia Zimbabwe Technical information Graphical Symbols Hydraulic Turbines Switchgear Dependability Power Systems Management Fibre Optics Audio, video and multimedia systems and equipment Standards and tools published in database format International Electrotechnical Vocabulary IEC Glossary IEC 60061: Lamp caps, lampholders and gauges IEC 60417 Graphical Symbols for Use on Equipment IEC 60617: Graphical Symbols for Diagrams See also International Organization for Standardization International Telecommunication Union World Standards Cooperation List of IEC standards List of IEC technical committees References External links Organizations established in 1906 1906 establishments in the United Kingdom Electrical engineering organizations Electrical safety standards organizations International organisations based in Switzerland Organisations based in Geneva
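The 1997 renumbering under the Dresden Agreement described earlier is purely arithmetic, so old and new IEC standard numbers can be converted mechanically. A minimal illustrative sketch in Python (the function name is invented for this example):

def dresden_renumber(old_number):
    # Older IEC standard numbers were shifted into the 60000 range in 1997.
    return old_number + 60000

print(dresden_renumber(27))  # 60027: IEC 27 became IEC 60027
print(dresden_renumber(34))  # 60034: also adopted by CENELEC as EN 60034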
International Electrotechnical Commission
Engineering
1,501
27,285,703
https://en.wikipedia.org/wiki/Safingol
Safingol is a lyso-sphingolipid protein kinase inhibitor. It has the molecular formula C18H39NO2 and is a colorless solid. Medicinally, safingol has demonstrated promising anticancer potential as a modulator of multi-drug resistance and as an inducer of necrosis. The administration of safingol alone has not been shown to exert a significant effect on tumor cell growth. However, preclinical and clinical studies have shown that combining safingol with conventional chemotherapy agents such as fenretinide, vinblastine, irinotecan and mitomycin C can dramatically potentiate their antitumor effects. In phase I clinical trials, it was found to be safe to co-administer with cisplatin, but caused reversible dose-dependent hepatotoxicity. Mechanism The underlying mechanism by which safingol induces cell death is poorly understood. It is believed to exert a variety of inhibitory effects, triggering a series of cascades that result in accidental necrotic cell death brought about by reactive oxygen species (ROS) and mediated by autophagy. Increased autophagic activity has been associated with increased cellular death, although it is unclear if there is any causative relationship between the two. Because autophagy normally plays a pro-survival role by impeding apoptosis, it is curious that it may play a role in cell death following safingol exposure. Safingol competes with phorbol dibutyrate at regulatory domains of the protein kinase C family, inhibiting the activation of such enzymes as PKCβ-I, PKCδ, and PKCε. Safingol can also inhibit phosphoinositide 3-kinase (PI3k), which is a critical component of the mTOR and MAPK/ERK pathways. Furthermore, safingol, like other sphingolipids, has been found to inhibit glucose uptake. This results in oxidative stress, leading to the generation of ROS that are both time- and concentration-dependent. Together, the inhibitory signaling effects (particularly of PKCε and PI3k) and the presence of ROS synergize to induce autophagy. Following autophagic activity, cell death is eventually induced by an as-yet-unknown mechanism. Missing from this cellular death are any signs of apoptotic induction such as characteristic changes to nuclear morphology and PARP cleavage. Instead, several hallmarks of necrosis are observed, such as caspase-independent cell death, the loss of plasma membrane integrity, the collapse of mitochondrial membrane potential, and the depletion of intracellular ATP. However, the involvement of RIPK1 has not been observed, suggesting that this necrosis is accidental in nature and not programmed. One potential explanation for safingol's cytotoxicity is that high concentrations result in ROS-related molecular and cellular damage that is beyond repair. On this view, autophagy does not directly contribute to death, but is rather a failed attempt to preserve cell viability. However, not only does this hypothesis warrant further testing, but safingol has demonstrated unusual regulatory effects on other pathways capable of regulating autophagy. As expected, a decrease in glucose heightens AMPK phosphorylation. However, an initial increase in phosphorylated mTOR is also observed, which eventually subsides after several hours. The mTOR pathway, which is activated by heightened glucose uptake, normally inhibits autophagy. Therefore, decreasing glucose levels should suppress the mTOR pathway, allowing for autophagy. While autophagy is indeed observed following exposure to safingol, it is intriguing that mTOR is activated initially.
Modulations in Bcl-2, Bcl-xL, and endonuclease G from mitochondria are also thought to play a role in safingol-induced cellular death by regulating autophagy. Safingol is also a putative inhibitor of sphingosine kinase 1 (SphK), which catalyzes the production of sphingosine 1-phosphate (S1P), an important mediator of cancer cell growth, proliferation, invasion, and angiogenesis. This ability further contributes to its anticancer potential. It can also affect the balance of other endogenous sphingolipids, particularly ceramide and dihydroceramide, which have been implicated in autophagic induction and ROS production. References Amines Diols
Safingol
Chemistry
947
71,078,367
https://en.wikipedia.org/wiki/TMEM104
Transmembrane protein 104 (TMEM104) is a protein that in humans is encoded by the TMEM104 gene. The aliases of TMEM104 are FLJ00021 and FLJ20255. In humans, the gene spans 163,255 base pairs, the mRNA is 4,703 base pairs long, and the protein is 496 amino acids long. The TMEM104 gene is conserved among eukaryotes. Gene Location TMEM104 is located on human chromosome 17 at locus 17q25.1, between the genes NAT9 and GRIN2C. Transcripts There are 7 main transcription variants: isoform 1, isoform 2, and variants X1–X5. TMEM104 is predicted to have a promoter region 150 base pairs upstream of the start of transcription. Compared with other organisms, the promoter region of the human TMEM104 gene is poorly conserved; few matching sequences have been found outside Mammalia, and most of those are in primates. Tissue expression In most human tissues, TMEM104 has a modest expression level (25th–50th percentile) relative to all human proteins, according to RNA-seq data. Subcellular expression The protein is located primarily in the plasma membrane and, to a lesser extent, in the nucleus. Immunochemistry data Thermo Fisher reports that TMEM104 exhibits significant nuclear and cytoplasmic positivity in glandular cells; the samples were probed with a TMEM104 polyclonal antibody. Protein The TMEM104 variant 1 protein is 496 amino acids in length. TMEM104 is a secreted protein that is overexpressed in the adrenal gland. TMEM104 is enriched in phenylalanine and poor in glutamine. Characteristics TMEM104 has an isoelectric point of 6.8 and a molecular weight of 55.7 kilodaltons. It is predicted to have between nine and eleven transmembrane domains, making it a transmembrane protein. Post-translational modifications The post-translational modifications predicted for TMEM104 include N-glycosylation, sulfonation, and phosphorylation. Tertiary structure TMEM104 has a tertiary structure with alpha helices and beta sheets. Interaction TMEM104 has been shown to interact with CASKIN2, TMEM94, TOMM6, SYNGR3, SYTL5, B3GNT8, NTRK3, C15orf39, and PPSIG. Homology TMEM104 has no paralogs. The protein is found mostly in eukaryotes. The orthologs in the following table were discovered through BLAST searches. Although by no means exhaustive, this list demonstrates the enormous variety of organisms that contain TMEM104 orthologs. References Uncharacterized proteins
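Sequence-derived characteristics such as the isoelectric point and molecular weight quoted above are conventionally computed from the amino acid sequence. A minimal sketch using Biopython's ProtParam module (the short sequence is a stand-in, not the actual 496-residue TMEM104 sequence):

from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder peptide; substitute the full 496-residue TMEM104 sequence.
sequence = "MKWVTFISLLLLFSSAYS"
analysis = ProteinAnalysis(sequence)

print(round(analysis.isoelectric_point(), 2))        # theoretical pI
print(round(analysis.molecular_weight() / 1000, 1))  # mass in kilodaltons
print(analysis.get_amino_acids_percent()["F"])       # phenylalanine fraction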
TMEM104
Biology
639
577,961
https://en.wikipedia.org/wiki/Department%20of%20Plant%20Sciences%2C%20University%20of%20Cambridge
The Department of Plant Sciences is a department of the University of Cambridge that conducts research and teaching in plant sciences. It was established in 1904, although the university has had a professor of botany since 1724. Research The department pursues three strategic targets of research: global food security; synthetic biology and biotechnology; and climate science and ecosystem conservation. See also the Sainsbury Laboratory Cambridge University Notable academic staff Sir David Baulcombe, FRS, Regius Professor of Botany Beverley Glover, Professor of Plant Systematics and Evolution, director of the Cambridge University Botanic Garden Howard Griffiths, Professor of Plant Ecology Julian Hibberd, Professor of Photosynthesis Alison Smith, Professor of Plant Biochemistry and Head of Department The department also has 66 members of faculty and postdoctoral researchers, 100 graduate students, 19 Biotechnology and Biological Sciences Research Council (BBSRC) Doctoral Training Program (DTP) PhD students, 20 Part II Tripos undergraduate students and 44 support staff. History The University of Cambridge has a long and distinguished history in botany, including work by John Ray and Stephen Hales in the 17th and 18th centuries, Charles Darwin's mentor John Stevens Henslow in the 19th century, and Frederick Blackman, Arthur Tansley and Harry Godwin in the 20th century. Emeritus and alumni More recently, the department has been home to: John C. Gray, Emeritus Professor of Plant Molecular Biology since 2011 Thomas ap Rees, Professor of Botany F. Ian Woodward, Lecturer and Fellow of Trinity Hall, Cambridge before being appointed Professor of Plant Ecology at the University of Sheffield References Plant Sciences, Department of Biotechnology in the United Kingdom Cambridge Universities and colleges established in 1904 1904 establishments in England
Department of Plant Sciences, University of Cambridge
Biology
328
47,350,206
https://en.wikipedia.org/wiki/Andromeda%20XVIII
Andromeda XVIII, discovered in 2008, is a dwarf spheroidal galaxy (has no rings, low luminosity, much dark matter, little gas or dust), which is a satellite of the Andromeda Galaxy (M31). It is one of the 14 known dwarf galaxies orbiting M31. It is relatively isolated, being about 1.8 million light-years (579 kpc) away. However, for an isolated dwarf galaxy it is also unusually quiescent. This suggests that Andromeda XVIII is a backsplash galaxy, a galaxy that once had a close orbital encounter with a more massive galaxy which stripped it of much of its star-forming matter. However, alternative hypotheses are also possible for Andromeda XVIII. It was announced in 2010 that the orbiting galaxies lie close to a plane running through M31's center. See also List of Andromeda's satellite galaxies References Dwarf spheroidal galaxies Andromeda Subgroup Andromeda (constellation)
Andromeda XVIII
Astronomy
213
31,333,144
https://en.wikipedia.org/wiki/Phospho.ELM
Phospho.ELM is a database of phosphorylation sites, storing phosphorylation data extracted from the scientific literature and from published analyses. References External links http://phospho.elm.eu.org Biological databases Post-translational modification Phosphorus
Phospho.ELM
Chemistry,Biology
52
700,674
https://en.wikipedia.org/wiki/Tr%20%28Unix%29
tr is a command in Unix, Plan 9, Inferno, and Unix-like operating systems. It is an abbreviation of translate or transliterate, indicating its operation of replacing or removing specific characters in its input data set. Overview The utility reads a byte stream from its standard input and writes the result to the standard output. As arguments, it takes two sets of characters (generally of the same length), and replaces occurrences of the characters in the first set with the corresponding elements from the second set. For example, tr 'abcd' 'jkmn' maps all characters a to j, b to k, c to m, and d to n. The character set may be abbreviated by using character ranges. The previous example could be written: tr 'a-d' 'jkmn' In POSIX-compliant versions of tr, the set represented by a character range depends on the locale's collating order, so it is safer to avoid character ranges in scripts that might be executed in a locale different from that in which they were written. Ranges can often be replaced with POSIX character sets such as [:alpha:]. The s flag causes tr to compress sequences of identical adjacent characters in its output to a single token. For example, tr -s '\n' replaces sequences of one or more newline characters with a single newline. The d flag causes tr to delete all tokens of the specified set of characters from its input. In this case, only a single character set argument is used. The following command removes carriage return characters. tr -d '\r' The c flag indicates the complement of the first set of characters. The invocation tr -cd '[:alnum:]' therefore removes all non-alphanumeric characters. Implementations The original version of tr was written by Douglas McIlroy and was introduced in Version 4 Unix. The version of tr bundled in GNU coreutils was written by Jim Meyering. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. It is also available in the OS-9 shell. A tr command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command has also been ported to the IBM i operating system. Most versions of tr, including GNU tr and classic Unix tr, operate on single-byte characters and are not Unicode compliant. An exception is the Heirloom Toolchest implementation, which provides basic Unicode support. Ruby and Perl also have an internal tr operator, which operates analogously. Tcl's string map command is more general in that it maps strings to strings while tr maps characters to characters. See also sed List of Unix commands GNU Core Utilities References External links tr(1) – Unix 8th Edition manual page. usage examples at examplenow.com Unix text processing utilities Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands IBM i Qshell commands
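As the article notes, Ruby and Perl include a tr operator; Python can approximate tr's behavior with str.maketrans and str.translate. The sketch below mirrors the three example invocations above on an in-memory string (an approximation, not a byte-stream-exact reimplementation):

import re

# tr 'abcd' 'jkmn'  -- map a->j, b->k, c->m, d->n
table = str.maketrans("abcd", "jkmn")
print("abcade".translate(table))  # jkjmne

# tr -d '\r'        -- delete carriage return characters
print("line1\r\nline2\r\n".translate({ord("\r"): None}))

# tr -s '\n'        -- squeeze runs of newlines down to a single newline
print(re.sub(r"\n+", "\n", "a\n\n\nb\n"))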
Tr (Unix)
Technology
631
35,297,570
https://en.wikipedia.org/wiki/Gerontoplast
A gerontoplast is a plastid that develops from a chloroplast during the senescing of plant foliage. Gerontoplast development generally involves the unstacking of grana, the loss of thylakoid membranes, and a large accumulation of plastoglobuli. Transformation of chloroplasts to gerontoplasts The term gerontoplast was first introduced in 1977 to define the unique features of the plastid formed during leaf senescence. The process of senescence brings about the regulated dismantling of the cellular organelles involved in photosynthesis. Chloroplasts, which are responsible for gas exchange in stomata and give plants their green color, are the last organelles to degrade during senescence. The formation of gerontoplasts from chloroplasts during senescence involves extensive structural modification of the thylakoid membrane with the concomitant formation of a large number of plastoglobuli containing lipophilic materials. The envelope of the plastid, however, remains intact. References External links Organelles Photosynthesis
Gerontoplast
Chemistry,Biology
237
8,641,308
https://en.wikipedia.org/wiki/Disk%20buffer
In computer storage, a disk buffer (often ambiguously called a disk cache or a cache buffer) is the embedded memory in a hard disk drive (HDD) or solid-state drive (SSD) acting as a buffer between the rest of the computer and the physical hard disk platter or flash memory that is used for storage. Modern hard disk drives come with 8 to 256 MiB of such memory, and solid-state drives come with up to 4 GB of cache memory. Since the late 1980s, nearly all disks sold have embedded microcontrollers and either an ATA, Serial ATA, SCSI, or Fibre Channel interface. The drive circuitry usually has a small amount of memory, used to store the data going to and coming from the disk platters. The disk buffer is physically distinct from and is used differently from the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the microcontroller in the hard disk drive, and the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, ranging between 8 MB and 4 GB, and the page cache is generally all unused main memory. While data in the page cache is reused multiple times, the data in the disk buffer is rarely reused. In this sense, the terms disk cache and cache buffer are misnomers; the embedded controller's memory is more appropriately called disk buffer. Note that disk array controllers, as opposed to disk controllers, usually have normal cache memory of around 0.5–8 GiB. Uses Read-ahead/read-behind When a disk's controller executes a physical read, the actuator moves the read/write head to (or near to) the correct cylinder. After some settling, and possibly some fine actuation, the read head begins to pick up track data, and all that is left to do is wait until platter rotation brings the requested data past the head. The data read ahead of the request during this wait is unrequested but free, so it is typically saved in the disk buffer in case it is requested later. Similarly, data can be read for free behind the requested data if the head can stay on track because there is no other read to execute, or the next actuation can start later and still complete in time. If several requested reads are on the same track (or close by on a spiral track), most unrequested data between them will be both read ahead and behind. Speed matching The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed. Write acceleration The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data is actually written to the platter. This early signal allows the main computer to continue working even though the data has not actually been written yet. This can be somewhat dangerous, because if power is lost before the data is permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the file system on the disk may be left in an inconsistent state. On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial.
Consistency can be maintained, however, by using a battery-backed memory system for caching data, although this is typically only found in high-end RAID controllers. Alternatively, the caching can simply be turned off when the integrity of data is deemed more important than write performance. Another option is to send data to disk in a carefully managed order and to issue "cache flush" commands in the right places, which is usually referred to as the implementation of write barriers. Command queuing Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation through "command queuing" (see NCQ and TCQ). These commands are stored by the disk's embedded controller until they are completed. One benefit is that the commands can be re-ordered to be processed more efficiently, so that commands affecting the same area of a disk are grouped together. Should a read reference the data at the destination of a queued write, the to-be-written data will be returned. NCQ is usually used in combination with enabled write buffering. For a read/write FPDMA command with the Force Unit Access (FUA) bit set to 0 and write buffering enabled, an operating system may see the write operation finish before the data is physically written to the media. With the FUA bit set to 1 and write buffering enabled, the write operation returns only after the data has been physically written to the media. Cache control from the host Cache flushing Data that was accepted into the write cache of a disk device will eventually be written to disk platters, provided that no starvation condition occurs as a result of a firmware flaw, and that the disk's power supply is not interrupted before cached writes are forced to the platters. In order to control the write cache, the ATA specification includes the FLUSH CACHE (E7h) and FLUSH CACHE EXT (EAh) commands. These commands cause the disk to complete writing data from its cache, and the disk returns good status only after the data in the write cache has been written to disk media. In addition, the STANDBY IMMEDIATE command causes a hard disk drive to park its heads and a flash device to save its FTL mapping table. An operating system will send FLUSH CACHE and STANDBY IMMEDIATE commands to hard disk drives in the shutdown process. Mandatory cache flushing is used in Linux for write barriers in some filesystems (for example, ext4), together with the Force Unit Access write command for journal commit blocks. Force Unit Access (FUA) Force Unit Access (FUA) is an I/O write command option that forces written data all the way to stable storage. FUA write commands (WRITE DMA FUA EXT 3Dh, WRITE DMA QUEUED FUA EXT 3Eh, WRITE MULTIPLE FUA EXT CEh), in contrast to corresponding commands without FUA, write data directly to the media, regardless of whether write caching in the device is enabled or not. A FUA write command will not return until the data is written to the media; thus, data written by a completed FUA write command is on permanent media even if the device is powered off before a FLUSH CACHE command is issued. FUA is more fine-grained, as it allows a single write operation to be forced to stable media and thus has a smaller overall performance impact compared to commands that flush the entire disk cache, such as the ATA FLUSH CACHE family of commands. FUA appeared in the SCSI command set, and was later adopted by SATA with NCQ. Windows (Vista and up) supports FUA as part of Transactional NTFS, but only for SCSI or Fibre Channel disks where support for FUA is common.
It is not known whether a SATA drive that supports FUA write commands will actually honor the command and write data to disk platters as instructed; thus, Windows 8 and Windows Server 2012 instead send commands to flush the disk write cache after certain write operations. Although the Linux kernel gained support for NCQ around 2007, SATA FUA remains disabled by default because of regressions that were found in 2012 when the kernel's support for FUA was tested. The Linux kernel supports FUA at the block layer level. See also Hybrid array Hybrid drive References Computer storage devices Hard disk computer storage Solid-state computer storage
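From application software, the cache-flushing behavior described in this article is normally requested through the operating system rather than by issuing ATA commands directly. A minimal POSIX-flavored sketch in Python (the filename is arbitrary; whether the flush actually reaches the platters depends on the drive's write-cache settings and the filesystem's use of flush/FUA, as discussed above):

import os

# Append a record and ask the OS to push it toward stable storage.
fd = os.open("journal.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
try:
    os.write(fd, b"commit record\n")
    os.fsync(fd)  # request that cached data be flushed to the device
finally:
    os.close(fd)

# Alternatively, opening with os.O_DSYNC makes each write synchronous,
# loosely analogous to per-command FUA semantics (behavior is kernel-
# and filesystem-dependent).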
Disk buffer
Technology
1,605
684,771
https://en.wikipedia.org/wiki/Enemy%20of%20the%20State%20%28film%29
Enemy of the State is a 1998 American political action thriller film directed by Tony Scott, written by David Marconi, produced by Jerry Bruckheimer, and starring Will Smith and Gene Hackman with an ensemble supporting cast consisting of Jon Voight, Regina King, Loren Dean, Jake Busey, Barry Pepper and Gabriel Byrne. In the film, a lawyer is targeted by a group of corrupt National Security Agency (NSA) agents after he unknowingly receives a tape of the agents murdering a congressman. Enemy of the State was released on November 20, 1998, by Buena Vista Pictures through its Touchstone Pictures label. The film grossed $250.8 million worldwide, and received generally positive reviews from film critics, with many praising the writing and direction as well as the chemistry between Smith and Hackman. Plot Congressman Phil Hammersley wants to block the passage of a new piece of counterterrorism legislation that would dramatically expand the surveillance powers of American intelligence agencies, believing that the bill's potential benefits are not worth the sacrifice of ordinary citizens' privacy rights. NSA Assistant Director Thomas Reynolds, wanting the bill passed to obtain a long-delayed promotion, has agents loyal to him murder Hammersley and stage his death as a car accident following a heart attack. Labor lawyer Robert Clayton Dean works on a case involving restaurant owner and mob boss Paulie Pintero. Dean occasionally hires "Brill", a man whom he has never met in person, to conduct surveillance operations. Brill obtains a tape incriminating Pintero for labor racketeering. Dean threatens Pintero with the tape to ensure that the mobster agrees to a favorable settlement. Reynolds and his team spot biologist Daniel Zavitz swapping out a tape from a remote wildlife camera stationed near the murder scene. After viewing footage of the murder, Zavitz contacts a journalist to publicize the tape. Reynolds' team intercepts the call and rushes to Zavitz's apartment. Zavitz transfers the video to a disc and bumps into Dean, his old college friend, while fleeing. Panicked, Zavitz slips the disc into Dean's shopping bag without his knowledge. He runs into the path of an oncoming fire truck and dies, while Reynolds has the journalist murdered. Looking for the disc, Reynolds' team identifies Dean and visits him disguised as cops. When Dean refuses to let them search his belongings without a warrant, the agents erroneously believe that he is knowingly withholding the disc. They break into Dean's house while he and his family are out and plant bugs in his clothes and personal effects. They also disseminate false evidence that Dean is laundering money through his firm for Pintero and having an affair with Rachel Banks (Dean's ex-girlfriend and Brill's courier). Because of the subterfuge, Dean is fired from his law firm, his bank accounts are frozen pending a federal investigation, and his wife, Carla, throws him out. Dean asks Rachel to contact Brill for help. Reynolds intercepts the call and sends someone to impersonate Brill. The real Brill rescues Dean and warns him that the NSA is responsible for ruining his life. Dean later finds Rachel killed in her home. Dean finds the disc and shows it to Brill, who identifies Reynolds. The NSA agents raid Brill's hideout; Brill and Dean escape but the disc is destroyed in a car fire. Brill is actually Edward Lyle, a former NSA communications expert stationed in Iran during the Iranian Revolution.
His partner, Rachel's father, was killed, but Lyle escaped and has been working covertly ever since, employing Rachel as a courier to watch over her. Lyle urges Dean to start a new life, but he insists on clearing his name. Dean and Lyle trail Congressman Sam Albert, a key supporter of the bill, and record a videotape of him with his mistress. Dean and Lyle hide an NSA listening device in Albert's hotel room, knowing that he will find it. Lyle then hacks into Reynolds' personal bank account and deposits money to make it look like he is being paid to blackmail Albert. A meeting is arranged with Reynolds to exchange the video so he can be tricked into incriminating himself. Reynolds' men instead ambush the meeting and hold Lyle and Dean at gunpoint, demanding the tape. Dean, anticipating this, lies, saying that the evidence is hidden at Pintero's restaurant, which is under FBI surveillance. He then tricks Pintero and Reynolds into believing that the other man has the tape. The encounter escalates into a firefight when a gangster shoots an NSA agent in the back; Pintero, his men, Reynolds, and almost all of his agents are killed. Meanwhile, Lyle sends the FBI a live feed of the incident to trigger a raid on the restaurant before escaping in disguise. Dean is rescued, the survivors are arrested, and the conspiracy is exposed. To avoid scandal, Congress abandons the bill, while the NSA executes a cover-up of Reynolds' actions. Dean is cleared of all charges and reconciles with Carla. Lyle sends Dean a "farewell" message via his TV, showing himself relaxing on a tropical island with his cat. Cast Will Smith as Robert Clayton Dean Gene Hackman as Edward "Brill" Lyle Jon Voight as NSA Assistant Director Thomas Brian Reynolds Regina King as Carla Dean Loren Dean as NSA Agent Loren Hicks, Reynolds' aide-de-camp Jake Busey as Krug, a USMC veteran and NSA field operative Barry Pepper as NSA Agent David Pratt Jason Lee as Daniel Leon Zavitz Gabriel Byrne as the Brill imposter who tries to kidnap Dean Lisa Bonet as Rachel Banks Jack Black as NSA Agent Fiedler, a surveillance analyst Jamie Kennedy as NSA Agent Jamie Williams Scott Caan as Jones, Hicks' partner James LeGros as Jerry Miller, managing partner of Dean's firm Stuart Wilson as Congressman Sam Albert Ian Hart as NSA Agent John Bingham Jascha Washington as Eric Dean Anna Gunn as Emily Reynolds Grant Heslov as Lenny Bloom Bodhi Elfman as NSA Agent Van Dan Butler as NSA Director Admiral Shaffer Jason Robards as Congressman Philip Hammersley (uncredited) John Capodice as Old Worker #1 Seth Green as NSA Agent Selby (uncredited) Tom Sizemore as Paulie Pintero (uncredited) Philip Baker Hall as Mark Silverberg (uncredited) Brian Markinson as Brian Blake (uncredited) Larry King as Himself Production The story is set in both Washington, D.C., and Baltimore, and most of the filming was done in Baltimore. Location shooting began on a ferry in Fell's Point. In mid-January, the company moved to Los Angeles to complete production in April 1998. David Marconi spent over two and a half years developing his original script at Don Simpson/Jerry Bruckheimer Films under the direction of Lucas Foster, their development executive at the time. Oliver Stone expressed early interest in directing Marconi's script, but ultimately Jerry Bruckheimer went with Tony Scott, with whom he had a long-standing relationship because of their previous collaborations. The writers Aaron Sorkin, Henry Bean and Tony Gilroy each performed an uncredited rewrite of the script.
Mel Gibson and Tom Cruise were considered for the part that went to Will Smith, who took the role largely because he wanted to work with Gene Hackman, and had previously enjoyed working with the producer Jerry Bruckheimer on Bad Boys. George Clooney was also considered for a role in the film. Sean Connery was considered for the role that went to Hackman. The film is notable for having cast several soon-to-be stars in smaller supporting roles, which casting director Victoria Thomas credited to people's interest in working with Gene Hackman. The film's crew included a technical surveillance counter-measures consultant who also had a minor role as a spy shop merchant. Hackman had previously acted in a similar thriller about spying and surveillance, The Conversation (1974). The photo in Edward Lyle's NSA file is of Hackman in The Conversation. Reception Box office Enemy of the State grossed $111.5 million in the United States and $139.3 million in other territories, for a worldwide total of $250.8 million, against a production budget of $90 million. The film opened at #2, behind The Rugrats Movie, grossing $20 million over its first weekend at 2,393 theaters, averaging $8,374 per venue. It made $18.1 million in its second weekend and $9.7 million in its third, finishing in third place both times. Critical response On the review aggregator website Rotten Tomatoes, Enemy of the State holds an approval rating of 70% based on 84 reviews, with an average rating of 6.44/10. The website's critics consensus reads: "An entertaining, topical thriller that finds director Tony Scott on solid form and Will Smith confirming his action headliner status." Metacritic assigned the film a normalized score of 67 out of 100, based on 22 critics, indicating "generally favorable reviews". Audiences polled by CinemaScore gave the film an average grade of A− on an A+ to F scale. Kenneth Turan of the Los Angeles Times expressed enjoyment in the movie, noting how its "pizazz [overcame] occasional lapses in moment-to-moment plausibility". Janet Maslin of The New York Times approved of the film's action-packed sequences, but noted that it was similar in manner to the rest of "Simpson's and Bruckheimer's school of empty but sensation-packed filming". Combining the two views, Edvins Beitiks of the San Francisco Examiner praised many of the movie's development aspects, but criticized the overall concept that drove the film from the beginning—the efficiency of government intelligence—as unrealistic. Roger Ebert of the Chicago Sun-Times felt "the climax edges perilously close to the ridiculous" but overall enjoyed the film, particularly Voight and Hackman's performances. Kim Newman considered Enemy of the State a "continuation of The Conversation", the 1974 psychological thriller that starred Hackman as a paranoid, isolated surveillance expert. Undeveloped television series In October 2016, ABC announced it had green-lit a television series sequel to the film, with Bruckheimer to return as producer. The series would take place two decades after the original film, where "an elusive NSA spy is charged with leaking classified intelligence, an idealistic female attorney must partner with a hawkish FBI agent to stop a global conspiracy". However, nothing ever came to fruition. Real life An episode of PBS's Nova titled "Spy Factory" reported that the film's portrayal of the NSA's capabilities was fiction: although the agency can intercept transmissions, connecting the dots is difficult.
However, in 2001, the then-NSA director Gen. Michael Hayden, who was appointed to the position around the time of the film's release, told CNN's Kyra Phillips: "I made the judgment that we couldn't survive with the popular impression of this agency being formed by the last Will Smith movie." James Risen wrote in his 2006 book State of War: The Secret History of the CIA and the Bush Administration that Hayden "was appalled" by the film's depiction of the NSA, and sought to counter it with a PR campaign on behalf of the agency. Given the events of 9/11, the Patriot Act and Edward Snowden's revelations about the NSA's PRISM surveillance program, the film has become noteworthy for being ahead of its time regarding issues of national security and privacy. In June 2013, the NSA's PRISM and Boundless Informant programs for domestic and international surveillance were uncovered by The Guardian and The Washington Post as the result of information provided by the whistleblower Edward Snowden. This information revealed capabilities such as collection of Internet browsing, e-mail and telephone data of not only many Americans, but citizens of other nations as well. The Guardian's John Patterson argued that Hollywood depictions of NSA surveillance, including Enemy of the State and Echelon Conspiracy, had "softened" up the American public to "the notion that our spending habits, our location, our every movement and conversation, are visible to others whose motives we cannot know". See also List of films featuring surveillance List of American films of 1998 Patriot Act References External links 1990s American films 1990s chase films 1990s English-language films 1990s political action films 1990s political thriller films 1998 action thriller films 1998 films American action thriller films American chase films American mystery drama films American mystery thriller films American neo-noir films American political action films American political thriller films Films about computing Films about security and surveillance Films about the American Mafia Films about the Federal Bureau of Investigation Films about lawyers Films about the National Security Agency Films directed by Tony Scott Films produced by Jerry Bruckheimer Films scored by Harry Gregson-Williams Films scored by Trevor Rabin Films set in Baltimore Films set in Washington, D.C. Films shot in Baltimore Films shot in Los Angeles Jerry Bruckheimer Films films Scott Free Productions films American techno-thriller films Touchstone Pictures films English-language action thriller films
Enemy of the State (film)
Technology
2,689
1,528,864
https://en.wikipedia.org/wiki/Broad%20Band%20X-ray%20Telescope
The Broad Band X-ray Telescope (BBXRT) was flown on the Space Shuttle Columbia (STS-35) from December 2 through December 11, 1990, as part of the ASTRO-1 payload, on which it was co-mounted with the three ultraviolet telescopes HUT, WUPPE, and UIT. The flight of BBXRT marked the first opportunity for performing X-ray observations over a broad energy range (0.3-12 keV) with a moderate energy resolution; according to NASA, it was "the first focusing X-ray telescope operating over a broad energy range 0.3-12 keV with a moderate energy resolution (90 eV at 1 keV and 150 eV at 6 keV)". Hardware See also Spacelab X-ray astronomy List of X-ray space telescopes References External links Broad Band X-ray Telescope (BBXRT, GSFC, NASA) on the internet Space telescopes X-ray telescopes Crewed space observatories Space Shuttle program
Broad Band X-ray Telescope
Astronomy
226
24,871,369
https://en.wikipedia.org/wiki/Ionic%20liquid%20piston%20compressor
An ionic liquid piston compressor, ionic compressor or ionic liquid piston pump is a hydrogen compressor based on an ionic liquid piston instead of a metal piston, as in a piston-metal diaphragm compressor. Principle An ionic liquid compressor takes advantage of two properties of ionic liquids—their virtually non-measurable vapor pressures and large temperature window for the liquid phase—in combination with the low solubility of some gases (e.g. hydrogen) in them. This insolubility is exploited by using a body of ionic liquid to compress hydrogen up to 1,000 bar (14,500 psi) in hydrogen filling stations. Linde's ionic liquid compressor reduced the number of moving parts from about 500 in a conventional reciprocating compressor down to 8. Many seals and bearings were removed in the design because the ionic liquid does not mix with the gas. Service life is about 10 times that of a regular reciprocating compressor, maintenance during use is reduced, and energy costs are cut by as much as 20%. The heat exchangers used in a normal piston compressor are eliminated, as the heat is removed in the cylinder itself, where it is generated. Almost 100% of the energy going into the process is used, with little energy wasted as reject heat. It is not to be confused with the ion pump or the ionic liquid ring pump. History After the renewed interest in ionic liquids, research was done by proionic, an enterprise in the spin-off center "ZAT Center for applied Technology" of the University of Leoben. The system was demonstrated at Zemships. See also Electrochemical hydrogen compressor Guided rotor compressor Hydride compressor Linear compressor Ganzair Compressor Timeline of hydrogen technologies References Gas compressors Hydrogen technologies
Ionic liquid piston compressor
Chemistry
351
277,956
https://en.wikipedia.org/wiki/Outline%20of%20neuroscience
The following outline is provided as an overview of and topical guide to neuroscience: Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits, and it also encompasses cognition and human behavior. Neuroscience involves multiple concepts relating to learning abilities and memory functions. Additionally, the brain transmits signals that produce conscious and unconscious behaviors, expressed as verbal or non-verbal responses, which allow people to communicate with one another. Branches of neuroscience Neurophysiology Neurophysiology is the study of the function (as opposed to the structure) of the nervous system. Brain mapping Electrophysiology Extracellular recording Intracellular recording Brain stimulation Electroencephalography Intermittent rhythmic delta activity Neuroendocrinology Neuroanatomy Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system. Immunostaining Neuropharmacology Neuropharmacology is the study of how drugs affect cellular function in the nervous system. Drug Psychoactive drug Anaesthetic Narcotic Behavioral neuroscience Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals. Neuroethology Developmental neuroscience Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop. Human brain development timeline Development of the nervous system in humans Prenatal development - Cognitive development Aging and memory (see also Child development - Mechanisms) Cognitive neuroscience Cognitive neuroscience is concerned with the scientific study of biological substrates underlying cognition, with a focus on the neural substrates of mental processes. Neurolinguistics Neuroimaging Functional magnetic resonance imaging Positron emission tomography Systems neuroscience Systems neuroscience is a subdiscipline of neuroscience which studies the function of neural circuits and systems. It is an umbrella term, encompassing a number of areas of study concerned with how nerve cells behave when connected together to form neural networks. Neural circuit Neural network (biology) Neural oscillation Molecular neuroscience Molecular neuroscience is a branch of neuroscience that examines the biology of the nervous system with molecular biology, molecular genetics, protein chemistry and related methodologies (i.e., concerning neurotransmitters moving via the physiology of synapses, etc.) Neurochemistry Nutritional neuroscience Neuropeptide (see also Neuropharmacology above) Computational neuroscience Computational neuroscience includes both the study of the information processing functions of the nervous system, and the use of digital computers to study the nervous system. It is an interdisciplinary science that links the diverse fields of neuroscience, cognitive science and psychology, electrical engineering, computer science, physics and mathematics.
Neural network Neuroinformatics Neuroengineering Brain–computer interface Mathematical neuroscience Neurophilosophy Neurophilosophy, or the "philosophy of neuroscience", is the interdisciplinary study of neuroscience and philosophy. Work in this field is often separated into two distinct approaches. The first approach attempts to solve problems in the philosophy of mind with empirical information from the neurosciences. The second approach attempts to clarify neuroscientific results using the conceptual rigor and methods of the philosophy of science. Philosophy of mind Neuroethics Neuroscience of free will Neurology Neurology is the medical specialty dealing with disorders of the nervous system. It deals with the diagnosis and treatment of all categories of disease involving the central, peripheral, and autonomic nervous systems. Stroke Parkinson's disease Alzheimer's disease Huntington's disease Multiple sclerosis Amyotrophic lateral sclerosis Rabies Schizophrenia Epilepsy Hydrocephalus Brain damage Traumatic brain injury Closed head injury Coma Paralysis Level of consciousness Neurosurgery Neuropsychology Neuropsychology studies the structure and function of the brain related to psychological processes and behaviors. The term is used most frequently with reference to studies of the effects of brain damage in humans and animals. Agraphia Agnosia Alexia Amnesia Anosognosia Aphasia Apraxia Dementia Dyslexia Hemispatial neglect Neurobiological effects of physical exercise Neuroevolution and neuroeconomics Evolution of nervous systems Neuroevolution History of neuroscience History of neuroscience Neuron doctrine Nervous system Outline of the human nervous system Action potential Acetylcholinesterase Central nervous system (CNS) Brain Dendrite Glial cells List of regions in the human brain Nervous system Neurite Neuron Neuroplasticity Synaptic plasticity Long-term potentiation Neurotransmitter Acetylcholine Dopamine Synapse Neuroscience organizations Persons influential in the field of neuroscience List of neuroscientists Related sciences Genetics Neurochemistry Cognitive science Psychology Molecular biology Psychiatry Neurosurgery Linguistics Developmental biology Biotechnology Neurophilosophy See also Fundamentals of Neuroscience at Wikiversity References External links Neuroscience Information Framework (NIF) American Society for Neurochemistry Neuroscience Online (electronic neuroscience textbook) Faculty for Undergraduate Neuroscience (FUN) Neuroscience for Kids Neuroscience Discussion Group in ResearchGate Neuroscience Discussion Forum HHMI Neuroscience lecture series - Making Your Mind: Molecules, Motion, and Memory Neuroscience
Outline of neuroscience
Biology
1,146
260,914
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28numbers%29
This list contains selected positive numbers in increasing order, including counts of things, dimensionless quantities and probabilities. Each number is given a name in the short scale, which is used in English-speaking countries, as well as a name in the long scale, which is used in some of the countries that do not have English as their national language. Smaller than 10−100 (one googolth) Mathematics – random selections: Approximately 10−183,800 is a rough first estimate of the probability that a typing "monkey", or an English-illiterate typing robot, when placed in front of a typewriter, will type out William Shakespeare's play Hamlet as its first set of inputs, on the precondition it typed the needed number of characters. However, demanding correct punctuation, capitalization, and spacing, the probability falls to around 10−360,783. Computing: 2.2 × 10−78984 is approximately equal to the smallest non-zero value that can be represented by an octuple-precision IEEE floating-point value. 1 × 10−6176 is equal to the smallest non-zero value that can be represented by a quadruple-precision IEEE decimal floating-point value. 6.5 × 10−4966 is approximately equal to the smallest non-zero value that can be represented by a quadruple-precision IEEE floating-point value. 3.6 × 10−4951 is approximately equal to the smallest non-zero value that can be represented by an 80-bit x86 double-extended IEEE floating-point value. 1 × 10−398 is equal to the smallest non-zero value that can be represented by a double-precision IEEE decimal floating-point value. 4.9 × 10−324 is approximately equal to the smallest non-zero value that can be represented by a double-precision IEEE floating-point value. 1.5 × 10−157 is approximately equal to the probability that in a randomly selected group of 365 people, all of them will have different birthdays. 1 × 10−101 is equal to the smallest non-zero value that can be represented by a single-precision IEEE decimal floating-point value. 10−100 to 10−30 Mathematics: The chances of shuffling a standard 52-card deck in any specific order are around 1.24 × 10−68 (or exactly 1/52!). Computing: The number 1.4 × 10−45 is approximately equal to the smallest positive non-zero value that can be represented by a single-precision IEEE floating-point value. 10−30 (1000−10; short scale: one nonillionth; long scale: one quintillionth) ISO: quecto- (q) Mathematics: The probability in a game of bridge of all four players getting a complete suit each is approximately 4.47 × 10−28. 10−27 (1000−9; short scale: one octillionth; long scale: one quadrilliardth) ISO: ronto- (r) 10−24 (1000−8; short scale: one septillionth; long scale: one quadrillionth) ISO: yocto- (y) 10−21 (1000−7; short scale: one sextillionth; long scale: one trilliardth) ISO: zepto- (z) Mathematics: The probability of matching 20 numbers for 20 in a game of keno is approximately 2.83 × 10−19. Mathematics: The odds of a perfect bracket in the NCAA Division I men's basketball tournament are 1 in 263, approximately 1.08 × 10−19, if coin flips are used to predict the winners of the 63 matches. 10−18 (1000−6; short scale: one quintillionth; long scale: one trillionth) ISO: atto- (a) Mathematics: The probability of rolling snake eyes 10 times in a row on a pair of fair dice is about 2.74 × 10−16. 10−15 (1000−5; short scale: one quadrillionth; long scale: one billiardth) ISO: femto- (f) Mathematics: The Ramanujan constant, eπ√163, is an almost integer, differing from the nearest integer by approximately 7.5 × 10−13.
10−12 (0.000000000001; 1000−4; short scale: one trillionth; long scale: one billionth) ISO: pico- (p) Mathematics: The probability in a game of bridge of one player getting a complete suit is approximately 2.52 × 10−11 (0.00000000252%). Biology: Human visual sensitivity to 1000 nm light is approximately 1.0 × 10−10 of its peak sensitivity at 555 nm. 10−9 (0.000000001; 1000−3; short scale: one billionth; long scale: one milliardth) ISO: nano- (n) Mathematics – Lottery: The odds of winning the Grand Prize (matching all 6 numbers) in the US Powerball lottery, with a single ticket, under the rules, are 292,201,338 to 1 against, for a probability of 3.42 × 10−9 (0.000000342%). Mathematics – Lottery: The odds of winning the Grand Prize (matching all 6 numbers) in the Australian Powerball lottery, with a single ticket, under the rules, are 134,490,400 to 1 against, for a probability of 7.44 × 10−9 (0.000000744%). Mathematics – Lottery: The odds of winning the Jackpot (matching the 6 main numbers) in the current 59-ball UK National Lottery Lotto, with a single ticket, under the rules, are 45,057,474 to 1 against, for a probability of 2.22 × 10−8 (0.00000222%). Mathematics – Lottery: The odds of winning the Jackpot (matching the 6 main numbers) in the former 49-ball UK National Lottery, with a single ticket, were 13,983,815 to 1 against, for a probability of 7.15 × 10−8 (0.00000715%). 10−6 (0.000001; 1000−2; long and short scales: one millionth) ISO: micro- (μ) Mathematics – Poker: The odds of being dealt a royal flush in poker are 649,739 to 1 against, for a probability of 1.5 × 10−6 (0.00015%). Mathematics – Poker: The odds of being dealt a straight flush (other than a royal flush) in poker are 72,192 to 1 against, for a probability of 1.4 × 10−5 (0.0014%). Mathematics – Poker: The odds of being dealt a four of a kind in poker are 4,164 to 1 against, for a probability of 2.4 × 10−4 (0.024%). 10−3 (0.001; 1000−1; one thousandth) ISO: milli- (m) Mathematics – Poker: The odds of being dealt a full house in poker are 693 to 1 against, for a probability of 1.4 × 10−3 (0.14%). Mathematics – Poker: The odds of being dealt a flush in poker are 507.8 to 1 against, for a probability of 1.9 × 10−3 (0.19%). Mathematics – Poker: The odds of being dealt a straight in poker are 253.8 to 1 against, for a probability of 3.9 × 10−3 (0.39%). Physics: α ≈ 7.297 × 10−3 ≈ 1/137.036, the fine-structure constant. 10−2 (0.01; one hundredth) ISO: centi- (c) Mathematics – Lottery: The odds of winning any prize in the UK National Lottery, with a single ticket, under the rules as of 2003, are 54 to 1 against, for a probability of about 0.018 (1.8%). Mathematics – Poker: The odds of being dealt a three of a kind in poker are 46 to 1 against, for a probability of 0.021 (2.1%). Mathematics – Lottery: The odds of winning any prize in the Powerball, with a single ticket, under the rules as of 2015, are 24.87 to 1 against, for a probability of 0.0402 (4.02%). Mathematics – Poker: The odds of being dealt two pair in poker are 21 to 1 against, for a probability of 0.048 (4.8%). 10−1 (0.1; one tenth) ISO: deci- (d) Legal history: 10% was widespread as the tax raised for income or produce in the ancient and medieval period; see tithe. Mathematics – Poker: The odds of being dealt only one pair in poker are about 5 to 2 against (2.37 to 1), for a probability of 0.42 (42%). Mathematics – Poker: The odds of being dealt no pair in poker are nearly 1 to 2, for a probability of about 0.5 (50%). 100 (1; one) Demography: The population of Monowi, an incorporated village in Nebraska, United States, was one in 2010. Religion: One is the number of gods in Judaism, Christianity, and Islam (monotheistic religions).
Computing – Unicode: One character is assigned to the Lisu Supplement Unicode block, the fewest of any public-use Unicode block as of Unicode 15.0 (2022).
Mathematics: √2 ≈ 1.414214, the ratio of the diagonal of a square to its side length.
Mathematics: φ ≈ 1.618034, the golden ratio.
Mathematics: √3 ≈ 1.732051, the ratio of the diagonal of a unit cube.
Mathematics: the number system understood by most computers, the binary system, uses 2 digits: 0 and 1.
Mathematics: √5 ≈ 2.2360679775, the length of the diagonal of a rectangle whose side lengths are 1 and 2.
Mathematics: √2 + 1 ≈ 2.414214, the silver ratio; the ratio of the smaller of the two quantities to the larger quantity is the same as the ratio of the larger quantity to the sum of the smaller quantity and twice the larger quantity.
Mathematics: e ≈ 2.718282, the base of the natural logarithm.
Mathematics: the number system understood by ternary computers, the ternary system, uses 3 digits: 0, 1, and 2.
Religion: three manifestations of God in the Christian Trinity.
Mathematics: π ≈ 3.141593, the ratio of a circle's circumference to its diameter.
Religion: the Four Noble Truths in Buddhism.
Biology: 7 ± 2, in cognitive science, George A. Miller's estimate of the number of objects that can be simultaneously held in human working memory.
Music: 7 notes in a major or minor scale.
Astronomy: 8 planets in the Solar System.
Religion: the Noble Eightfold Path in Buddhism.
Literature: 9 circles of Hell in the Inferno by Dante Alighieri.
10^1 (10; ten)
ISO: deca- (da)
Demography: The population of Pesnopoy, a village in Bulgaria, was 10 in 2007.
Human scale: There are 10 digits on a pair of human hands, and 10 toes on a pair of human feet.
Mathematics: The decimal system has 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
Religion: the Ten Commandments in the Abrahamic religions.
Music: There are 12 notes in the chromatic scale.
Astrology: There are 12 zodiac signs, each one representing part of the annual path of the sun's movement across the night sky.
Computing – Microsoft Windows: Twelve successive consumer versions of Windows NT have been released as of December 2021.
Music: Composers Ludwig van Beethoven and Dmitri Shostakovich both completed and numbered 15 string quartets in their lifetimes.
Linguistics: The Finnish language has 15 noun cases.
Mathematics: The hexadecimal system, a common number system used in computer programming, uses 16 digits where the last 6 are typically represented by letters: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.
Computing – Unicode: The minimum possible size of a Unicode block is 16 contiguous code points (i.e., U+abcde0 - U+abcdeF).
Computing – UTF-16/Unicode: There are 17 addressable planes in UTF-16, and, thus, as Unicode is limited to the UTF-16 code space, 17 valid planes in Unicode.
Science fiction: The 23 enigma plays a prominent role in the plot of The Illuminatus! Trilogy by Robert Shea and Robert Anton Wilson.
Mathematics: e^π ≈ 23.140692633, Gelfond's constant.
Music: There is a combined total of 24 major and minor keys, also the number of works in some musical cycles of J. S. Bach, Frédéric Chopin, Alexander Scriabin, and Dmitri Shostakovich.
Alphabetic writing: There are 26 letters in the Latin-derived English alphabet (excluding letters found only in foreign loanwords).
Science fiction: The number 42, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, is the Answer to the Ultimate Question of Life, the Universe, and Everything which is calculated by an enormous supercomputer over a period of 7.5 million years.
Biology: A human cell typically contains 46 chromosomes.
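A few of the constants above, evaluated directly:

```python
import math

print(math.sqrt(2))            # 1.4142... diagonal of a unit square
print((1 + math.sqrt(5)) / 2)  # 1.6180... the golden ratio
print(math.sqrt(3))            # 1.7320... diagonal of a unit cube
print(1 + math.sqrt(2))        # 2.4142... the silver ratio
print(math.e, math.pi)         # 2.7182..., 3.1415...
print(math.e ** math.pi)       # 23.1406... Gelfond's constant
```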
Phonology: There are 47 phonemes in English phonology in Received Pronunciation.
Syllabic writing: There are 49 letters in each of the two kana syllabaries (hiragana and katakana) used to represent Japanese (not counting letters representing sound patterns that have never occurred in Japanese).
Chess: Either player in a chess game can claim a draw if 50 consecutive moves are made by each side without any captures or pawn moves.
Demography: The population of Nassau Island, part of the Cook Islands, was around 78 in 2016.
Syllabic writing: There are 85 letters in the modern version of the Cherokee syllabary.
Music: Typically, there are 88 keys on a grand piano.
Computing – ASCII: There are 95 printable characters in the ASCII character set.
10^2 (100; hundred)
ISO: hecto- (h)
European history: Groupings of 100 homesteads were a common administrative unit in Northern Europe and Great Britain (see Hundred (county division)).
Music: There are 104 numbered symphonies of Franz Josef Haydn.
Religion: 108 is a sacred number in Hinduism.
Chemistry: 118 chemical elements have been discovered or synthesized as of 2016.
Computing – ASCII: There are 128 characters in the ASCII character set, including nonprintable control characters.
Video games: There are 151 Pokémon in the first generation.
Phonology: The Taa language is estimated to have between 130 and 164 distinct phonemes.
Political science: There were 193 member states of the United Nations as of 2011.
Computing: A GIF image (or an 8-bit image) supports a maximum of 256 (2^8) colors.
Computing – Unicode: There are 327 different Unicode blocks as of Unicode 15.0 (2022).
Aviation: 583 people died in the 1977 Tenerife airport disaster, the deadliest accident in the history of civil aviation.
Music: The largest number (626) in the Köchel catalogue of works of Wolfgang Amadeus Mozart.
Demography: Vatican City, the least populous independent country, has an approximate population of 800 as of 2018.
10^3 (1,000; thousand)
ISO: kilo- (k)
Demography: The population of Ascension Island is 1,122.
Music: 1,128: number of known extant works by Johann Sebastian Bach recognized in the Bach-Werke-Verzeichnis as of 2017.
Typesetting: 2,000–3,000 letters on a typical typed page of text.
Mathematics: 2,520 (5×7×8×9 or 2^3×3^2×5×7) is the least common multiple of every positive integer under (and including) 10.
Terrorism: 2,996 persons (including 19 terrorists) died in the terrorist attacks of September 11, 2001.
Biology: the DNA of the simplest viruses has 3,000 base pairs.
Military history: 4,200 (Republic) or 5,200 (Empire) was the standard size of a Roman legion.
Linguistics: Estimates for the linguistic diversity of living human languages or dialects range between 5,000 and 10,000. (SIL Ethnologue in 2009 listed 6,909 known living languages.)
Astronomy – Catalogues: There are 7,840 deep-sky objects in the NGC Catalogue from 1888.
Lexicography: 8,674 unique words in the Hebrew Bible.
10^4 (10,000; ten thousand or a myriad)
Biology: Each neuron in the human brain is estimated to connect to 10,000 others.
Demography: The population of Tuvalu was 10,544 in 2007.
Lexicography: 14,500 unique English words occur in the King James Version of the Bible.
Zoology: There are approximately 17,500 distinct butterfly species known.
Language: There are 20,000–40,000 distinct Chinese characters in more than occasional use.
Biology: Each human being is estimated to have 20,000 coding genes.
Grammar: Each regular verb in Cherokee can have 21,262 inflected forms.
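The 2,520 entry above can be verified in one line (math.lcm accepts multiple arguments in Python 3.9+):

```python
from math import lcm

print(lcm(*range(1, 11)))  # 2520, the least common multiple of 1..10
```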
War: 22,717 Union and Confederate soldiers were killed, wounded, or missing in the Battle of Antietam, the bloodiest single day of battle in American history.
Computing – Unicode: 42,720 characters are encoded in CJK Unified Ideographs Extension B, the most of any single public-use Unicode block as of Unicode 15.0 (2022).
Aviation: More than 44,000 airframes have been built of the Cessna 172, the most-produced aircraft in history.
Computing – Fonts: The maximum possible number of glyphs in a TrueType or OpenType font is 65,535 (2^16 − 1), the largest number representable by the 16-bit unsigned integer used to record the total number of glyphs in the font.
Computing – Unicode: A plane contains 65,536 (2^16) code points; this is also the maximum size of a Unicode block, and the total number of code points available in the obsolete UCS-2 encoding.
Mathematics: 65,537 is the largest known Fermat prime.
Memory: The largest number of decimal places of π that have been recited from memory is 70,030.
10^5 (100,000; one hundred thousand or a lakh)
Demography: The population of Saint Vincent and the Grenadines was 100,982 in 2009.
Biology – Strands of hair on a head: The average human head has about 100,000–150,000 strands of hair.
Literature: approximately 100,000 verses (shlokas) in the Mahabharata.
Computing – Unicode: 149,186 characters (including control characters) encoded in Unicode as of version 15.0 (2022).
Language: 267,000 words in James Joyce's Ulysses.
Computing – Unicode: 293,168 code points assigned to a Unicode block as of Unicode 15.0.
Genocide: 300,000 people killed in the Nanjing Massacre.
Language – English words: The New Oxford Dictionary of English contains about 360,000 definitions for English words.
Mathematics: 360,000 – the approximate number of entries in The On-Line Encyclopedia of Integer Sequences.
Biology – Plants: There are approximately 390,000 distinct plant species known, of which approximately 20% (or 78,000) are at risk of extinction.
Biology – Flowers: There are approximately 400,000 distinct flower species on Earth.
Literature: 564,000 words in War and Peace by Leo Tolstoy.
Literature: 930,000 words in the King James Version of the Bible.
Mathematics: There are 933,120 possible combinations on the Pyraminx.
Computing – Unicode: There are 974,530 publicly-assignable code points (i.e., not surrogates, private-use code points, or noncharacters) in Unicode.
10^6 (1,000,000; 1000^2; long and short scales: one million)
ISO: mega- (M)
Demography: The population of Riga, Latvia was 1,003,949 in 2004, according to Eurostat.
Computing – UTF-8: There are 1,112,064 (2^20 + 2^16 − 2^11) valid UTF-8 sequences (excluding overlong sequences and sequences corresponding to code points used for UTF-16 surrogates or code points beyond U+10FFFF).
Computing – UTF-16/Unicode: There are 1,114,112 (2^20 + 2^16) distinct values encodable in UTF-16, and, thus (as Unicode is currently limited to the UTF-16 code space), 1,114,112 valid code points in Unicode (1,112,064 scalar values and 2,048 surrogates).
Ludology – Number of games: Approximately 1,181,019 video games have been created as of 2019.
Biology – Species: The World Resources Institute claims that approximately 1.4 million species have been named, out of an unknown number of total species (estimates range between 2 and 100 million species). Some scientists give 8.8 million species as a more precise estimate.
Genocide: Approximately 800,000–1,500,000 (1.5 million) Armenians were killed in the Armenian genocide.
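A short sketch of the Fermat primes behind the 65,537 entry above (numbers of the form 2^(2^n) + 1); it assumes the third-party sympy library is available for the primality test.

```python
from sympy import isprime  # assumed dependency

for n in range(6):
    f = 2 ** (2 ** n) + 1
    print(n, f, isprime(f))
# F(0)..F(4) print True; F(5) = 4,294,967,297 = 641 * 6,700,417 is composite.
```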
Linguistics: The number of possible conjugations for each verb in the Archi language is 1,502,839.
Info: The freedb database of CD track listings has around 1,750,000 entries.
Computing – UTF-8: 2,164,864 (2^21 + 2^16 + 2^11 + 2^7) possible one- to four-byte UTF-8 sequences, if the restrictions on overlong sequences, surrogate code points, and code points beyond U+10FFFF are not adhered to. (Note that not all of these correspond to unique code points.)
Mathematics – Playing cards: There are 2,598,960 different 5-card poker hands that can be dealt from a standard 52-card deck.
Mathematics: There are 3,149,280 possible positions for the Skewb.
Mathematics – Rubik's Cube: 3,674,160 is the number of combinations for the Pocket Cube (2×2×2 Rubik's Cube).
Geography/Computing – Geographic places: The NIMA GEOnet Names Server contains approximately 3.88 million named geographic features outside the United States, with 5.34 million names. The USGS Geographic Names Information System claims to have almost 2 million physical and cultural geographic features within the United States.
Computing – Supercomputer hardware: 4,981,760 processor cores in the final configuration of the Tianhe-2 supercomputer.
Genocide: Approximately 5,100,000–6,200,000 Jews were killed in the Holocaust.
Info – Web sites: The English Wikipedia contains more than six million articles in the English language.
10^7 (10,000,000; a crore; long and short scales: ten million)
Demography: The population of Haiti was 10,085,214 in 2010.
Literature: 11,206,310 words in Devta by Mohiuddin Nawab, the longest continuously published story known in the history of literature.
Genocide: An estimated 12 million persons shipped from Africa to the New World in the Atlantic slave trade.
Mathematics: 12,988,816 is the number of domino tilings of an 8×8 checkerboard.
Genocide/Famine: 15 million is an estimated lower bound for the death toll of the 1959–1961 Great Chinese Famine, the deadliest known famine in human history.
War: 15 to 22 million casualties estimated as a result of World War I.
Computing: 16,777,216 different colors can be generated using the hex code system in HTML (note that the trichromatic color vision of the human eye can only distinguish between an estimated 1,000,000 different colors).
Science fiction: In Isaac Asimov's Galactic Empire, in 22,500 CE, there are 25,000,000 different inhabited planets in the Galactic Empire, all inhabited by humans in Asimov's "human galaxy" scenario.
Genocide/Famine: 55 million is an estimated upper bound for the death toll of the Great Chinese Famine.
Literature: Wikipedia contains more than 55 million articles in over 300 languages.
War: 70 to 85 million casualties estimated as a result of World War II.
Mathematics: 73,939,133 is the largest right-truncatable prime.
10^8 (100,000,000; long and short scales: one hundred million)
Demography: The population of the Philippines was 100,981,437 in 2015.
Internet – YouTube: The number of YouTube channels is estimated to be 113.9 million.
Info – Books: The British Library holds over 150 million items. The Library of Congress holds approximately 148 million items. See The Gutenberg Galaxy.
Video gaming: Approximately 200 million copies of Minecraft (the most-sold video game in history) have been sold.
Mathematics: More than 215,000,000 mathematical constants are collected on Plouffe's Inverter.
Mathematics: 275,305,224 is the number of 5×5 normal magic squares, not counting rotations and reflections. This result was found in 1973 by Richard Schroeppel.
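Two of the counts above, computed directly: 5-card poker hands, and Pocket Cube positions (fixing one corner leaves 7! arrangements of the remaining corners times 3^6 independent twists).

```python
from math import comb, factorial

print(comb(52, 5))          # 2,598,960 distinct 5-card poker hands
print(factorial(7) * 3**6)  # 3,674,160 Pocket Cube (2x2x2) positions
```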
Demography: The population of the United States was 328,239,523 in 2019.
Mathematics: 358,833,097 stellations of the rhombic triacontahedron.
Info – Web sites: The Netcraft web survey estimates that there are 525,998,433 (526 million) distinct websites.
Astronomy – Cataloged stars: The Guide Star Catalog II has entries on 998,402,801 distinct astronomical objects.
10^9 (1,000,000,000; 1000^3; short scale: one billion; long scale: one thousand million, or one milliard)
ISO: giga- (G)
Transportation – Cars: There are approximately 1.4 billion cars in the world, corresponding to around 18% of the human population.
Demographics – China: 1,409,670,000 – approximate population of the People's Republic of China in 2023.
Demographics – India: 1,428,627,663 – approximate population of India in 2023.
Demographics – Africa: The population of Africa reached 1,430,000,000 sometime in 2023.
Internet – Google: There are more than 1,500,000,000 active Gmail users globally.
Internet: Approximately 1,500,000,000 active users were on Facebook as of October 2015.
Computing – Computational limit of a 32-bit CPU: 2,147,483,647 is equal to 2^31 − 1, and as such is the largest number which can fit into a signed (two's complement) 32-bit integer on a computer.
Computing – UTF-8: 2,147,483,648 (2^31) possible code points (U+0000 - U+7FFFFFFF) in the pre-2003 version of UTF-8 (including five- and six-byte sequences), before the UTF-8 code space was limited to the much smaller set of values encodable in UTF-16.
Biology – base pairs in the genome: approximately 3.3 × 10^9 base pairs in the human genome.
Linguistics: 3,400,000,000 – the total number of speakers of Indo-European languages, of which 2,400,000,000 are native speakers; the other 1,000,000,000 speak Indo-European languages as a second language.
Mathematics and computing: 4,294,967,295 (2^32 − 1), the product of the five known Fermat primes and the maximum value for a 32-bit unsigned integer in computing.
Computing – IPv4: 4,294,967,296 (2^32) possible unique IP addresses.
Computing: 4,294,967,296 – the number of bytes in 4 gibibytes; in computation, 32-bit computers can directly access 2^32 units (bytes) of address space, which leads directly to the 4-gigabyte limit on main memory.
Mathematics: 4,294,967,297 is a Fermat number and semiprime. It is the smallest number of the form 2^(2^n) + 1 which is not a prime number.
Demographics – world population: 8,019,876,189 – estimated population for the world as of 1 January 2024.
10^10 (10,000,000,000; short scale: ten billion; long scale: ten thousand million, or ten milliard)
Biology – bacteria in the human body: There are roughly 10^10 bacteria in the human mouth.
Computing – web pages: approximately 5.6 × 10^10 web pages indexed by Google as of 2010.
10^11 (100,000,000,000; short scale: one hundred billion; long scale: hundred thousand million, or hundred milliard)
Astronomy: There are an estimated 100 billion planets located in the Milky Way.
Biology – Neurons in the brain: approximately (1 ± 0.2) × 10^11 neurons in the human brain.
Medicine: The United States Food and Drug Administration requires a minimum of 3 × 10^11 (300 billion) platelets per apheresis unit.
Paleodemography – Number of humans that have ever lived: approximately (1.2 ± 0.3) × 10^11 live births of anatomically modern humans since the beginning of the Upper Paleolithic.
Astronomy – stars in our galaxy: of the order of 10^11 stars in the Milky Way galaxy.
Mathematics: 608,981,813,029 is the smallest number for which there are more primes of the form 3k + 1 than of the form 3k + 2 up to the number.
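Several computing entries above are simple powers of two:

```python
print(2**24)      # 16,777,216 - 24-bit HTML color codes
print(2**31 - 1)  # 2,147,483,647 - largest signed 32-bit integer
print(2**32)      # 4,294,967,296 - size of the IPv4 address space
print(2**32 + 1)  # 4,294,967,297 = 641 * 6,700,417, the Fermat semiprime
```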
10^12 (1,000,000,000,000; 1000^4; short scale: one trillion; long scale: one billion)
ISO: tera- (T)
Astronomy: Andromeda Galaxy, which is part of the same Local Group as our galaxy, contains about 10^12 stars.
Biology – Bacteria on the human body: The surface of the human body houses roughly 10^12 bacteria.
Astronomy – Galaxies: A 2016 estimate says there are 2 × 10^12 galaxies in the observable universe.
Biology: An estimate says there were 3.04 × 10^12 trees on Earth in 2015.
Mathematics: 7,625,597,484,987 – a number that often appears when dealing with powers of 3. It can be expressed as 27^9, 3^27, and 3^3^3; in Knuth's up-arrow notation it can be expressed as 3↑↑3.
Astronomy: A light-year, as defined by the International Astronomical Union (IAU), is the distance that light travels in a vacuum in one year, which is equivalent to about 9.46 trillion kilometers (9.46 × 10^12 km).
Mathematics: 10^13 – the approximate number of known non-trivial zeros of the Riemann zeta function.
Biology – Blood cells in the human body: The average human body is estimated to have (2.5 ± 0.5) × 10^13 red blood cells.
Mathematics – Known digits of π: As of 2019, the number of known digits of π is 31,415,926,535,897 (the integer part of π × 10^13).
Biology – approximately 10^14 synapses in the human brain.
Biology – Cells in the human body: The human body consists of roughly 10^14 cells, of which only 10^13 are human. The remaining 90% non-human cells (though much smaller and constituting much less mass) are bacteria, which mostly reside in the gastrointestinal tract, although the skin is also covered in bacteria.
Mathematics: The first case of exactly 18 prime numbers between multiples of 100 is 122,853,771,370,900 + n, for n = 1, 3, 7, 19, 21, 27, 31, 33, 37, 49, 51, 61, 69, 73, 87, 91, 97, 99.
Cryptography: 150,738,274,937,250 configurations of the plug-board of the Enigma machine used by the Germans in World War II to encode and decode messages by cipher.
Computing – MAC-48: 281,474,976,710,656 (2^48) possible unique physical addresses.
Mathematics: 953,467,954,114,363 is the largest known Motzkin prime.
10^15 (1,000,000,000,000,000; 1000^5; short scale: one quadrillion; long scale: one thousand billion, or one billiard)
ISO: peta- (P)
Biology – Insects: 1,000,000,000,000,000 to 10,000,000,000,000,000 (10^15 to 10^16) – the estimated total number of ants on Earth alive at any one time (their biomass is approximately equal to the total biomass of the human species).
Computing: 9,007,199,254,740,992 (2^53) – number until which all integer values can exactly be represented in IEEE double precision floating-point format.
Mathematics: 48,988,659,276,962,496 is the fifth taxicab number.
Science fiction: In Isaac Asimov's Galactic Empire, in what we call 22,500 CE, there are 25,000,000 different inhabited planets in the Galactic Empire, all inhabited by humans in Asimov's "human galaxy" scenario, each with an average population of 2,000,000,000, thus yielding a total Galactic Empire population of approximately 50,000,000,000,000,000.
Cryptography: There are 2^56 = 72,057,594,037,927,936 different possible keys in the obsolete 56-bit DES symmetric cipher.
Science fiction: There are approximately 100,000,000,000,000,000 (10^17) sentient beings in the Star Wars galaxy.
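The 2^53 entry above is easy to demonstrate: beyond that point, IEEE double precision can no longer represent every integer exactly.

```python
n = 2**53  # 9,007,199,254,740,992
print(float(n) == n)          # True
print(float(n + 1) == n + 1)  # False - n + 1 rounds back to n as a double
```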
10^18 (1,000,000,000,000,000,000; 1000^6; short scale: one quintillion; long scale: one trillion)
ISO: exa- (E)
Mathematics: The first case of exactly 19 prime numbers between multiples of 100 is 1,468,867,005,116,420,800 + n, for n = 1, 3, 7, 9, 21, 31, 37, 39, 43, 49, 51, 63, 67, 69, 73, 79, 81, 87, 93.
Mathematics: 2^61 − 1 = 2,305,843,009,213,693,951 (≈2.31 × 10^18) is the ninth Mersenne prime. It was determined to be prime in 1883 by Ivan Mikheevich Pervushin. This number is sometimes called Pervushin's number.
Mathematics: Goldbach's conjecture has been verified for all n ≤ 4 × 10^18 by a project which computed all prime numbers up to that limit.
Computing – Manufacturing: An estimated 6 × 10^18 transistors were produced worldwide in 2008.
Computing – Computational limit of a 64-bit CPU: 9,223,372,036,854,775,807 (about 9.22 × 10^18) is equal to 2^63 − 1, and as such is the largest number which can fit into a signed (two's complement) 64-bit integer on a computer.
Mathematics – NCAA basketball tournament: There are 9,223,372,036,854,775,808 (2^63) possible ways to enter the bracket.
Mathematics – Bases: 9,439,829,801,208,141,318 (≈9.44 × 10^18) is the 10th and (by conjecture) largest number with more than one digit that can be written from base 2 to base 18 using only the digits 0 to 9, meaning the digits for 10 to 17 are not needed in bases greater than 10.
Biology – Insects: It has been estimated that the insect population of the Earth is about 10^19.
Mathematics – Answer to the wheat and chessboard problem: When doubling the grains of wheat on each successive square of a chessboard, beginning with one grain of wheat on the first square, the final number of grains of wheat on all 64 squares of the chessboard when added up is 2^64 − 1 = 18,446,744,073,709,551,615 (≈1.84 × 10^19).
Mathematics – Legends: The Tower of Brahma legend tells about a Hindu temple containing a large room with three posts, on one of which are 64 golden discs, and the object of the mathematical game is for the Brahmins in this temple to move all of the discs to another pole so that they are in the same order, never placing a larger disc above a smaller disc, moving only one at a time. Using the simplest algorithm for moving the disks, it would take 2^64 − 1 = 18,446,744,073,709,551,615 (≈1.84 × 10^19) turns to complete the task (the same number as the wheat and chessboard problem above). (Ivan Moscovich, 1000 playthinks: puzzles, paradoxes, illusions & games, Workman Pub., 2001.)
Computing – IPv6: 18,446,744,073,709,551,616 (2^64; ≈1.84 × 10^19) possible unique /64 subnetworks.
Mathematics – Rubik's Cube: There are 43,252,003,274,489,856,000 (≈4.33 × 10^19) different positions of a 3×3×3 Rubik's Cube.
Password strength: Usage of the 95-character set found on standard computer keyboards for a 10-character password yields a computationally intractable 59,873,693,923,837,890,625 (95^10, approximately 5.99 × 10^19) permutations.
Economics: Hyperinflation in Zimbabwe was estimated in February 2009 by some economists at 10 sextillion percent, or a factor of 10^20.
10^21 (1,000,000,000,000,000,000,000; 1000^7; short scale: one sextillion; long scale: one thousand trillion, or one trilliard)
ISO: zetta- (Z)
Geo – Grains of sand: All the world's beaches combined have been estimated to hold roughly 10^21 grains of sand.
Computing – Manufacturing: Intel predicted that there would be 1.2 × 10^21 transistors in the world by 2015, and Forbes estimated that 2.9 × 10^21 transistors had been shipped up to 2014.
Mathematics – Sudoku: There are 6,670,903,752,021,072,936,960 (≈6.67 × 10^21) 9×9 sudoku grids.
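The wheat-and-chessboard total above, summed explicitly:

```python
total = sum(2**i for i in range(64))  # one grain doubled across 64 squares
print(total)                          # 18,446,744,073,709,551,615
print(total == 2**64 - 1)             # True
```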
Mathematics: The first case of exactly 20 prime numbers between multiples of 100 is 20,386,095,164,137,273,086,400 + n, for n = 1, 3, 7, 9, 13, 19, 21, 31, 33, 37, 49, 57, 63, 73, 79, 87, 91, 93, 97, 99.
Astronomy – Stars: 70 sextillion = 7 × 10^22, the estimated number of stars within range of telescopes (as of 2003).
Astronomy – Stars: in the range of 10^23 to 10^24 stars in the observable universe.
Mathematics: 146,361,946,186,458,562,560,000 (≈1.46 × 10^23) is the fifth unitary perfect number.
Mathematics: 357,686,312,646,216,567,629,137 (≈3.58 × 10^23) is the largest left-truncatable prime.
Chemistry – Physics: The Avogadro constant (6.02214076 × 10^23 mol^−1) is the number of constituents (e.g. atoms or molecules) in one mole of a substance, defined for convenience as expressing the order of magnitude separating the molecular from the macroscopic scale.
10^24 (1000^8; short scale: one septillion; long scale: one quadrillion)
ISO: yotta- (Y)
Mathematics: 2,833,419,889,721,787,128,217,599 (≈2.83 × 10^24) is the fifth Woodall prime.
Mathematics: 3,608,528,850,368,400,786,036,725 (≈3.61 × 10^24) is the largest polydivisible number.
Mathematics: 2^86 = 77,371,252,455,336,267,181,195,264 is the largest known power of two not containing the digit '0' in its decimal representation.
10^27 (1000^9; short scale: one octillion; long scale: one thousand quadrillion, or one quadrilliard)
ISO: ronna- (R)
Biology – Atoms in the human body: the average human body contains roughly 7 × 10^27 atoms.
Mathematics – Poker: the number of unique combinations of hands and shared cards in a 10-player game of Texas hold 'em is approximately 2.117 × 10^28.
10^30 (1000^10; short scale: one nonillion; long scale: one quintillion)
ISO: quetta- (Q)
Biology – Bacterial cells on Earth: The number of bacterial cells on Earth is estimated at 5,000,000,000,000,000,000,000,000,000,000, or 5 × 10^30.
Mathematics: 5,000,000,000,000,000,000,000,000,000,027 is the largest quasi-minimal prime.
Mathematics: The number of partitions of 1000 is 24,061,467,864,032,622,473,692,149,727,991.
Mathematics: 3^68 = 278,128,389,443,693,511,257,285,776,231,761 is the largest known power of three not containing the digit '0' in its decimal representation.
Mathematics: 2^108 = 324,518,553,658,426,726,783,156,020,576,256 is the largest known power of two not containing the digit '9' in its decimal representation.
Mathematics: 7^39 = 909,543,680,129,861,140,820,205,019,889,143 is the largest known power of 7 not containing the digit '7' in its decimal representation.
10^33 (1000^11; short scale: one decillion; long scale: one thousand quintillion, or one quintilliard)
Mathematics – Alexander's Star: There are 72,431,714,252,715,638,411,621,302,272,000,000 (about 7.24 × 10^34) different positions of Alexander's Star.
10^36 (1000^12; short scale: one undecillion; long scale: one sextillion)
Mathematics: 2^(2^7−1) − 1 = 170,141,183,460,469,231,731,687,303,715,884,105,727 (≈1.70 × 10^38) is the largest known double Mersenne prime and the 12th Mersenne prime.
Computing: 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 (≈3.40282367 × 10^38), the theoretical maximum number of Internet addresses that can be allocated under the IPv6 addressing system, slightly more than the largest value that can be represented by a single-precision IEEE floating-point value, and the total number of different Universally Unique Identifiers (UUIDs) that can be generated.
Cryptography: 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 (≈3.40282367 × 10^38), the total number of different possible keys in the AES 128-bit key space (symmetric cipher).
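A sketch (sympy assumed available) confirming the left-truncatable prime above: the number stays prime as leading digits are stripped away.

```python
from sympy import isprime  # assumed dependency

s = "357686312646216567629137"
print(all(isprime(int(s[i:])) for i in range(len(s))))  # True
```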
10^39 (1000^13; short scale: one duodecillion; long scale: one thousand sextillion, or one sextilliard)
Cosmology: The Eddington–Dirac number is roughly 10^40.
Mathematics: 97# × 2^5 × 3^3 × 5 × 7 = 69,720,375,229,712,477,164,533,808,935,312,303,556,800 (≈6.97 × 10^40) is the least common multiple of every integer from 1 to 100.
10^42 to 10^100 (1000^14; short scale: one tredecillion; long scale: one septillion)
Mathematics: 141 × 2^141 + 1 = 393,050,634,124,102,232,869,567,034,555,427,371,542,904,833 (≈3.93 × 10^44) is the second Cullen prime.
Mathematics: There are 7,401,196,841,564,901,869,874,093,974,498,574,336,000,000,000 (≈7.4 × 10^45) possible permutations for the Rubik's Revenge (4×4×4 Rubik's Cube).
Chess: 4.52 × 10^46 is a proven upper bound for the number of chess positions allowed according to the rules of chess.
Geo: 1.33 × 10^50 is the estimated number of atoms on Earth.
Mathematics: 2^168 = 374,144,419,156,711,147,060,143,317,175,368,453,031,918,731,001,856 is the largest known power of two which is not pandigital: there is no digit '2' in its decimal representation.
Mathematics: 3^106 = 375,710,212,613,636,260,325,580,163,599,137,907,799,836,383,538,729 is the largest known power of three which is not pandigital: there is no digit '4'.
Mathematics: 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 (≈8.08 × 10^53) is the order of the monster group.
Cryptography: 2^192 = 6,277,101,735,386,680,763,835,789,423,207,666,416,102,355,444,464,034,512,896 (≈6.27710174 × 10^57), the total number of different possible keys in the Advanced Encryption Standard (AES) 192-bit key space (symmetric cipher).
Cosmology: 8 × 10^60 is roughly the number of Planck time intervals since the universe is theorised to have been created in the Big Bang 13.799 ± 0.021 billion years ago.
Cosmology: 10^63 is Archimedes' estimate in The Sand Reckoner of the total number of grains of sand that could fit into the entire cosmos, the diameter of which he estimated in stadia to be what we call 2 light-years.
Mathematics – Cards: 52! = 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 (≈8.07 × 10^67) – the number of ways to order the cards in a 52-card deck.
Mathematics: There are ≈1.01 × 10^68 possible combinations for the Megaminx.
Mathematics: 1,808,422,353,177,349,564,546,512,035,512,530,001,279,481,259,854,248,860,454,348,989,451,026,887 (≈1.81 × 10^72) – the largest known prime factor found by Lenstra elliptic-curve factorization (LECF).
Mathematics: There are 282,870,942,277,741,856,536,180,333,107,150,328,293,127,731,985,672,134,721,536,000,000,000,000,000 (≈2.83 × 10^74) possible permutations for the Professor's Cube (5×5×5 Rubik's Cube).
Cryptography: 2^256 = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 (≈1.15792089 × 10^77), the total number of different possible keys in the Advanced Encryption Standard (AES) 256-bit key space (symmetric cipher).
Cosmology: Various sources estimate the total number of fundamental particles in the observable universe to be within the range of 10^80 to 10^85 (WMAP – Content of the Universe, map.gsfc.nasa.gov, 2010-04-16, retrieved 2011-05-01). However, these estimates are generally regarded as guesswork. (Compare the Eddington number, the estimated total number of protons in the observable universe.)
Computing: 9.999999 × 10^96 is equal to the largest value that can be represented in the IEEE decimal32 floating-point format.
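Two exact values above, computed directly in arbitrary-precision integer arithmetic:

```python
from math import factorial, lcm

print(factorial(52))        # ~8.07e67 orderings of a 52-card deck
print(lcm(*range(1, 101)))  # 69,720,375,229,712,477,164,533,808,935,312,303,556,800
```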
Computing: 69! (roughly 1.7112245 × 10^98) is the largest factorial value that can be represented on a calculator with two digits for powers of ten without overflow.
Mathematics: One googol, 10^100, 1 followed by one hundred zeros, or 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
10^100 (one googol) to 10^1000 (short scale: ten duotrigintillion; long scale: ten thousand sexdecillion, or ten sexdecillard)
Mathematics: There are 157 152 858 401 024 063 281 013 959 519 483 771 508 510 790 313 968 742 344 694 684 829 502 629 887 168 573 442 107 637 760 000 000 000 000 000 000 000 000 (≈1.57 × 10^116) distinguishable permutations of the V-Cube 6 (6×6×6 Rubik's Cube).
Chess: Shannon number, 10^120, a lower bound of the game-tree complexity of chess.
Physics: 10^120, discrepancy between the observed value of the cosmological constant and a naive estimate based on Quantum Field Theory and the Planck energy.
Physics: 8 × 10^120, ratio of the mass-energy in the observable universe to the energy of a photon with a wavelength the size of the observable universe.
Mathematics: 19 568 584 333 460 072 587 245 340 037 736 278 982 017 213 829 337 604 336 734 362 294 738 647 777 395 483 196 097 971 852 999 259 921 329 236 506 842 360 439 300 (≈1.96 × 10^121) is the period of Fermat pseudoprimes.
History – Religion: Asaṃkhyeya is a Buddhist name for the number 10^140. It is listed in the Avatamsaka Sutra and metaphorically means "innumerable" in the Sanskrit language of ancient India.
Xiangqi: 10^150, an estimation of the game-tree complexity of xiangqi.
Mathematics: There are 19 500 551 183 731 307 835 329 126 754 019 748 794 904 992 692 043 434 567 152 132 912 323 232 706 135 469 180 065 278 712 755 853 360 682 328 551 719 137 311 299 993 600 000 000 000 000 000 000 000 000 000 000 000 (≈1.95 × 10^160) distinguishable permutations of the V-Cube 7 (7×7×7 Rubik's Cube).
Go: There are 208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935 (≈2.08 × 10^170) legal positions in the game of Go. See Go and mathematics.
Economics: The annualized rate of the hyperinflation in Hungary in 1946 was estimated to be 2.9 × 10^177 percent. It was the most extreme case of hyperinflation ever recorded.
Board games: 3.457, number of ways to arrange the tiles in English Scrabble on a standard 15-by-15 Scrabble board.
Physics: 10^186, approximate number of Planck volumes in the observable universe.
Shogi: 10^226, an estimation of the game-tree complexity of shogi.
Physics: 7 × 10^245, approximate spacetime volume of the history of the observable universe in Planck units.
Computing: 1.7976931348623157 × 10^308 is approximately equal to the largest value that can be represented in the IEEE double precision floating-point format.
Computing: (10 − 10^−15) × 10^384 is equal to the largest value that can be represented in the IEEE decimal64 floating-point format.
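The 69! calculator-limit entry above: 69! still fits under a googol, while 70! does not.

```python
from math import factorial

print(len(str(factorial(69))))  # 99 digits (~1.71e98)
print(len(str(factorial(70))))  # 101 digits - past 10**100
```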
Mathematics: 997# × 31# × 7 × 5^2 × 3^4 × 2^7 = 7 128 865 274 665 093 053 166 384 155 714 272 920 668 358 861 885 893 040 452 001 991 154 324 087 581 111 499 476 444 151 913 871 586 911 717 817 019 575 256 512 980 264 067 621 009 251 465 871 004 305 131 072 686 268 143 200 196 609 974 862 745 937 188 343 705 015 434 452 523 739 745 298 963 145 674 982 128 236 956 232 823 794 011 068 809 262 317 708 861 979 540 791 247 754 558 049 326 475 737 829 923 352 751 796 735 248 042 463 638 051 137 034 331 214 781 746 850 878 453 485 678 021 888 075 373 249 921 995 672 056 932 029 099 390 891 687 487 672 697 950 931 603 520 000 (≈7.13 × 10^432) is the least common multiple of every integer from 1 to 1000.
10^1000 to 10^(10^100) (one googolplex)
Mathematics: There are approximately 1.869 × 10^4,099 distinguishable permutations of the world's largest Rubik's Cube (33×33×33).
Computing: 1.189 731 495 357 231 765 05 × 10^4,932 is approximately equal to the largest value that can be represented in the IEEE 80-bit x86 extended precision floating-point format.
Computing: 1.189 731 495 357 231 765 085 759 326 628 007 0 × 10^4,932 is approximately equal to the largest value that can be represented in the IEEE quadruple-precision floating-point format.
Computing: (10 − 10^−33) × 10^6,144 is equal to the largest value that can be represented in the IEEE decimal128 floating-point format.
Computing: 10^10,000 − 1 is equal to the largest value that can be represented in Windows Phone's calculator.
Mathematics: 104,824^5 + 5^104,824 is the largest proven Leyland prime, with 73,269 digits.
Mathematics: approximately 7.76 × 10^206,544 cattle in the smallest herd which satisfies the conditions of Archimedes's cattle problem.
Mathematics: 2,618,163,402,417 × 2^1,290,000 − 1 is a 388,342-digit Sophie Germain prime; the largest known.
Mathematics: 2,996,863,034,895 × 2^1,290,000 ± 1 are 388,342-digit twin primes; the largest known.
Mathematics: 3,267,113# − 1 is a 1,418,398-digit primorial prime; the largest known.
Mathematics – Literature: Jorge Luis Borges' Library of Babel contains at least 25^1,312,000 ≈ 1.956 × 10^1,834,097 books (this is a lower bound).
Mathematics: 10^1,888,529 − 10^944,264 − 1 is a 1,888,529-digit palindromic prime, the largest known.
Mathematics: 4 × 72^1,119,849 − 1 is the smallest prime of the form 4 × 72^n − 1.
Mathematics: 422,429! + 1 is a 2,193,027-digit factorial prime; the largest known.
Mathematics: (2^15,135,397 + 1)/3 is a 4,556,209-digit Wagstaff probable prime, the largest known.
Mathematics: 1,963,736^1,048,576 + 1 is a 6,598,776-digit Generalized Fermat prime, the largest known.
Mathematics: (10^8,177,207 − 1)/9 is an 8,177,207-digit probable prime, the largest known.
Mathematics: 10,223 × 2^31,172,165 + 1 is a 9,383,761-digit Proth prime, the largest known Proth prime and non-Mersenne prime.
Mathematics: 2^82,589,933 − 1 is a 24,862,048-digit Mersenne prime; the largest known prime of any kind.
Mathematics: 2^82,589,932 × (2^82,589,933 − 1) is a 49,724,095-digit perfect number, the largest known as of 2020.
Mathematics – History: 10^(8 × 10^16), the largest named number in Archimedes' Sand Reckoner.
Mathematics: 10^googol (10^(10^100)), a googolplex. A number 1 followed by 1 googol zeros. Carl Sagan has estimated that 1 googolplex, fully written out, would not fit in the observable universe because of its size, while also noting that one could also write the number as 10^(10^100).
Larger than 10^(10^100) (one googolplex; 10^googol; short scale: googolplex; long scale: googolplex)
Go: There are at least 10^(10^108) legal games of Go. See game-tree complexity.
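A check of the least-common-multiple entry above (this recomputes lcm(1..1000) exactly; it should start 7128865... and run to 433 digits, matching ≈7.13 × 10^432):

```python
from math import lcm

n = lcm(*range(1, 1001))
print(str(n)[:7], len(str(n)))  # 7128865 433
```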
Mathematics – Literature: The number of different ways in which the books in Jorge Luis Borges' Library of Babel can be arranged is the factorial of the number of books in the Library of Babel, approximately (1.956 × 10^1,834,097)!.
Cosmology: In chaotic inflation theory, proposed by physicist Andrei Linde, our universe is one of many other universes with different physical constants that originated as part of our local section of the multiverse, owing to a vacuum that had not decayed to its ground state. According to Linde and Vanchurin, the total number of these universes is about 10^(10^(10^7)).
Mathematics: 10^(10^(10^34)), order of magnitude of an upper bound that occurred in a proof of Skewes (this was later estimated to be closer to 1.397 × 10^316).
Cosmology: The estimated number of Planck time units for quantum fluctuations and tunnelling to generate a new Big Bang is estimated to be 10^(10^(10^56)).
Mathematics: 10^(10^(10^100)), a number in the googol family called a googolplexplex, googolplexian, or googolduplex. 1 followed by a googolplex zeros, or 10^googolplex.
Cosmology: The uppermost estimate to the size of the entire universe is approximately 10^(10^(10^122)) times that of the observable universe.
Mathematics: 10^(10^(10^964)), order of magnitude of another upper bound in a proof of Skewes.
Mathematics: Steinhaus' mega lies between 10[4]257 and 10[4]258 (where a[n]b is hyperoperation).
Mathematics: Moser's number, "2 in a mega-gon" in Steinhaus–Moser notation, is approximately equal to 10[10[4]257]10; the last four digits are ...1056.
Mathematics: Graham's number, the last ten digits of which are ...2464195387. Arises as an upper bound solution to a problem in Ramsey theory. Representation in powers of 10 would be impractical (the number of 10s in the power tower would be virtually indistinguishable from the number itself).
Mathematics: TREE(3): appears in relation to a theorem on trees in graph theory. Representation of the number is difficult, but one weak lower bound is A^(A(187196))(1), where A(n) is a version of the Ackermann function.
Mathematics: SSCG(3): appears in relation to the Robertson–Seymour theorem. Known to be greater than TREE(3).
Mathematics: Transcendental integers: a set of numbers defined in 2000 by Harvey Friedman, appears in proof theory.
Mathematics: Rayo's number is a large number named after Agustín Rayo which has been claimed to be the largest number to have ever been named. It was originally defined in a "big number duel" at MIT on 26 January 2007. See also Conway chained arrow notation Encyclopedic size comparisons on Wikipedia Fast-growing hierarchy Indian numbering system Infinity Large numbers List of numbers Mathematical constant Names of large numbers Names of small numbers Power of 10 References External links Seth Lloyd's paper Computational capacity of the universe provides a number of interesting dimensionless quantities. Notable properties of specific numbers Numbers
Orders of magnitude (numbers)
Mathematics
13,006
6,477,222
https://en.wikipedia.org/wiki/Deductive%20language
A deductive language is a computer programming language in which the program is a collection of predicates ('facts') and rules that connect them. Such a language is used to create knowledge-based systems or expert systems which can deduce answers to problem sets by applying the rules to the facts they have been given. An example of a deductive language is Prolog, or its database-query cousin, Datalog. History As the name implies, deductive languages are rooted in the principles of deductive reasoning; making inferences based upon current knowledge. The first recommendation to use a clausal form of logic for representing computer programs was made by Cordell Green (1969) at Stanford Research Institute (now SRI International). This idea can also be linked back to the battle between procedural and declarative information representation in early artificial intelligence systems. Deductive languages and their use in logic programming can also be dated to the same year, when Foster and Elcock introduced Absys, the first deductive/logical programming language. Shortly after, the first Prolog system was introduced in 1972 by Colmerauer through collaboration with Robert Kowalski. Components The components of a deductive language are a system of formal logic and a knowledge base upon which the logic is applied. Formal Logic Formal logic is the study of inference with regard to formal content. The distinguishing feature between formal and informal logic is that in the former case, the logical rule applied to the content is not specific to a situation; the laws hold regardless of a change in context. Although first-order logic is described in the example below to demonstrate the uses of a deductive language, no formal system is mandated and the use of a specific system is defined within the language rules or grammar. As input, a predicate takes any object(s) in the domain of interest and outputs one of two Boolean values: true or false. For example, consider the sentences "Barack Obama is the 44th president" and "If it rains today, I will bring an umbrella". The first is a statement with an associated truth value. The second is a conditional statement relying on the value of some other statement. Either of these sentences can be broken down into predicates which can be compared and form the knowledge base of a deductive language. Moreover, variables such as 'Barack Obama' or 'president' can be quantified over. For example, take 'Barack Obama' as variable 'x' in the sentence "There exists an 'x' such that if 'x' is the president, then 'x' is the commander in chief"; this is an example of the existential quantifier in first-order logic. Take 'president' to be the variable 'y' in the sentence "For every 'y', 'y' is the leader of their nation"; this is an example of the universal quantifier. Knowledge Base A collection of 'facts' or predicates and variables forms the knowledge base of a deductive language. Depending on the language, the order of declaration of these predicates within the knowledge base may or may not influence the result of applying logical rules. Upon application of certain 'rules' or inferences, new predicates may be added to the knowledge base. As new facts are established or added, they form the basis for new inferences. As the core of early expert systems, artificial intelligence systems which can make decisions like an expert human, knowledge bases provided more information than databases. They contained structured data, with classes, subclasses, and instances.
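The fact-and-rule mechanism described above can be illustrated with a minimal forward-chaining sketch in Python (the predicate names are invented for illustration; real deductive languages such as Prolog express this far more directly):

```python
# Knowledge base: ground facts represented as tuples.
facts = {("president", "Obama"), ("rains_today",)}

# Rules: if the premise fact holds, the conclusion fact may be deduced.
rules = [
    (("president", "Obama"), ("commander_in_chief", "Obama")),
    (("rains_today",), ("bring_umbrella",)),
]

# Apply rules repeatedly until no new facts appear (a fixed point).
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("commander_in_chief", "Obama") in facts)  # True - deduced, not asserted
```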
Prolog Prolog is an example of a deductive, declarative language that applies first-order logic to a knowledge base. To run a program in Prolog, a query is posed, and based upon the inference engine and the specific facts in the knowledge base, a result is returned. The result can be anything appropriate, from a new relation or predicate to a literal such as a Boolean (true/false), depending on the engine and type system. References J.M. Foster and E.W. Elcock. ABSYS 1: An Incremental Compiler for Assertions: an Introduction, Machine Intelligence 4, Edinburgh U Press, 1969, pp. 423–429. Cordell Green. Application of Theorem Proving to Problem Solving, IJCAI 1969. Cordell Green, Alumnus of SRI International's Artificial Intelligence Center, retrieved 2014-12-09. Robert Kowalski and Donald Kuehner, Linear Resolution with Selection Function, Artificial Intelligence, Vol. 2, 1971, pp. 227–260. Robert Kowalski, Predicate Logic as a Programming Language, Memo 70, Department of Artificial Intelligence, Edinburgh University, 1973. Programming languages Computer programming Databases
Deductive language
Technology,Engineering
966
6,899,692
https://en.wikipedia.org/wiki/Tachytrope
A tachytrope is a curve in which the law of the velocity is given. It was first used by American mathematician Benjamin Peirce in A System of Analytic Mechanics, first published in 1855. References Sources Velocity
Tachytrope
Physics,Mathematics
45
15,981,895
https://en.wikipedia.org/wiki/Small%20activating%20RNA
Small activating RNAs (saRNAs) are small double-stranded RNAs (dsRNAs) that target gene promoters to induce transcriptional gene activation in a process known as RNA activation (RNAa). Small dsRNAs, such as small interfering RNAs (siRNAs) and microRNAs (miRNAs), are known to trigger an evolutionarily conserved mechanism known as RNA interference (RNAi). RNAi invariably leads to gene silencing, whether by remodeling chromatin to suppress transcription, degrading complementary mRNA, or blocking protein translation. It was later found that small dsRNAs can also activate transcription, and such molecules were designated saRNAs. By targeting selected sequences in gene promoters, saRNAs induce target gene expression at the transcriptional/epigenetic level. saRNAs are typically 21 nucleotides in length with a 2-nucleotide overhang at the 3' end of each strand, the same structure as a typical siRNA. To identify an saRNA that can activate a gene of interest, several saRNAs need to be designed within a 1- to 2-kbp promoter region by following a set of rules and tested in cultured cells. In some reports, saRNAs are designed in such a way as to target non-coding transcripts that overlap the promoter sequence of a protein-coding gene. Both chemically synthesized saRNAs and saRNAs expressed as short hairpin RNA (shRNA) have been used in in vitro and in vivo experiments. An online resource for saRNAs has been developed to integrate experimentally verified saRNAs and the proteins involved. Therapeutic use of saRNAs has been suggested. They have been tested in animal models to treat bladder tumors, liver carcinogenesis, pancreatic cancer, and erectile dysfunction. In 2016, a phase I clinical trial involving advanced liver cancer patients was launched for the saRNA drug MTL-CEBPA, with completion expected in 2021. References Further reading Morris KV (2008). RNA and the Regulation of Gene Expression: A Hidden Layer of Complexity. Caister Academic Press. Tost J (2008). Epigenetics. Caister Academic Press. External links Small RNAs Reveal an Activating Side. Science News of the Week. How to get your genes switched on. New Scientist, 16 November 2006. Bladder cancer: Intravesical RNA activation—a new treatment concept. Nature Reviews Urology, October 2012. RNA Gene expression
Small activating RNA
Chemistry,Biology
496
4,652,664
https://en.wikipedia.org/wiki/Eye%20%28cyclone%29
The eye is a region of mostly calm weather at the center of a tropical cyclone. The eye of a storm is a roughly circular area, typically 30–65 km (20–40 mi) in diameter. It is surrounded by the eyewall, a ring of towering thunderstorms where the most severe weather and highest winds of the cyclone occur. The cyclone's lowest barometric pressure occurs in the eye and can be as much as 15 percent lower than the pressure outside the storm. In strong tropical cyclones, the eye is characterized by light winds and clear skies, surrounded on all sides by a towering, symmetric eyewall. In weaker tropical cyclones, the eye is less well defined and can be covered by the central dense overcast, an area of high, thick clouds that show up brightly on satellite imagery. Weaker or disorganized storms may also feature an eyewall that does not completely encircle the eye or have an eye that features heavy rain. In all storms, however, the eye is where the barometer reading is lowest. Structure A typical tropical cyclone has an eye approximately 30–65 km (20–40 mi) across at the geometric center of the storm. The eye may be clear or have spotty low clouds (a clear eye), it may be filled with low- and mid-level clouds (a filled eye), or it may be obscured by the central dense overcast. There is, however, very little wind and rain, especially near the center. This is in stark contrast to conditions in the eyewall, which contains the storm's strongest winds. Due to the mechanics of a tropical cyclone, the eye and the air directly above it are warmer than their surroundings. While normally quite symmetric, eyes can be oblong and irregular, especially in weakening storms. A large ragged eye is a non-circular eye which appears fragmented, and is an indicator of a weak or weakening tropical cyclone. An open eye is an eye which can be circular, but the eyewall does not completely encircle the eye, also indicating a weakening, moisture-deprived cyclone or a weak but strengthening one. Both of these observations are used to estimate the intensity of tropical cyclones via Dvorak analysis. Eyewalls are typically circular; however, distinctly polygonal shapes ranging from triangles to hexagons occasionally occur. While typical mature storms have eyes that are a few dozen miles across, rapidly intensifying storms can develop an extremely small, clear, and circular eye, sometimes referred to as a pinhole eye. Storms with pinhole eyes are prone to large fluctuations in intensity, and provide difficulties and frustrations for forecasters. Small or minuscule eyes, those less than ten nautical miles (19 km, 12 mi) across, often trigger eyewall replacement cycles, where a new eyewall begins to form outside the original eyewall. This can take place anywhere from fifteen to hundreds of kilometers (ten to a few hundred miles) outside the inner eye. The storm then develops two concentric eyewalls, or an "eye within an eye". In most cases, the outer eyewall begins to contract soon after its formation, which chokes off the inner eye and leaves a much larger but more stable eye. While the replacement cycle tends to weaken storms as it occurs, the new eyewall can contract fairly quickly after the old eyewall dissipates, allowing the storm to re-strengthen. This may trigger another cycle of eyewall replacement and re-strengthening. Eyes can range in size from 370 km (230 mi) across (Typhoon Carmen) to a mere 3.7 km (2.3 mi) across (Hurricane Wilma). While it is uncommon for storms with large eyes to become very intense, it does occur, especially in annular hurricanes.
Hurricane Isabel was the eleventh most powerful North Atlantic hurricane in recorded history, and sustained a wide 65–80 km (40–50 mi) eye for a period of several days. Formation and detection Tropical cyclones typically form from large, disorganized areas of disturbed weather in tropical regions. As more thunderstorms form and gather, the storm develops rainbands which start rotating around a common center. As the storm gains strength, a ring of stronger convection forms at a certain distance from the rotational center of the developing storm. Since stronger thunderstorms and heavier rain mark areas of stronger updrafts, the barometric pressure at the surface begins to drop, and air begins to build up in the upper levels of the cyclone. This results in the formation of an upper-level anticyclone, or an area of high atmospheric pressure above the central dense overcast. Consequently, most of this built-up air flows outward anticyclonically above the tropical cyclone. Outside the forming eye, the anticyclone at the upper levels of the atmosphere enhances the flow towards the center of the cyclone, pushing air towards the eyewall and causing a positive feedback loop. However, a small portion of the built-up air, instead of flowing outward, flows inward towards the center of the storm. This causes air pressure to build even further, to the point where the weight of the air counteracts the strength of the updrafts in the center of the storm. Air begins to descend in the center of the storm, creating a mostly rain-free area: a newly formed eye. Many aspects of this process remain a mystery. Scientists do not know why a ring of convection forms around the center of circulation instead of on top of it, or why the upper-level anticyclone ejects only a portion of the excess air above the storm. Many theories exist as to the exact process by which the eye forms: all that is known for sure is that the eye is necessary for tropical cyclones to achieve high wind speeds. The formation of an eye is almost always an indicator of increasing tropical cyclone organisation and strength. Because of this, forecasters watch developing storms closely for signs of eye formation. For storms with a clear eye, detection of the eye is as simple as looking at pictures from a weather satellite. However, for storms with a filled eye, or an eye completely covered by the central dense overcast, other detection methods must be used. Observations from ships and hurricane hunters can pinpoint an eye visually, by looking for a drop in wind speed or lack of rainfall in the storm's center. In the United States, South Korea, and a few other countries, a network of NEXRAD Doppler weather radar stations can detect eyes near the coast. Weather satellites also carry equipment for measuring atmospheric water vapor and cloud temperatures, which can be used to spot a forming eye. In addition, scientists have recently discovered that the amount of ozone in the eye is much higher than the amount in the eyewall, due to air sinking from the ozone-rich stratosphere. Instruments sensitive to ozone perform measurements, which are used to observe rising and sinking columns of air, and provide indication of the formation of an eye, even before satellite imagery can determine its formation. One satellite study found eyes detected on average for 30 hours per storm.
Associated phenomena Eyewall replacement cycles Eyewall replacement cycles, also called concentric eyewall cycles, naturally occur in intense tropical cyclones, generally with winds greater than 185 km/h (115 mph), or major hurricanes (Category 3 or higher on the Saffir–Simpson hurricane scale). When tropical cyclones reach this intensity, and the eyewall contracts or is already sufficiently small (see above), some of the outer rainbands may strengthen and organize into a ring of thunderstorms (an outer eyewall) that slowly moves inward and robs the inner eyewall of its needed moisture and angular momentum. Since the strongest winds are located in a cyclone's eyewall, the tropical cyclone usually weakens during this phase, as the inner wall is "choked" by the outer wall. Eventually the outer eyewall replaces the inner one completely, and the storm can re-intensify. The discovery of this process was partially responsible for the end of the U.S. government's hurricane modification experiment Project Stormfury. This project set out to seed clouds outside the eyewall, causing a new eyewall to form and weakening the storm. When it was discovered that this was a natural process due to hurricane dynamics, the project was quickly abandoned. Research shows that 53 percent of intense hurricanes undergo at least one of these cycles during their existence. Hurricane Allen in 1980 went through repeated eyewall replacement cycles, fluctuating between Category 5 and Category 4 status on the Saffir–Simpson scale several times, while Hurricane Juliette (2001) is a documented case of triple eyewalls. Moats A moat in a tropical cyclone is a clear ring outside the eyewall, or between concentric eyewalls, characterized by subsidence (slowly sinking air) and little or no precipitation. The air flow in the moat is dominated by the cumulative effects of stretching and shearing. The moat between eyewalls is an area in the storm where the rotational speed of the air changes greatly in proportion to the distance from the storm's center; these areas are also known as rapid filamentation zones. Such areas can potentially be found near any vortex of sufficient strength, but are most pronounced in strong tropical cyclones. Eyewall mesovortices Eyewall mesovortices are small-scale rotational features found in the eyewalls of intense tropical cyclones. They are similar, in principle, to small "suction vortices" often observed in multiple-vortex tornadoes. In these vortices, wind speeds may be greater than anywhere else in the eyewall. Eyewall mesovortices are most common during periods of intensification in tropical cyclones. Eyewall mesovortices often exhibit unusual behavior in tropical cyclones. They usually revolve around the low-pressure center, but sometimes they remain stationary. Eyewall mesovortices have even been documented to cross the eye of a storm. These phenomena have been documented observationally, experimentally, and theoretically. Eyewall mesovortices are a significant factor in the formation of tornadoes after tropical cyclone landfall. Mesovortices can spawn rotation in individual convective cells or updrafts (a mesocyclone), which leads to tornadic activity. At landfall, friction is generated between the circulation of the tropical cyclone and land. This can allow the mesovortices to descend to the surface, causing tornadoes.
These tornadic circulations in the boundary layer may be prevalent in the inner eyewalls of intense tropical cyclones, but their short duration and small size mean they are not frequently observed. Stadium effect The stadium effect is a phenomenon observed in strong tropical cyclones. It is a fairly common event, where the clouds of the eyewall curve outward from the surface with height. This gives the eye an appearance resembling a sports stadium from the air. An eye is always largest at the top of the storm and smallest at the bottom, because the rising air in the eyewall follows isolines of equal angular momentum, which also slope outward with height. Eye-like features An eye-like structure is often found in intensifying tropical cyclones. Similar to the eye seen in hurricanes or typhoons, it is a circular area at the circulation center of the storm in which convection is absent. These eye-like features are most normally found in intensifying tropical storms and hurricanes of Category 1 strength on the Saffir–Simpson scale. For example, an eye-like feature was found in Hurricane Beta when the storm had maximum wind speeds of only 80 km/h (50 mph), well below hurricane force. The features are typically not visible at visible or infrared wavelengths from space, although they are easily seen on microwave satellite imagery. Their development at the middle levels of the atmosphere is similar to the formation of a complete eye, but the features might be horizontally displaced due to vertical wind shear. Hazards Though the eye is by far the calmest and quietest part of the storm (at least on land), with no wind at the center and typically clear skies, it is possibly the most hazardous area on the ocean. In the eyewall, wind-driven waves all travel in the same direction. In the center of the eye, however, the waves converge from all directions, creating erratic crests that can build on each other to become rogue waves. The maximum height of hurricane waves is unknown, but measurements during Hurricane Ivan when it was a Category 4 hurricane estimated that waves near the eyewall exceeded 40 m (130 ft) from peak to trough. A common mistake, especially in areas where hurricanes are uncommon, is for residents to exit their homes to inspect the damage while the calm eye passes over, only to be caught off guard by the violent winds in the opposite eyewall. Other cyclones Though only tropical cyclones have structures officially termed "eyes", there are other weather systems that can exhibit eye-like features. Polar lows Polar lows are mesoscale weather systems, typically smaller than 1,000 km (600 mi) across, found near the poles. Like tropical cyclones, they form over relatively warm water and can feature deep convection and winds of gale force or greater. Unlike storms of tropical nature, however, they thrive in much colder temperatures and at much higher latitudes. They are also smaller and last for shorter durations, with few lasting longer than a day or so. Despite these differences, they can be very similar in structure to tropical cyclones, featuring a clear eye surrounded by an eyewall and bands of rain and snow. Extratropical cyclones Extratropical cyclones are areas of low pressure which exist at the boundary of different air masses. Almost all storms found at mid-latitudes are extratropical in nature, including classic North American nor'easters and European windstorms. 
The most severe of these can have a clear "eye" at the site of lowest barometric pressure, though it is usually surrounded by lower, non-convective clouds and is found near the back end of the storm. Subtropical cyclones Subtropical cyclones are low-pressure systems with some extratropical characteristics and some tropical characteristics. As such, they may have an eye while not being truly tropical in nature. Subtropical cyclones can be very hazardous, generating high winds and seas, and often evolve into fully tropical cyclones. For this reason, the National Hurricane Center began including subtropical storms in its naming scheme in 2002. Tornadoes Tornadoes are destructive, small-scale storms, which produce the fastest winds on Earth. There are two main types: single-vortex tornadoes, which consist of a single spinning column of air, and multiple-vortex tornadoes, which consist of small "suction vortices", resembling mini-tornadoes themselves, all rotating around a common center. Both types of vortex are theorized to contain calm eyes. These theories are supported by Doppler velocity observations by weather radar and eyewitness accounts. Certain single-vortex tornadoes have also been shown to be relatively clear near the center vortex, visible by weak dBZ (reflectivity) returns seen on mobile radar, as well as containing slower wind speeds. Extraterrestrial vortices NASA reported in November 2006 that the Cassini spacecraft observed a "hurricane-like" storm locked to the south pole of Saturn with a clearly defined eyewall. The observation was particularly notable as eyewall clouds had not previously been seen on any planet other than Earth (including a failure to observe an eyewall in the Great Red Spot of Jupiter by the Galileo spacecraft). In 2007, very large vortices on both poles of Venus were observed by the Venus Express mission of the European Space Agency to have a dipole eye structure. See also Outline of tropical cyclones Radius of maximum wind RAINEX Storm surge Eyewall replacement cycle References External links Atlantic Oceanographic and Meteorological Laboratory Canadian Hurricane Centre: Glossary of Hurricane Terms Tropical cyclone meteorology Vortices Articles containing video clips
Eye (cyclone)
Chemistry,Mathematics
3,217
157,932
https://en.wikipedia.org/wiki/Index%20of%20coincidence
In cryptography, coincidence counting is the technique (invented by William F. Friedman) of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts. This count, either as a ratio of the total or normalized by dividing by the expected count for a random source model, is known as the index of coincidence, or IC or IOC or IoC for short. Because letters in a natural language are not distributed evenly, the IC is higher for such texts than it would be for uniformly random text strings. What makes the IC especially useful is the fact that its value does not change if both texts are scrambled by the same single-alphabet substitution cipher, allowing a cryptanalyst to quickly detect that form of encryption. Calculation The index of coincidence provides a measure of how likely it is to draw two matching letters by randomly selecting two letters from a given text. The chance of drawing a given letter in the text is (number of times that letter appears / length of the text). The chance of drawing that same letter again (without replacement) is (number of appearances − 1) / (text length − 1). The product of these two values gives the chance of drawing that letter twice in a row. One can find this product for each letter that appears in the text, then sum these products to get a chance of drawing two of a kind. This probability can then be normalized by multiplying it by some coefficient, typically 26 in English: IC = c × ((n_a/N)((n_a − 1)/(N − 1)) + … + (n_z/N)((n_z − 1)/(N − 1))), where c is the normalizing coefficient (26 for English), n_a is the number of times the letter "a" appears in the text, and N is the length of the text. We can express the index of coincidence IC for a given letter-frequency distribution as a summation: IC = (Σ_{i=1..c} n_i(n_i − 1)) / (N(N − 1)/c), where N is the length of the text and n_1 through n_c are the frequencies (as integers) of the c letters of the alphabet (c = 26 for monocase English). The sum of the n_i is necessarily N. The products n_i(n_i − 1) count the number of combinations of n_i elements taken two at a time. (Actually this counts each pair twice; the extra factors of 2 occur in both numerator and denominator of the formula and thus cancel out.) Each of the n_i occurrences of the i-th letter matches each of the remaining n_i − 1 occurrences of the same letter. There are a total of N(N − 1) letter pairs in the entire text, and 1/c is the probability of a match for each pair, assuming a uniform random distribution of the characters (the "null model"; see below). Thus, this formula gives the ratio of the total number of coincidences observed to the total number of coincidences that one would expect from the null model. The expected average value for the IC can be computed from the relative letter frequencies f_i of the source language: IC_expected = c × Σ_{i=1..c} f_i². If all letters of an alphabet were equally probable, the expected index would be 1.0. The actual monographic IC for telegraphic English text is around 1.73, reflecting the unevenness of natural-language letter distributions. Sometimes values are reported without the normalizing denominator, for example 0.067 ≈ 1.73/26 for English; such values may be called κ_p ("kappa-plaintext") rather than IC, with κ_r ("kappa-random") used to denote the denominator (which is the expected coincidence rate for a uniform distribution of the same alphabet, 1/26 ≈ 0.0385 for English). English plaintext will generally fall somewhere in the range of 1.5 to 2.0 (normalized calculation). Application The index of coincidence is useful both in the analysis of natural-language plaintext and in the analysis of ciphertext (cryptanalysis). 
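As a minimal executable sketch of this calculation (an editorial illustration, not part of the original article; the function name and alphabet parameter are our own choices):

from collections import Counter

def index_of_coincidence(text, alphabet='ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
    # Normalized IC: about 1.0 for uniformly random text over the given
    # alphabet, and about 1.73 for telegraphic English.
    letters = [ch for ch in text.upper() if ch in alphabet]
    n = len(letters)
    counts = Counter(letters)
    coincidences = sum(k * (k - 1) for k in counts.values())
    return len(alphabet) * coincidences / (n * (n - 1))
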
Even when only ciphertext is available for testing and plaintext letter identities are disguised, coincidences in ciphertext can be caused by coincidences in the underlying plaintext. This technique is used to cryptanalyze the Vigenère cipher, for example. For a repeating-key polyalphabetic cipher arranged into a matrix, the coincidence rate within each column will usually be highest when the width of the matrix is a multiple of the key length, and this fact can be used to determine the key length, which is the first step in cracking the system. Coincidence counting can help determine when two texts are written in the same language using the same alphabet. (This technique has been used to examine the purported Bible code.) The causal coincidence count for such texts will be distinctly higher than the accidental coincidence count for texts in different languages, or texts using different alphabets, or gibberish texts. To see why, imagine an "alphabet" of only the two letters A and B. Suppose that in our "language", the letter A is used 75% of the time, and the letter B is used 25% of the time. If two texts in this language are laid side by side, then the following pairs can be expected: AA: 56.25%, AB: 18.75%, BA: 18.75%, BB: 6.25%. Overall, the probability of a "coincidence" is 62.5% (56.25% for AA + 6.25% for BB). Now consider the case when both messages are encrypted using the simple monoalphabetic substitution cipher which replaces A with B and vice versa: AA: 6.25%, AB: 18.75%, BA: 18.75%, BB: 56.25%. The overall probability of a coincidence in this situation is 62.5% (6.25% for AA + 56.25% for BB), exactly the same as for the unencrypted "plaintext" case. In effect, the new alphabet produced by the substitution is just a uniform renaming of the original character identities, which does not affect whether they match. Now suppose that only one message (say, the second) is encrypted using the same substitution cipher (A,B)→(B,A). The following pairs can now be expected: AA: 18.75%, AB: 56.25%, BA: 6.25%, BB: 18.75%. Now the probability of a coincidence is only 37.5% (18.75% for AA + 18.75% for BB). This is noticeably lower than the probability when same-language, same-alphabet texts were used. Evidently, coincidences are more likely when the most frequent letters in each text are the same. The same principle applies to real languages like English, because certain letters, like E, occur much more frequently than other letters—a fact which is used in frequency analysis of substitution ciphers. Coincidences involving the letter E, for example, are relatively likely. So when any two English texts are compared, the coincidence count will be higher than when an English text and a foreign-language text are used. This effect can be subtle. For example, similar languages will have a higher coincidence count than dissimilar languages. Also, it is not hard to generate random text with a frequency distribution similar to real text, artificially raising the coincidence count. Nevertheless, this technique can be used effectively to identify when two texts are likely to contain meaningful information in the same language using the same alphabet, to discover periods for repeating keys, and to uncover many other kinds of nonrandom phenomena within or among ciphertexts. Expected values of the index of coincidence have been tabulated for various languages. Generalization The above description is only an introduction to use of the index of coincidence, which is related to the general concept of correlation. Various forms of Index of Coincidence have been devised; the "delta" I.C. 
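The two-letter illustration above is easy to verify numerically. A minimal Python sketch (an editorial illustration; names and the sample size are our own) that reproduces the 62.5% and 37.5% coincidence rates:

import random

def kappa(a, b):
    # Fraction of aligned positions at which the two texts agree.
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

random.seed(1)
lang = lambda n: ''.join(random.choices('AB', weights=[3, 1], k=n))
t1, t2 = lang(100_000), lang(100_000)
swap = str.maketrans('AB', 'BA')          # the substitution cipher (A,B) -> (B,A)

print(kappa(t1, t2))                       # ~0.625: same "language", same alphabet
print(kappa(t1.translate(swap),
            t2.translate(swap)))           # ~0.625: both enciphered identically
print(kappa(t1, t2.translate(swap)))       # ~0.375: only one text enciphered
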
(given by the formula above) in effect measures the autocorrelation of a single distribution, whereas a "kappa" I.C. is used when matching two text strings. Although in some applications constant normalizing factors can be ignored, in more general situations there is considerable value in truly indexing each I.C. against the value to be expected for the null hypothesis (usually: no match and a uniform random symbol distribution), so that in every situation the expected value for no correlation is 1.0. Thus, any form of I.C. can be expressed as the ratio of the number of coincidences actually observed to the number of coincidences expected (according to the null model), using the particular test setup. From the foregoing, it is easy to see that the formula for kappa I.C. is κ = (c × Σ_{j=1..N} [a_j = b_j]) / N, where N is the common aligned length of the two texts A and B, and the bracketed term [a_j = b_j] is defined as 1 if the j-th letter of text A matches the j-th letter of text B, otherwise 0. A related concept, the "bulge" of a distribution, measures the discrepancy between the observed I.C. and the null value of 1.0. The number of cipher alphabets used in a polyalphabetic cipher may be estimated by dividing the expected bulge of the delta I.C. for a single alphabet by the observed bulge for the message, although in many cases (such as when a repeating key was used) better techniques are available. Example As a practical illustration of the use of I.C., suppose that we have intercepted the following ciphertext message: QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV (The grouping into five characters is just a telegraphic convention and has nothing to do with actual word lengths.) Suspecting this to be an English plaintext encrypted using a Vigenère cipher with normal A–Z components and a short repeating keyword, we can consider the ciphertext "stacked" into some number of columns, for example seven: QPWKALV RXCQZIK GRBPFAE OMFLJMS DZVDHXC XJYEBIM TRQWN… If the key size happens to have been the same as the assumed number of columns, then all the letters within a single column will have been enciphered using the same key letter, in effect a simple Caesar cipher applied to a random selection of English plaintext characters. The corresponding set of ciphertext letters should have a roughness of frequency distribution similar to that of English, although the letter identities have been permuted (shifted by a constant amount corresponding to the key letter). Therefore, if we compute the aggregate delta I.C. for all columns ("delta bar"), it should be around 1.73. On the other hand, if we have incorrectly guessed the key size (number of columns), the aggregate delta I.C. should be around 1.00. So we compute the delta I.C. for assumed key sizes from one to ten; the values for widths five and ten stand out well above the rest. We see that the key size is most likely five. If the actual size is five, we would expect a width of ten to also report a high I.C., since each of its columns also corresponds to a simple Caesar encipherment, and we confirm this. 
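Both the key-size scan above and the per-column key-letter search described next fit in a few lines. A minimal Python sketch (an editorial illustration; it reuses the index_of_coincidence function from the earlier sketch, and the English letter-frequency table holds approximate published values):

ct = ('QPWKALVRXCQZIKGRBPFAEOMFLJMSDZVDHXCXJYEBIMTRQWNMEAIZRVKCVKVLXNEIC'
      'FZPZCZZHKMLVZVZIZRRQWDKECHOSNYXXLSPMYKVQXJTDCIOMEEXDQVSRXLRLKZHOV')

def avg_column_ic(text, width):
    # Average normalized IC over the columns of the stacked ciphertext.
    cols = [text[i::width] for i in range(width)]
    return sum(index_of_coincidence(c) for c in cols) / width

for width in range(1, 11):
    print(width, round(avg_column_ic(ct, width), 2))   # widths 5 and 10 stand out

# Approximate relative frequencies of A..Z in English text (percent).
ENGLISH = [8.2, 1.5, 2.8, 4.3, 12.7, 2.2, 2.0, 6.1, 7.0, 0.2, 0.8, 4.0, 2.4,
           6.7, 7.5, 1.9, 0.1, 6.0, 6.3, 9.1, 2.8, 1.0, 2.4, 0.2, 2.0, 0.1]

def best_key_letter(column):
    # Try all 26 Caesar shifts; keep the one whose decryption correlates best
    # with English letter frequencies (the sum of f_i * phi_i described below).
    def score(shift):
        dec = [(ord(ch) - 65 - shift) % 26 for ch in column]
        return sum(dec.count(i) * ENGLISH[i] for i in range(26)) / len(dec)
    return chr(65 + max(range(26), key=score))

print(''.join(best_key_letter(ct[i::5]) for i in range(5)))  # expected key: EVERY
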
So we should stack the ciphertext into five columns: QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDH… We can now try to determine the most likely key letter for each column considered separately, by performing trial Caesar decryption of the entire column for each of the 26 possibilities A–Z for the key letter, and choosing the key letter that produces the highest correlation between the decrypted column letter frequencies and the relative letter frequencies for normal English text. That correlation, which we don't need to worry about normalizing, can be readily computed as Σ_{i=1..26} f_i φ_i, where f_i are the observed column letter frequencies and φ_i are the relative letter frequencies for English. When we try this, the best-fit key letters are reported to be "EVERY," which we recognize as an actual word, and using that for Vigenère decryption produces the plaintext: MUSTC HANGE MEETI NGLOC ATION FROMB RIDGE TOUND ERPAS SSINC EENEM YAGEN TSARE BELIE VEDTO HAVEB EENAS SIGNE DTOWA TCHBR IDGES TOPME ETING TIMEU NCHAN GEDXX from which one obtains: MUST CHANGE MEETING LOCATION FROM BRIDGE TO UNDERPASS SINCE ENEMY AGENTS ARE BELIEVED TO HAVE BEEN ASSIGNED TO WATCH BRIDGE STOP MEETING TIME UNCHANGED XX after word divisions have been restored at the obvious positions. "XX" are evidently "null" characters used to pad out the final group for transmission. This entire procedure could easily be packaged into an automated algorithm for breaking such ciphers. Due to normal statistical fluctuation, such an algorithm will occasionally make wrong choices, especially when analyzing short ciphertext messages. References See also Kasiski examination Riverbank Publications Topics in cryptography Cryptographic attacks Summary statistics for contingency tables
Index of coincidence
Technology
2,618
48,222,408
https://en.wikipedia.org/wiki/Calcium%20channel%20opener
A calcium channel opener is a type of drug which facilitates ion transmission through calcium channels. An example is Bay K8644, which is an analogue of nifedipine that specifically and directly activates L-type voltage-dependent calcium channels. In contrast to Bay K8644, which is not for clinical use, ambroxol is a frequently used mucolytic drug that triggers lysosomal secretion by mobilizing calcium from acidic calcium stores. This effect most likely does not occur through a direct interaction between the drug and a lysosomal calcium channel, but indirectly, by neutralizing the acidic pH within lysosomes. Calcium-permeable ion channels in lysosomal membranes that may be activated by a luminal pH increase include two-pore channels (TPCs), mucolipin TRP channels (TRPMLs) and purinergic receptors of the P2X channel type. See also Calcium channel blocker References
Calcium channel opener
Chemistry
197
1,303,480
https://en.wikipedia.org/wiki/Electrostriction
In electromagnetism, electrostriction is a property of all electrical non-conductors, or dielectrics. Electrostriction causes these materials to change their shape under the application of an electric field. It is the dual property to magnetostriction. Explanation Electrostriction is a property of all dielectric materials, and is caused by the displacement of ions in the crystal lattice upon exposure to an external electric field. The cause of electrostriction is linked to anharmonic effects. Positive ions will be displaced in the direction of the field, while negative ions will be displaced in the opposite direction. This displacement accumulates throughout the bulk material and results in an overall strain (elongation) in the direction of the field. The thickness will be reduced in the orthogonal directions, characterized by Poisson's ratio. All insulating materials consisting of more than one type of atom will be ionic to some extent due to the difference of electronegativity of the atoms, and therefore exhibit electrostriction. The resulting strain (ratio of deformation to the original dimension) is proportional to the square of the polarization. Reversal of the electric field does not reverse the direction of the deformation. More formally, the electrostriction coefficient is a rank-four tensor Q_ijkl, relating the rank-two strain tensor S_ij and the electric polarization density vector P_i (a rank-one tensor): S_ij = Q_ijkl P_k P_l. The electrostrictive tensor satisfies the symmetries Q_ijkl = Q_jikl = Q_ijlk. The related piezoelectric effect occurs only in a particular class of dielectrics. Electrostriction applies to all crystal symmetries, while the piezoelectric effect only applies to the 20 piezoelectric point groups. Piezoelectricity is a result of electrostriction in ferroelectric materials. Electrostriction is a quadratic effect, unlike piezoelectricity, which is a linear effect. Materials Although all dielectrics exhibit some electrostriction, certain engineered ceramics, known as relaxor ferroelectrics, have extraordinarily high electrostrictive constants. The most commonly used are lead magnesium niobate (PMN), lead magnesium niobate–lead titanate (PMN-PT) and lead lanthanum zirconate titanate (PLZT). Magnitude of effect Electrostriction can produce a strain on the order of 0.1% for some materials. This occurs at a field strength of 2 million volts per meter (2 MV/m) for the material PMN-15. Electrostriction exists in all materials, but is generally negligible. Applications Sonar projectors for submarines and surface vessels. Actuators for small displacements. Sensors, provided a bias electric field or pre-stress is present. See also Magnetostriction Photoelasticity Piezomagnetism Piezoelectricity Relaxor ferroelectric References Further reading "Electrostriction." Encyclopædia Britannica. Mini dictionary of physics (1988) Oxford University Press "Electronic Materials" by Prof. Dr. Helmut Föll Materials science Electric and magnetic fields in matter
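As a back-of-envelope consistency check of the figures quoted under "Magnitude of effect" (our own calculation, assuming the commonly used field-related quadratic form S = M E² for the electrostrictive strain): M = S/E² = 10⁻³ / (2 × 10⁶ V/m)² = 2.5 × 10⁻¹⁶ m²/V².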
Electrostriction
Physics,Chemistry,Materials_science,Engineering
650
33,778,328
https://en.wikipedia.org/wiki/Rectal%20microbicide
A rectal microbicide is a microbicide for rectal use. Most commonly such a product would be a topical gel inserted into the anus so that it may act as protection against the contraction of a sexually transmitted infection during anal sex. Along with vaginal microbicides, rectal microbicides are currently the subject of medical research on microbicides for sexually transmitted diseases to determine the circumstances under which and the extent to which they provide protection against infection. Less commonly, rectal microbicides can have other purposes also; for example, they could be used to treat certain medical conditions as a suppository would. History Early development of topical microbicides, starting around 1998, focused on preventing HIV transmission during vaginal intercourse. The field as a whole still lacked a proof of concept that an effective vaginal microbicide exists. As of 2008, 16 topical microbicides had entered phase I or II clinical trials and 7 advanced to an additional trial. Previous studies both showed promise in new areas of research and gave disappointing results for the first-generation products, as surfactants like nonoxynol-9 and entry inhibitors like carrageenan showed no efficacy in preventing HIV and were associated with a risk of inflammation, which raised the risk of contracting HIV in some circumstances. In 1998, researchers noted that gay men were using products containing nonoxynol-9 as part of their infection-prevention strategy, despite a lack of evidence of efficacy or any safety data for that practice. At the time, the drug was under evaluation as a vaginal microbicide. Because of expected similarities between the efficacy of vaginal and rectal microbicides, some researchers have called for all vaginal microbicides to be tested for efficacy when used rectally. Motivation There are two fundamental reasons to research and develop rectal microbicides for HIV prevention: Anal intercourse is a normal human behavior and is practiced the world over by an estimated five to ten percent of men, women, and transgender people with both heterosexual and same-sex partners. An act of unprotected anal intercourse is ten to twenty times more likely to result in HIV infection compared to an act of unprotected vaginal intercourse. This indicates that unprotected anal intercourse plays a significant role in the HIV pandemic. Concerted advocacy for the research and development of safe, effective, acceptable and accessible rectal microbicides began in 2005, when International Rectal Microbicide Advocates was founded with colleagues representing the AIDS Foundation of Chicago, the Canadian AIDS Society, the Community HIV/AIDS Mobilization Program, and the Global Campaign for Microbicides. The political and sociocultural context reinforced the dismissal of rectal microbicides. Pervasive homophobia across the globe has resulted in a lack of adequate attention and resources devoted to gay men and other men who have sex with men (MSM) despite the disproportional HIV burden borne by this population. Few knew, or acknowledged, that anal intercourse is a widespread practice among heterosexuals, both men and women, gay men and other MSM, as well as transgender people. Thus, evidence-free assumptions relegated the rectal portion of the microbicide field to a small, dark corner. The field has moved from simply being an adjunct to vaginal studies to a force in its own right. 
This is due to a handful of visionary, passionate, and dogged scientists; funding from the United States (which has supported approximately 97% of all rectal microbicide research); and growing community engagement. Research Preclinical testing Preclinical testing for rectal microbicides has been conducted in macaques to get a nonhuman primate model of drug behavior. UC-781 trial Scientists working on the University of California, Los Angeles (UCLA) Microbicide Development Program initiated the first Phase I RM safety trial, investigating the safety and acceptability of UC-781, in December 2006. Rectal application of UC-781 gel, a potent antiretroviral (ARV) drug, was shown to be safe and acceptable to the 36 men and women in the trial. Phase I trials normally focus solely on safety and acceptability, but researchers used a novel approach in this trial: taking rectal tissue biopsies from participants and exposing them to HIV ex vivo in the laboratory. The drug significantly reduced HIV transmission in these assays. RMP-02/MTN-006 RMP-02/MTN-006 tested the same vaginal formulation of tenofovir gel that reduced HIV acquisition by an estimated 39 percent overall in the Centre for the AIDS Programme of Research in South Africa (CAPRISA) 004 trial. In September 2009, 18 men and women began enrolling in the trial, which was sponsored by the Microbicide Trials Network (MTN) and UCLA's Microbicide Development Program. The study tested the safety and acceptability of single- and multiple-day rectal applications of tenofovir, a single oral dose of tenofovir, and a placebo. Laboratory tests showed that HIV was significantly inhibited in rectal tissue samples from participants who applied tenofovir gel to their rectums daily for one week compared to tissue from those who used a placebo gel. Although a slight anti-HIV effect was noted in tissue from participants who applied a single dose of tenofovir gel, the finding was not statistically significant. The single dose of oral tenofovir did not provide any protection against HIV in rectal tissue samples. The study also discovered that only 25 percent of the participants liked tenofovir gel, compared to 50 percent who had used the placebo gel. Some individuals who used tenofovir gel experienced gastrointestinal distress, cramps, and diarrhea. Results were presented at the 18th Conference on Retroviruses and Opportunistic Infections, or CROI. MTN-007 MTN-007 studied a reformulated version of the tenofovir gel. Researchers retained the same concentration of tenofovir (one percent), but reduced the glycerin in the gel in an attempt to make it more acceptable and “rectal friendly.” This Phase I safety and acceptability study, launched in October 2011, included 65 men and women from three sites in the United States. Results were presented at the 19th CROI in March 2012. This reduced-glycerin formulation of 1 percent tenofovir gel was found to be safe and acceptable. Researchers recommended advancing this candidate to Phase II. MTN-017 MTN-017, the follow-up to MTN-007, represented a major milestone: the first Phase II expanded safety and acceptability study of an RM. The trial was officially launched in October 2013 at sites in the United States, Peru, Thailand, and South Africa. 
The 195 gay men, other MSM, and transgender women recruited into MTN-017 more than doubled the total number of human beings who have participated in RM clinical trials to date, and the trial was also the first to include participants from countries outside of the United States. The study investigated the safety and acceptability of the reduced-glycerin tenofovir gel and directly compared acceptability and adherence to daily oral Truvada. MTN-017 featured an open-label, crossover design in which each individual followed three different regimens, each lasting eight weeks. One regimen consisted of the participant applying the gel to the rectum daily. A second regimen asked participants to apply the gel rectally before and after anal intercourse. In the third regimen, participants took oral Truvada every day. The order in which participants followed the study regimens was assigned randomly, with a break between each regimen. The procedures carried out as part of MTN-017 determined how much of each drug is absorbed in blood, rectal fluid, and tissue, and also assessed any changes in cells or tissue. Study participants were asked about any side effects, what they liked and disliked about using the gel either daily or with sex, and whether they would consider using the gel in the future. Gel acceptability and adherence were directly compared to oral Truvada, which has been shown to reduce the risk of HIV acquisition in a number of studies among different populations. Use Rectal microbicides can reduce the risk of transmission of HIV during anal intercourse, particularly during sex when condoms are not used. Researchers have explored using personal lubricant as a vehicle for delivering a rectal microbicide. Culture Research into rectal microbicides and funding for exploring their use as public health tools has faced barriers historically because of the taboo in discussing anal health and anal sex. Researchers have reported feeling disinclined to request funding for "anal research" because of biases against anything to do with an anus, and public policy writers have at times faced opposition to promoting discussion on anal topics. Future Scientists at the Population Council are trying to develop a microbicide that would be both safe and effective in either the vagina or the rectum. They have conducted early work on a combination product containing MIV-150 (an investigational ARV), zinc acetate, and carrageenan gel. Further evaluation of this combination is dependent on funding. References Microbicides Prevention of HIV/AIDS Rectum Sexually transmitted diseases and infections
Rectal microbicide
Biology
1,929
77,323,818
https://en.wikipedia.org/wiki/VITEK
VITEK refers to a series of automated microbiology analyzers for microbial identification (ID) and antibiotic sensitivity testing (AST). History Vitek was developed in the 1960s in a collaboration between NASA and the defense contractor McDonnell Douglas. For the Voyager program, McDonnell Douglas developed a Microbial Load Monitor (MLM) to detect bacterial contamination aboard the spacecraft. Under a subsequent NASA contract, McDonnell Douglas explored expanding the MLM to detecting and identifying bacterial infections among the crew of a human mission to Mars. The initial system could detect nine common pathogens of urinary tract infections (UTIs). In 1977, a new subsidiary, Vitek Systems, was formed around the product, and the system was renamed VITEK, meaning "life technology": a portmanteau of the Latin viv-, meaning life, and tek, short for technology. In 1979, Vitek began selling the AutoMicrobic System (AMS) to hospital laboratories. In 1989, Vitek Systems was sold to bioMérieux. In March 2005, the Vitek 2 Compact received FDA clearance. References Microbiology analyzer
VITEK
Biology
237
5,039,137
https://en.wikipedia.org/wiki/Delta%20Canis%20Minoris
The Bayer designation Delta Canis Minoris (δ CMi / δ Canis Minoris) is shared by three stars in the constellation Canis Minor: δ1 Canis Minoris δ2 Canis Minoris δ3 Canis Minoris Canis Minor
Delta Canis Minoris
Astronomy
60
25,077,398
https://en.wikipedia.org/wiki/Lagrangian%20coherent%20structure
Lagrangian coherent structures (LCSs) are distinguished surfaces of trajectories in a dynamical system that exert a major influence on nearby trajectories over a time interval of interest. The type of this influence may vary, but it invariably creates a coherent trajectory pattern for which the underlying LCS serves as a theoretical centerpiece. In observations of tracer patterns in nature, one readily identifies coherent features, but it is often the underlying structure creating these features that is of interest. As illustrated on the right, individual tracer trajectories forming coherent patterns are generally sensitive with respect to changes in their initial conditions and the system parameters. In contrast, the LCSs creating these trajectory patterns turn out to be robust and provide a simplified skeleton of the overall dynamics of the system. The robustness of this skeleton makes LCSs ideal tools for model validation, model comparison and benchmarking. LCSs can also be used for now-casting and even short-term forecasting of pattern evolution in complex dynamical systems. Physical phenomena governed by LCSs include floating debris, oil spills, surface drifters and chlorophyll patterns in the ocean; clouds of volcanic ash and spores in the atmosphere; and coherent crowd patterns formed by humans and animals. While LCSs generally exist in any dynamical system, their role in creating coherent patterns is perhaps most readily observable in fluid flows. General definitions Material surfaces On a phase space U and over a time interval [t_0, t_1], consider a non-autonomous dynamical system defined through the flow map F_{t_0}^{t}: x_0 ↦ x(t; t_0, x_0), mapping initial conditions x_0 into their position x(t; t_0, x_0) for any time t ∈ [t_0, t_1]. If the flow map F_{t_0}^{t} is a diffeomorphism for any choice of t, then for any smooth set M(t_0) of initial conditions in U, the set M(t) = F_{t_0}^{t}(M(t_0)) is an invariant manifold in the extended phase space U × [t_0, t_1]. Borrowing terminology from fluid dynamics, we refer to the evolving time slice M(t) of the manifold as a material surface (see Fig. 1). Since any choice of the initial condition set M(t_0) yields an invariant manifold, invariant manifolds and their associated material surfaces are abundant and generally undistinguished in the extended phase space. Only a few of them will act as cores of coherent trajectory patterns. LCSs as exceptional material surfaces In order to create a coherent pattern, a material surface M(t) should exert a sustained and consistent action on nearby trajectories throughout the time interval [t_0, t_1]. Examples of such action are attraction, repulsion, or shear. In principle, any well-defined mathematical property qualifies that creates coherent patterns out of randomly selected nearby initial conditions. Most such properties can be expressed by strict inequalities. For instance, we call a material surface M(t) attracting over the interval [t_0, t_1] if all small enough initial perturbations to M(t_0) are carried by the flow into even smaller final perturbations to M(t_1). In classical dynamical systems theory, invariant manifolds satisfying such an attraction property over infinite times are called attractors. They are not only special, but even locally unique in the phase space: no continuous family of attractors may exist. In contrast, in dynamical systems defined over a finite time interval [t_0, t_1], strict inequalities do not define exceptional (i.e., locally unique) material surfaces. This follows from the continuity of the flow map over [t_0, t_1]. For instance, if a material surface M(t) attracts all nearby trajectories over the time interval [t_0, t_1], then so will any sufficiently close other material surface. 
Thus, attracting, repelling and shearing material surfaces are necessarily stacked on each other, i.e., occur in continuous families. This leads to the idea of seeking LCSs in finite-time dynamical systems as exceptional material surfaces that exhibit a coherence-inducing property more strongly than any of the neighboring material surfaces. Such LCSs, defined as extrema (or more generally, stationary surfaces) for a finite-time coherence property, will indeed serve as observed centerpieces of trajectory patterns. Examples of attracting, repelling and shearing LCSs in a direct numerical simulation of 2D turbulence are shown in Fig. 2a. LCSs vs. classical invariant manifolds Classical invariant manifolds are invariant sets in the phase space of an autonomous dynamical system. In contrast, LCSs are only required to be invariant in the extended phase space. This means that even if the underlying dynamical system is autonomous, the LCSs of the system over the interval [t_0, t_1] will generally be time-dependent, acting as the evolving skeletons of observed coherent trajectory patterns. Figure 2b shows the difference between an attracting LCS and a classic unstable manifold of a saddle point, for evolving times, in an autonomous dynamical system. Objectivity of LCSs Assume that the phase space of the underlying dynamical system is the material configuration space of a continuum, such as a fluid or a deformable body. For instance, for a dynamical system generated by an unsteady velocity field v(x, t), the open set U of possible particle positions is a material configuration space. In this space, LCSs are material surfaces, formed by trajectories. Whether or not a material trajectory is contained in an LCS is a property that is independent of the choice of coordinates, and hence cannot depend on the observer. As a consequence, LCSs are subject to the basic objectivity (material frame-indifference) requirement of continuum mechanics. The objectivity of LCSs requires them to be invariant with respect to all possible observer changes, i.e., linear coordinate changes of the form x = Q(t) y + b(t), where y is the vector of the transformed coordinates; Q(t) is an arbitrary proper orthogonal matrix representing time-dependent rotations; and b(t) is an arbitrary time-dependent vector representing translations. As a consequence, any self-consistent LCS definition or criterion should be expressible in terms of quantities that are frame-invariant. For instance, the strain rate tensor S(x, t) and the spin tensor W(x, t), defined as S = (∇v + (∇v)ᵀ)/2 and W = (∇v − (∇v)ᵀ)/2, transform under Euclidean changes of frame into the quantities S̃ = Qᵀ S Q and W̃ = Qᵀ W Q − Qᵀ Q̇. A Euclidean frame change is, therefore, equivalent to a similarity transform for S, and hence an LCS approach depending only on the eigenvalues and eigenvectors of S is automatically frame-invariant. In contrast, an LCS approach depending on the eigenvalues of the velocity gradient ∇v is generally not frame-invariant. A number of frame-dependent quantities, as well as the averages or eigenvalues of such quantities, are routinely used in heuristic LCS detection. While such quantities may effectively mark features of the instantaneous velocity field v(x, t), the ability of these quantities to capture material mixing, transport, and coherence is limited and a priori unknown in any given frame. As an example, consider the linear unsteady fluid particle motion ẋ = [[sin 4t, cos 4t + 2], [cos 4t − 2, −sin 4t]] x, which is an exact solution of the two-dimensional Navier–Stokes equations. The (frame-dependent) Okubo–Weiss criterion classifies the whole domain in this flow as elliptic (vortical) because |S| < |W| holds throughout, with |·| referring to the Euclidean matrix norm. 
As seen in Fig. 3, however, trajectories grow exponentially along a rotating line and shrink exponentially along another rotating line. In material terms, therefore, the flow is hyperbolic (saddle-type) in any frame. Since Newton’s equation for particle motion and the Navier–Stokes equations for fluid motion are well known to be frame-dependent, it might first seem counterintuitive to require frame-invariance for LCSs, which are composed of solutions of these frame-dependent equations. Recall, however, that the Newton and Navier–Stokes equations represent objective physical principles for material particle trajectories. As long as they are correctly transformed from one frame to the other, these equations generate physically the same material trajectories in the new frame. In fact, we decide how to transform the equations of motion from an x-frame to a y-frame through a coordinate change x = Q(t) y + b(t) precisely by upholding that trajectories are mapped into trajectories, i.e., by requiring x(t) = Q(t) y(t) + b(t) to hold for all times. Temporal differentiation of this identity and substitution into the original equation in the x-frame then yields the transformed equation in the y-frame. While this process adds new terms (inertial forces) to the equations of motion, these inertial terms arise precisely to ensure the invariance of material trajectories. Fully composed of material trajectories, LCSs remain invariant in the transformed equation of motion defined in the y-frame of reference. Consequently, any self-consistent LCS definition or detection method must also be frame-invariant. Hyperbolic LCSs Motivated by the above discussion, the simplest way to define an attracting LCS is by requiring it to be a locally strongest attracting material surface in the extended phase space (see Fig. 4). Similarly, a repelling LCS can be defined as a locally strongest repelling material surface. Attracting and repelling LCSs together are usually referred to as hyperbolic LCSs, as they provide a finite-time generalization of the classic concept of normally hyperbolic invariant manifolds in dynamical systems. Diagnostic approach: Finite-time Lyapunov exponent (FTLE) ridges Heuristically, one may seek initial positions of repelling LCSs as the set of initial conditions x_0 at which infinitesimal perturbations to trajectories starting from x_0 grow locally at the highest rate relative to trajectories starting nearby. The heuristic element here is that instead of constructing a highly repelling material surface, one simply seeks points of large particle separation. Such a separation may well be due to strong shear along the set of points so identified; this set is not at all guaranteed to exert any normal repulsion on nearby trajectories. The growth of an infinitesimal perturbation along a trajectory is governed by the flow map gradient ∇F_{t_0}^{t_1}(x_0). Let ε e be a small perturbation to the initial condition x_0, with 0 < ε ≪ 1 and with e denoting an arbitrary unit vector. This perturbation generally grows along the trajectory into the perturbation vector ε ∇F_{t_0}^{t_1}(x_0) e. Then the maximum relative stretching of infinitesimal perturbations at the point x_0 can be computed as max over unit vectors e of √(⟨e, C_{t_0}^{t_1}(x_0) e⟩) = √(λ_max(x_0)), where C_{t_0}^{t_1} = (∇F_{t_0}^{t_1})ᵀ ∇F_{t_0}^{t_1} denotes the right Cauchy–Green strain tensor and λ_max(x_0) is its largest eigenvalue. One then concludes that the maximum relative stretching experienced along a trajectory starting from x_0 is just √(λ_max(x_0)). 
As this relative stretching tends to grow rapidly, it is more convenient to work with its growth exponent, which is then precisely the finite-time Lyapunov exponent (FTLE) FTLE_{t_0}^{t_1}(x_0) = (1/(t_1 − t_0)) log √(λ_max(x_0)). Therefore, one expects hyperbolic LCSs to appear as codimension-one local maximizing surfaces (or ridges) of the FTLE field. This expectation turns out to be justified in the majority of cases: time t_0 positions of repelling LCSs are marked by ridges of FTLE_{t_0}^{t_1}. By applying the same argument in backward time, we obtain that time t_1 positions of attracting LCSs are marked by ridges of the backward FTLE field FTLE_{t_1}^{t_0}. The classic way of computing Lyapunov exponents is solving a linear differential equation for the linearized flow map ∇F_{t_0}^{t}(x_0). A more expedient approach is to compute the FTLE field from a simple finite-difference approximation to the deformation gradient. For example, in a three-dimensional flow, we launch a trajectory from any element x_0 of a grid of initial conditions. Using the coordinate representation x = (x¹, x², x³) for the evolving trajectory x(t; t_0, x_0), we approximate the (i, j) entry of the gradient of the flow map by central differencing: [∇F_{t_0}^{t_1}(x_0)]_{ij} ≈ (xⁱ(t_1; t_0, x_0 + δ_j) − xⁱ(t_1; t_0, x_0 − δ_j)) / (2|δ_j|), with a small vector δ_j pointing in the j-th coordinate direction. For two-dimensional flows, only the first 2 × 2 minor of the above matrix is relevant. 
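A minimal numerical sketch of this finite-difference FTLE recipe (an editorial illustration; the double-gyre velocity field and all parameter values are standard benchmark assumptions, not taken from this article):

import numpy as np
from scipy.integrate import solve_ivp

A, eps, om = 0.1, 0.25, 0.2 * np.pi   # double-gyre parameters (assumed)

def vel(t, xy):
    # Unsteady double-gyre velocity field on [0, 2] x [0, 1].
    x, y = xy
    a = eps * np.sin(om * t)
    f = a * x**2 + (1 - 2 * a) * x
    dfdx = 2 * a * x + (1 - 2 * a)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return [u, v]

def flow_map(x0, y0, t0, t1):
    sol = solve_ivp(vel, (t0, t1), [x0, y0], rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def ftle(x0, y0, t0, t1, d=1e-4):
    # Central-difference flow-map gradient, as in the formula above.
    xp = flow_map(x0 + d, y0, t0, t1); xm = flow_map(x0 - d, y0, t0, t1)
    yp = flow_map(x0, y0 + d, t0, t1); ym = flow_map(x0, y0 - d, t0, t1)
    F = np.array([[(xp[0] - xm[0]) / (2 * d), (yp[0] - ym[0]) / (2 * d)],
                  [(xp[1] - xm[1]) / (2 * d), (yp[1] - ym[1]) / (2 * d)]])
    C = F.T @ F                        # right Cauchy-Green strain tensor
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(lam_max) / (2 * abs(t1 - t0))

# FTLE field on a coarse grid; ridges mark candidate repelling LCS positions.
xs, ys = np.linspace(0, 2, 40), np.linspace(0, 1, 20)
field = np.array([[ftle(x, y, 0.0, 15.0) for x in xs] for y in ys])
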
Issues with inferring hyperbolic LCSs from FTLE ridges FTLE ridges have proven to be a simple and efficient tool for visualizing hyperbolic LCSs in a number of physical problems, yielding intriguing images of initial positions of hyperbolic LCSs in different applications (see, e.g., Figs. 5a–b). However, FTLE ridges obtained over sliding time windows do not form material surfaces. Thus, ridges of FTLE_{t}^{t+T} under varying t cannot be used to define Lagrangian objects, such as hyperbolic LCSs. Indeed, a locally strongest repelling material surface over [t_0, t_1] will generally not play the same role over [t, t + T], and hence its evolving position at time t will not be a ridge for FTLE_{t}^{t+T}. Nonetheless, evolving second-derivative FTLE ridges computed over sliding intervals of the form [t, t + T] have been identified by some authors broadly with LCSs. In support of this identification, it is also often argued that the material flux over such sliding-window FTLE ridges should necessarily be small. The "FTLE ridge = LCS" identification, however, suffers from the following conceptual and mathematical problems: Second-derivative FTLE ridges are necessarily straight lines and hence do not exist in physical problems. FTLE ridges computed over sliding time windows with a varying t are generally not Lagrangian, and the flux through them is generally not small. In particular, a broadly referenced material flux formula for FTLE ridges is incorrect, even for straight FTLE ridges. FTLE ridges mark hyperbolic LCS positions, but also highlight surfaces of high shear. A convoluted mixture of both types of surfaces often arises in applications (see Fig. 6 for an example). There are several other types of LCSs (elliptic and parabolic) beyond the hyperbolic LCSs highlighted by FTLE ridges. Local variational approach: Shrink and stretch surfaces The local variational theory of hyperbolic LCSs builds on their original definition as strongest repelling or attracting material surfaces in the flow over the time interval [t_0, t_1]. At an initial point x_0, let n_0 denote a unit normal to an initial material surface M(t_0) (cf. Fig. 6). By the invariance of material lines, the tangent space T_{x_0}M(t_0) is mapped into the tangent space of M(t_1) by the linearized flow map ∇F_{t_0}^{t_1}(x_0). At the same time, the image of the normal n_0 under ∇F_{t_0}^{t_1}(x_0) generally does not remain normal to M(t_1). Therefore, in addition to a normal component of length ρ(x_0, n_0), the advected normal also develops a tangential component of length σ(x_0, n_0) (cf. Fig. 7). If ρ(x_0, n_0) > 1, then the evolving material surface strictly repels nearby trajectories by the end of the time interval [t_0, t_1]. Similarly, ρ(x_0, n_0) < 1 signals that M(t) strictly attracts nearby trajectories along its normal directions. A repelling (attracting) LCS over the interval [t_0, t_1] can be defined as a material surface M(t) whose net repulsion ρ(x_0, n_0) is pointwise maximal (minimal) with respect to perturbations of the initial normal vector field n_0. As earlier, we refer to repelling and attracting LCSs collectively as hyperbolic LCSs. Solving these local extremum principles for hyperbolic LCSs in two and three dimensions yields explicit unit normal vector fields for hyperbolic LCSs. The existence of such normal surfaces also requires a Frobenius-type integrability condition in the three-dimensional case. All these results can be summarized as follows: Repelling LCSs are obtained as most repelling shrink lines, starting from local maxima of the largest Cauchy–Green eigenvalue field. Attracting LCSs are obtained as most attracting stretch lines, starting from local minima of the smallest Cauchy–Green eigenvalue field. These starting points serve as initial positions of exceptional saddle-type trajectories in the flow. An example of the local variational computation of a repelling LCS is shown in Fig. 8. The computational algorithm is available in LCS Tool. In 3D flows, instead of solving the Frobenius PDE for hyperbolic LCSs, an easier approach is to construct intersections of hyperbolic LCSs with select 2D planes, and fit a surface numerically to a large number of such intersection curves. Let us denote the unit normal of a 2D plane Π by n_Π. The intersection curve of a 2D repelling LCS surface with the plane Π is normal to both n_Π and to the unit normal of the LCS. As a consequence, an intersection curve x_0(s) satisfies a reduced ODE whose right-hand side is the cross product of the LCS normal with n_Π; its trajectories we refer to as reduced shrink lines. (Strictly speaking, this equation is not an ordinary differential equation, given that its right-hand side is not a vector field, but a direction field, which is generally not globally orientable.) Intersections of hyperbolic LCSs with Π are fastest-contracting reduced shrink lines. Determining such shrink lines in a smooth family of nearby planes, then fitting a surface to the curve family so obtained, yields a numerical approximation of a 2D repelling LCS. 
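A minimal sketch of shrink-line integration in 2D (an editorial illustration; C is assumed to be a user-supplied function returning the 2 × 2 Cauchy–Green tensor at a point, for instance built from the finite-difference flow-map gradient of the earlier sketch; the local orientation fix is needed because eigenvector fields carry no global sign):

import numpy as np

def shrink_line(C, x_start, h=1e-3, steps=5000):
    # Integrate a curve tangent to the weaker Cauchy-Green eigenvector,
    # starting from (e.g.) a local maximum of the larger eigenvalue field.
    pts = [np.asarray(x_start, float)]
    prev = None
    for _ in range(steps):
        w, V = np.linalg.eigh(C(pts[-1]))
        xi1 = V[:, 0]                  # eigenvector of the smaller eigenvalue
        if prev is not None and xi1 @ prev < 0:
            xi1 = -xi1                 # keep a locally consistent orientation
        prev = xi1
        pts.append(pts[-1] + h * xi1)
    return np.array(pts)
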
Shearless LCSs are found to be null-geodesics of a Lorentzian metric tensor constructed from the Cauchy–Green strain tensor. Such null-geodesics can be proven to be tensorlines of the Cauchy–Green strain tensor, i.e., curves tangent to the direction fields formed by the strain eigenvector fields ξ_1(x_0) and ξ_2(x_0). Specifically, repelling LCSs are trajectories of the ξ_1 direction field starting from local maxima of the λ_2 eigenvalue field. Similarly, attracting LCSs are trajectories of the ξ_2 direction field starting from local minima of the λ_1 eigenvalue field. This agrees with the conclusion of the local variational theory of LCSs. The geodesic approach, however, also sheds more light on the robustness of hyperbolic LCSs: hyperbolic LCSs only prevail as stationary curves of the averaged shear functional under variations that leave their endpoints fixed. This is to be contrasted with parabolic LCSs (see below), which are also shearless LCSs but prevail as stationary curves of the shear functional even under arbitrary variations. As a consequence, individual trajectories are objective, and statements about the coherent structures they form should also be objective. A sample application is shown in Fig. 9, where the sudden appearance of a hyperbolic core (the strongest attracting part of a stretchline) within the oil spill caused the notable Tiger-Tail instability in the shape of the oil spill. Elliptic LCSs Elliptic LCSs are closed and nested material surfaces that act as building blocks of the Lagrangian equivalents of vortices, i.e., rotation-dominated regions of trajectories that generally traverse the phase space without substantial stretching or folding. They mimic the behavior of Kolmogorov–Arnold–Moser (KAM) tori that form elliptic regions in Hamiltonian systems. Their coherence can be approached either through their homogeneous material rotation or through their homogeneous stretching properties. Rotational coherence from the polar rotation angle (PRA) As a simplest approach to rotational coherence, one may define an elliptic LCS as a tubular material surface along which small material volumes complete the same net rotation over the time interval [t_0, t_1] of interest. A challenge is that in each material volume element, all individual material fibers (tangent vectors to trajectories) perform different rotations. To obtain a well-defined bulk rotation for each material element, one may employ the unique left and right polar decompositions of the flow gradient in the form ∇F_{t_0}^{t_1} = R U = V R, where the proper orthogonal tensor R is called the rotation tensor and the symmetric, positive definite tensors V and U are called the left stretch tensor and right stretch tensor, respectively. Since the Cauchy–Green strain tensor can be written as C = (R U)ᵀ (R U) = U², the local material straining described by the eigenvalues and eigenvectors of C is fully captured by the singular values and singular vectors of the stretch tensors. The remaining factor in the deformation gradient is represented by R, interpreted as the bulk solid-body rotation component of volume elements. In planar motions, this rotation is defined relative to the normal of the plane. In three dimensions, the rotation is defined relative to the axis defined by the eigenvector of R corresponding to its unit eigenvalue. In higher-dimensional flows, the rotation tensor cannot be viewed as a rotation about a single axis. In two and three dimensions, therefore, there exists a polar rotation angle (PRA) θ_{t_0}^{t_1}(x_0) that characterises the material rotation generated by R for a volume element centered at the initial condition x_0. This PRA is well-defined up to multiples of 2π. 
For two-dimensional flows, the PRA can be computed in closed form from the invariants (entries) of the deformation gradient, including a four-quadrant version obtained from the signs of the cosine and sine of the angle. For three-dimensional flows, the PRA can again be computed in closed form from the invariants of the rotation tensor, with the help of the Levi-Civita symbol ε_ijk and the eigenvector e of R corresponding to its unit eigenvalue. The time t_0 positions of elliptic LCSs are visualized as tubular level sets of the PRA distribution θ_{t_0}^{t_1}(x_0). In two dimensions, therefore, (polar) elliptic LCSs are simply closed level curves of the PRA, which turn out to be objective. In three dimensions, (polar) elliptic LCSs are toroidal or cylindrical level surfaces of the PRA, which are, however, not objective and hence will generally change in rotating frames. Coherent Lagrangian vortex boundaries can be visualized as outermost members of nested families of elliptic LCSs. Two- and three-dimensional examples of elliptic LCSs revealed by tubular level surfaces of the PRA are shown in Figs. 10a–b. Rotational coherence from the Lagrangian-averaged vorticity deviation (LAVD) The level sets of the PRA are objective in two dimensions but not in three dimensions. An additional shortcoming of the polar rotation tensor is its dynamical inconsistency: polar rotations computed over adjacent sub-intervals of a total deformation do not sum up to the rotation computed for the full time interval of the same deformation. Therefore, while R is the closest rotation tensor to ∇F_{t_0}^{t_1} in the Frobenius norm over a fixed time interval [t_0, t_1], these piecewise best fits do not form a family of rigid-body rotations as t_0 and t_1 are varied. For this reason, rotations predicted by the polar rotation tensor over varying time intervals deviate from the experimentally observed mean material rotation of fluid elements. An alternative to the classic polar decomposition provides a resolution to both the non-objectivity and the dynamic inconsistency issues. Specifically, the dynamic polar decomposition (DPD) of the deformation gradient is also of the form ∇F_{t_0}^{t_1} = O M = N O, where the proper orthogonal tensor O is the dynamic rotation tensor and the non-singular tensors N and M are the left dynamic stretch tensor and right dynamic stretch tensor, respectively. Just as the classic polar decomposition, the DPD is valid in any finite dimension. Unlike the classic polar decomposition, however, the dynamic rotation and stretch tensors are obtained from solving linear differential equations, rather than from matrix manipulations. In particular, O is the deformation gradient of a purely rotational flow generated by the spin tensor along the trajectory, and M is the deformation gradient of a purely straining flow generated by the strain-rate tensor along the trajectory. The dynamic rotation tensor can further be factorized into two deformation gradients: one for a spatially uniform (rigid-body) rotation, and one that deviates from this uniform rotation. As a spatially independent rigid-body rotation, the proper orthogonal relative rotation tensor is dynamically consistent, serving as the deformation gradient of the relative rotation flow. In contrast, the proper orthogonal mean rotation tensor is the deformation gradient of the mean-rotation flow. The dynamic consistency of the relative rotation implies that the total angle swept by it around its own axis of rotation is dynamically consistent. This intrinsic rotation angle is also objective, and turns out to equal one half of the Lagrangian-averaged vorticity deviation (LAVD). The LAVD is defined as the trajectory-averaged magnitude of the deviation of the vorticity from its spatial mean. With the vorticity ω(x, t) and its spatial mean ω̄(t) = (1/vol U(t)) ∫_{U(t)} ω(x, t) dV, the LAVD over a time interval [t_0, t_1] therefore takes the form LAVD_{t_0}^{t_1}(x_0) = ∫_{t_0}^{t_1} |ω(x(s; x_0), s) − ω̄(s)| ds, with U(t) denoting the (possibly time-varying) domain of definition of the velocity field v(x, t). This result applies both in two and three dimensions, and enables the computation of a well-defined, objective and dynamically consistent material rotation angle along any trajectory. 
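A minimal sketch of the LAVD integral for 2D gridded data (an editorial illustration; the data layout and the interp helper are hypothetical stand-ins, e.g. for scipy.interpolate.RegularGridInterpolator; in 3D one would integrate the magnitude of the vorticity deviation vector):

import numpy as np

def lavd(vort, traj, t, interp):
    # vort[k]: scalar vorticity field on the grid at time t[k] (2D flow)
    # traj[k]: advected particle positions at time t[k], assumed precomputed
    # interp(field, pts): hypothetical interpolator evaluating a gridded
    #                     field at the particle positions
    acc = np.zeros(len(traj[0]))
    for k in range(len(t) - 1):
        dev = np.abs(interp(vort[k], traj[k]) - vort[k].mean())
        acc += dev * (t[k + 1] - t[k])   # left-endpoint quadrature of the integral
    return acc
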
Outermost complex tubular level curves of the LAVD define initial positions of rotationally coherent material vortex boundaries in two-dimensional unsteady flows (see Fig. 11a). By construction, these boundaries may exhibit transverse filamentation, but any developing filament keeps rotating with the boundary, without global transverse departure from the material vortex. (Exceptions are inviscid flows where such a global departure of LAVD level surfaces from a vortex is possible, as fluid elements preserve their material rotation rate for all times.) Remarkably, centers of rotationally coherent vortices (defined by local maxima of the LAVD field) can be proven to be the observed centers of attraction or repulsion for finite-size (inertial) particle motion in geophysical flows (see Fig. 11b). In three-dimensional flows, tubular level surfaces of the LAVD define initial positions of two-dimensional eddy boundary surfaces (see Fig. 11c) that remain rotationally coherent over a time interval (see Fig. 11d). Stretching-based coherence from a local variational approach: Shear surfaces The local variational theory of elliptic LCSs targets material surfaces that locally maximize material shear over the finite time interval [t_0, t_1] of interest. This means that at each initial point x_0 of an elliptic LCS M(t_0), the tangent space T_{x_0}M(t_0) is the plane along which the local Lagrangian shear is maximal (cf. Fig. 7). Introducing an appropriate two-dimensional shear vector field and three-dimensional shear normal vector field constructed from the Cauchy–Green invariants, the criteria for two- and three-dimensional elliptic LCSs can be phrased as tangency and normality conditions with respect to these fields. For 3D flows, as in the case of hyperbolic LCSs, solving the Frobenius PDE can be avoided. Instead, one can construct intersections of a tubular elliptic LCS with select 2D planes, and fit a surface numerically to a large number of these intersection curves. As for hyperbolic LCSs above, let us denote the unit normal of a 2D plane Π by n_Π. Again, the intersection curves of elliptic LCSs with the plane Π are normal to both n_Π and to the unit normal of the LCS. As a consequence, an intersection curve satisfies the reduced shear ODE whose trajectories we refer to as reduced shear lines. (Strictly speaking, the reduced shear ODE is not an ordinary differential equation, given that its right-hand side is not a vector field, but a direction field, which is generally not globally orientable.) Intersections of tubular elliptic LCSs with Π are limit cycles of the reduced shear ODE. Determining such limit cycles in a smooth family of nearby planes, then fitting a surface to the limit cycle family, yields a numerical approximation of a 2D shear surface. A three-dimensional example of this local variational computation of an elliptic LCS is shown in Fig. 11. Stretching-based coherence from a global variational approach: lambda-lines As noted above under hyperbolic LCSs, a global variational approach has been developed in two dimensions to capture elliptic LCSs as closed stationary curves of the material-line-averaged Lagrangian strain functional. 
Such curves turn out to be closed null-geodesics of the generalized Green–Lagrange strain tensor family $E_\lambda = \tfrac12\bigl(C_{t_0}^t-\lambda^2 I\bigr)$, where $\lambda$ is a positive parameter (Lagrange multiplier). The closed null-geodesics can be shown to coincide with limit cycles of the family of direction fields $\eta_\lambda^{\pm}=\sqrt{\tfrac{\lambda_2-\lambda^2}{\lambda_2-\lambda_1}}\,\xi_1\pm\sqrt{\tfrac{\lambda^2-\lambda_1}{\lambda_2-\lambda_1}}\,\xi_2$, with $\lambda_i$ and $\xi_i$ the eigenvalues and eigenvectors of $C_{t_0}^t$. Note that for $\lambda=1$, the direction field $\eta_1^{\pm}$ coincides with the direction field for shearlines obtained above from the local variational theory of LCSs. Trajectories of $\eta_\lambda^{\pm}$ are referred to as $\lambda$-lines. Remarkably, they are initial positions of material lines that are infinitesimally uniformly stretching under the flow map $F_{t_0}^t$. Specifically, any subset of a $\lambda$-line is stretched by a factor of $\lambda$ between the times $t_0$ and $t$. As an example, Fig. 13 shows elliptic LCSs identified as closed $\lambda$-lines within the Great Red Spot of Jupiter. Parabolic LCSs Parabolic LCSs are shearless material surfaces that delineate cores of jet-type sets of trajectories. Such LCSs are characterized both by low stretching (because they are inside a non-stretching structure) and by low shearing (because material shearing is minimal in jet cores). Diagnostic approach: Finite-time Lyapunov exponents (FTLE) trenches Since both shearing and stretching are as low as possible along a parabolic LCS, one may seek initial positions of such material surfaces as trenches of the FTLE field. A geophysical example of a parabolic LCS (generalized jet core) revealed as a trench of the FTLE field is shown in Fig. 14a. Global variational approach: Heteroclinic chains of null-geodesics In two dimensions, parabolic LCSs are also solutions of the global shearless variational principle described above for hyperbolic LCSs. As such, parabolic LCSs are composed of shrink lines and stretch lines that represent geodesics of an appropriate Lorentzian metric tensor. In contrast to hyperbolic LCSs, however, parabolic LCSs satisfy more robust boundary conditions: they remain stationary curves of the material-line-averaged shear functional even under variations to their endpoints. This explains the high degree of robustness and observability that jet cores exhibit in mixing. This is to be contrasted with the highly sensitive and fading footprint of hyperbolic LCSs away from strongly hyperbolic regions in diffusive tracer patterns. Under variable endpoint boundary conditions, initial positions of parabolic LCSs turn out to be alternating chains of shrink lines and stretch lines that connect singularities of these line fields. These singularities occur at points where $C_{t_0}^t=\mathrm{Id}$, and hence no infinitesimal deformation takes place between the two time instances $t_0$ and $t$. Fig. 14b shows an example of parabolic LCSs in Jupiter's atmosphere, located using this variational theory. The chevron-type shapes forming out of circular material blobs positioned along the jet core are characteristic of tracer deformation near parabolic LCSs. Software packages for LCS computations Particle advection and Finite-Time Lyapunov Exponent calculation: ManGen (source code) LCS MATLAB Kit (source code) FlowVC (source code) cuda_ftle (source code) CTRAJ Newman (source code) FlowTK (source code) Jupyter notebooks that guide you through methods used to extract advective, diffusive, stochastic and active transport barriers from discrete velocity data: TBarrier (source code) See also Turbulence Chaos theory Dynamical systems theory Spectral submanifold Eulerian coherent structure Coherent turbulent structure References Further reading Dynamical systems Fluid dynamics Turbulence Chaos theory Flow visualization
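The FTLE field used in the trench diagnostic above is typically computed from a gridded flow map. The following is a minimal sketch of such a computation for a 2D flow; the function name, grid layout, and use of NumPy are assumptions of this sketch, not details from the source:

```python
import numpy as np

def ftle_field(flow_map_x, flow_map_y, dx, dy, T):
    """
    Finite-time Lyapunov exponent on a regular 2D grid.

    flow_map_x, flow_map_y: (ny, nx) arrays holding the final x- and
    y-positions of tracers seeded on a grid with spacings dx (columns)
    and dy (rows).  T is the elapsed integration time t - t0.
    """
    # Deformation gradient of the flow map via central differences.
    dFx_dy, dFx_dx = np.gradient(flow_map_x, dy, dx)
    dFy_dy, dFy_dx = np.gradient(flow_map_y, dy, dx)

    ftle = np.empty_like(flow_map_x)
    ny, nx = flow_map_x.shape
    for i in range(ny):
        for j in range(nx):
            F = np.array([[dFx_dx[i, j], dFx_dy[i, j]],
                          [dFy_dx[i, j], dFy_dy[i, j]]])
            C = F.T @ F                         # right Cauchy-Green strain tensor
            lam_max = np.linalg.eigvalsh(C)[-1] # largest eigenvalue of C
            ftle[i, j] = np.log(lam_max) / (2.0 * abs(T))
    return ftle
```

Trenches (local minima ridges) of the returned field mark candidate parabolic LCS positions, while its ridges mark hyperbolic LCS candidates.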
Lagrangian coherent structure
Physics,Chemistry,Mathematics,Engineering
6,322
634,780
https://en.wikipedia.org/wiki/Bohr%20compactification
In mathematics, the Bohr compactification of a topological group G is a compact Hausdorff topological group H that may be canonically associated to G. Its importance lies in the reduction of the theory of uniformly almost periodic functions on G to the theory of continuous functions on H. The concept is named after Harald Bohr, who pioneered the study of almost periodic functions on the real line. Definitions and basic properties Given a topological group G, the Bohr compactification of G is a compact Hausdorff topological group Bohr(G) together with a continuous homomorphism b: G → Bohr(G) which is universal with respect to homomorphisms into compact Hausdorff groups; this means that if K is another compact Hausdorff topological group and f: G → K is a continuous homomorphism, then there is a unique continuous homomorphism Bohr(f): Bohr(G) → K such that f = Bohr(f) ∘ b. Theorem. The Bohr compactification exists and is unique up to isomorphism. We will denote the Bohr compactification of G by Bohr(G) and the canonical map by b. The correspondence G ↦ Bohr(G) defines a covariant functor on the category of topological groups and continuous homomorphisms. The Bohr compactification is intimately connected to the finite-dimensional unitary representation theory of a topological group. The kernel of b consists exactly of those elements of G which cannot be separated from the identity of G by finite-dimensional unitary representations. The Bohr compactification also reduces many problems in the theory of almost periodic functions on topological groups to that of functions on compact groups. A bounded continuous complex-valued function f on a topological group G is uniformly almost periodic if and only if the set of right translates gf, where (gf)(x) = f(xg), is relatively compact in the uniform topology as g varies through G. Theorem. A bounded continuous complex-valued function f on G is uniformly almost periodic if and only if there is a continuous function f1 on Bohr(G) (which is uniquely determined) such that f = f1 ∘ b. Maximally almost periodic groups Topological groups for which the Bohr compactification mapping is injective are called maximally almost periodic (or MAP groups). For example, all Abelian groups, all compact groups, and all free groups are MAP. In the case that G is a locally compact connected group, MAP groups are completely characterized: they are precisely products of compact groups with vector groups of finite dimension. See also References Notes Bibliography Further reading Topological groups Harmonic analysis Compactification (mathematics)
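The universal property defined above can be summarized in a single display; this restates the article's definition in compact notation:

$$f = \mathrm{Bohr}(f)\circ b,\qquad b\colon G\to\mathrm{Bohr}(G),\qquad f\colon G\to K\ \text{continuous},\ K\ \text{compact Hausdorff},$$

with $\mathrm{Bohr}(f)$ uniquely determined by $f$. For locally compact abelian $G$, the Bohr compactification can be identified with the Pontryagin dual of $\widehat{G}_d$, the dual group of $G$ retopologized with the discrete topology; this identification is a standard fact quoted here for orientation, not a claim drawn from this article.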
Bohr compactification
Mathematics
517
11,741,990
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%20inhibitor%20protein
A cyclin-dependent kinase inhibitor protein (also known as a CKI, CDI, or CDKI) is a protein that inhibits the activity of the enzyme cyclin-dependent kinase (CDK) and its cyclin partner, stopping the cell cycle if conditions are unfavorable; CKIs therefore act as tumor suppressors. Cell cycle progression is stopped by cyclin-dependent kinase inhibitor proteins at the G1 phase. CKIs are vital proteins within the control system that monitor whether the processes of DNA synthesis, mitosis, and cytokinesis are properly coordinated with one another. When a malfunction hinders the successful completion of DNA synthesis in the G1 phase, it triggers a signal that delays or halts the progression to the S phase. Cyclin-dependent kinase inhibitor proteins are thus essential in the regulation of the cell cycle: if mutated cells slip past the cell cycle checkpoints, the result can be various types of cancer. CKI Inactivation Process Cyclin-dependent kinase inhibitor proteins work by inactivating CDKs. The typical inactivation mechanism of the CDK/cyclin complex is based on binding of a CDK inhibitor to the CDK–cyclin complex and a partial conformational rotation of the CDK. The cyclin is thus forced to release the T loop and detach from the CDK. The CDK inhibitor then inserts a small helix into the cleft, blocking the cleft and hence the active site of the CDK. Eventually, this forces ATP out of the aperture of the CDK and deactivates the enzyme. (Cyclin-dependent kinases use ATP as the phosphate donor when they phosphorylate serine and threonine residues.) Human cells contain many different cyclins that bind to different CDKs, and CDKs and cyclins appear and activate at specific cell cycle phases. Seven cyclin-dependent kinase inhibitor proteins have been identified: p15, p16, p18, p19, p21, p27, and p57. These cyclin-dependent kinase inhibitor proteins emerge only in their specific cell cycle phase; each cyclin/CDK complex is likewise specific to a particular part of the cell cycle, so each CDK and cyclin can be identified by its location in the cell cycle. CKIs fall into two categories: those that inhibit CDK1, CDK2, and CDK5, and those that inhibit CDK4 and CDK6. The cell cycle blocks these inhibitors produce at the G1/S and G2/M checkpoints are consistent with the inhibition profiles of the enzymes. Discovery The discovery of cyclin-dependent kinase inhibitor proteins in 1990 changed how we think about cell cycle control and has steered research in various other fields of study such as developmental biology, cell biology, and cancer research. The discovery of the first CKIs, Far1 in yeast and p21 in mammals, has led to sustained research on this family of molecules. Further research has demonstrated that CDKs, cyclins, and CKIs play essential roles in processes such as transcription, epigenetic regulation, metabolism, stem cell self-renewal, neuronal functions, and spermatogenesis. In mammals, p27, a cyclin-dependent kinase inhibitor protein, helps control CDK activity in G1. Also, the INK4 proteins help stop G1-CDK activity when they encounter anti-proliferative signals in the environment. CKIs thus provide the specific inhibitory signals that keep the cell from entering the S phase. Sic1 in budding yeast and Roughex (Rux) in Drosophila make analogous contributions to the stability of G1 cells; they are expressed at higher levels in G1 cells to ensure that no S- or M-phase CDK activity is present in the cell.
Structure The cyclin-dependent kinase (CDK) family of serine/threonine kinases, together with cyclins and CKIs, plays an integral role in regulating the eukaryotic cell cycle. The structure of the CDK2–cyclin A–p27 complex was determined by crystallography, demonstrating that the inhibitor p27 stretches across the top of the cyclin–CDK complex. The amino terminus of p27 has an RXL motif that engages a hydrophobic patch of cyclin A. The carboxyl-terminal end of the p27 fragment interacts with the beta sheet of the CDK, interfering with its structure; p27 slides into the ATP-binding site of CDK2 and inhibits ATP binding. Clinical significance Role in cancer: Cyclin-dependent kinase inhibitor (CKI) mutants are frequent in human cancers. The function of a CKI is to stop cell growth when there are mistakes due to DNA damage. Once a cell is stopped at a checkpoint due to DNA damage, either the damage is repaired or the cell is induced to undergo apoptosis. However, if mutated CKIs fail to stop the cell, cyclin D is transcribed; it moves into the cytoplasm and eventually activates a specific cyclin-dependent kinase (CDK). The active cyclin/CDK complex then phosphorylates proteins, activates them, and sends the cell into the next phase of the cell cycle. Since the cell with damaged DNA is not stopped, it eventually moves out of the G1 checkpoint and prepares for DNA synthesis. The resulting uncontrolled cell growth, made possible by the inactivation of the CKIs, can lead to cancer. Associated gene and target References External links Protein domains
Cyclin-dependent kinase inhibitor protein
Biology
1,176
26,908,280
https://en.wikipedia.org/wiki/Organorhenium%20chemistry
Organorhenium chemistry describes the compounds with Re−C bonds. Because rhenium is a rare element, relatively few applications exist, but the area has been a rich source of concepts and a few useful catalysts. General features Rhenium exists in ten known oxidation states from −3 to +7, excepting −2, and all but Re(−3) are represented by organorhenium compounds. Most are prepared from salts of perrhenate and related binary oxides. The halides, e.g., ReCl5, are also useful precursors, as are certain oxychlorides. A noteworthy feature of organorhenium chemistry is the coexistence of oxide and organic ligands in the same coordination sphere. Carbonyl compounds Dirhenium decacarbonyl is a common entry point to other rhenium carbonyls. The general patterns are similar to those of the related manganese carbonyls. It is possible to reduce this dimer with sodium amalgam to Na[Re(CO)5], with rhenium in the formal oxidation state −1. Bromination of dirhenium decacarbonyl gives bromopentacarbonylrhenium(I), which is then reduced with zinc and acetic acid to pentacarbonylhydridorhenium: Re2(CO)10 + Br2 → 2 Re(CO)5Br Re(CO)5Br + Zn + HOAc → Re(CO)5H + ZnBr(OAc) Bromopentacarbonylrhenium(I) is readily decarbonylated. In refluxing water, it forms the triaquo cation: Re(CO)5Br + 3 H2O → [Re(CO)3(H2O)3]Br + 2 CO Re(CO)5Br reacts with tetraethylammonium bromide to give the anionic tribromide: Re(CO)5Br + 2 NEt4Br → [NEt4]2[Re(CO)3Br3] + 2 CO Cyclopentadienyl complexes One of the first transition metal hydride complexes to be reported was (C5H5)2ReH. A variety of half-sandwich compounds have been prepared from (C5H5)Re(CO)3 and (C5Me5)Re(CO)3. Notable derivatives include the electron-precise oxide (C5Me5)ReO3 and (C5H5)2Re2(CO)4. Re-alkyl and aryl compounds Rhenium forms a variety of alkyl and aryl derivatives, often with pi-donor coligands such as oxo groups. Well known is methylrhenium trioxide ("MTO"), CH3ReO3, a volatile, colourless solid and a rare example of a stable high-oxidation-state metal alkyl complex. This compound has been used as a catalyst in some laboratory experiments. It can be prepared by many routes; a typical method is the reaction of Re2O7 and tetramethyltin: Re2O7 + (CH3)4Sn → CH3ReO3 + (CH3)3SnOReO3 Analogous alkyl and aryl derivatives are known. Although PhReO3 is unstable and decomposes at –30 °C, the corresponding sterically hindered mesityl and 2,6-xylyl derivatives (MesReO3 and 2,6-(CH3)2C6H3ReO3) are stable at room temperature. The electron-poor 4-trifluoromethylphenylrhenium trioxide (4-CF3C6H4ReO3) is likewise relatively stable. MTO and other organylrhenium trioxides catalyze oxidation reactions with hydrogen peroxide, as well as olefin metathesis in the presence of a Lewis acid activator. Terminal alkynes yield the corresponding acid or ester, internal alkynes yield diketones, and alkenes give epoxides. MTO also catalyses the conversion of aldehydes and diazoalkanes into alkenes. Rhenium is also able to form complexes with fullerene ligands, such as Re2(PMe3)4H8(η2:η2-C60). Further reading Synthesis of Organometallic Compounds: A Practical Guide, Sanshiro Komiya, Ed. S. Komiya, M. Hurano, 1997. Pericles Stavropoulos, Peter G. Edwards, Geoffrey Wilkinson, Majid Motevalli, K. M. Abdul Malik and Michael B. Hursthouse, "Oxoalkyls of rhenium-(V) and -(VI). X-Ray crystal structures of (Me4ReO)2Mg(thf)4, [(Me3SiCH2)4ReO]2Mg(thf)2, Re2O3Me6 and Re2O3(CH2SiMe3)6", J. Chem. Soc., Dalton Trans., 1985, pp. 2167–2175. References Rhenium compounds Organometallic compounds
Organorhenium chemistry
Chemistry
1,099
11,548,017
https://en.wikipedia.org/wiki/Leandria%20momordicae
Leandria momordicae is an ascomycete fungus that is a plant pathogen. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Enigmatic Ascomycota taxa Fungus species
Leandria momordicae
Biology
50
44,557,148
https://en.wikipedia.org/wiki/Laves%20graph
In geometry and crystallography, the Laves graph is an infinite and highly symmetric system of points and line segments in three-dimensional Euclidean space, forming a periodic graph. Three equal-length segments meet at 120° angles at each point, and all cycles use ten or more segments. It is the shortest possible triply periodic graph, relative to the volume of its fundamental domain. One arrangement of the Laves graph uses one out of every eight of the points in the integer lattice as its points, and connects all pairs of these points that are nearest neighbors, at distance √2. It can also be defined, divorced from its geometry, as an abstract undirected graph, a covering graph of the complete graph on four vertices. The graph is named after Fritz Laves, who first wrote about it as a crystal structure in 1932. It has also been called the K4 crystal, (10,3)-a network, diamond twin, triamond, and the srs net. The regions of space nearest each vertex of the graph are congruent 17-sided polyhedra that tile space. Its edges lie on diagonals of the regular skew polyhedron, a surface with six squares meeting at each integer point of space. Several crystalline chemicals have known or predicted structures in the form of the Laves graph. Thickening the edges of the Laves graph to cylinders produces a related minimal surface, the gyroid, which appears physically in certain soap film structures and in the wings of butterflies. Constructions From the integer grid The vertices of the Laves graph can be defined by selecting one out of every eight points in the three-dimensional integer lattice, and forming their nearest neighbor graph. Specifically, one chooses eight suitable base points and all the other points formed by adding multiples of four to their coordinates. The edges of the Laves graph connect pairs of points whose Euclidean distance from each other is the square root of two, √2, as the points of each pair differ by one unit in two coordinates, and are the same in the third coordinate. The edges meet at 120° angles at each vertex, in a flat plane. All pairs of vertices that are non-adjacent are farther apart than this. The edges of the resulting geometric graph are diagonals of a subset of the faces of the regular skew polyhedron with six square faces per vertex, so the Laves graph is embedded in this skew polyhedron. It is possible to choose a larger set of one out of every four points of the integer lattice, so that the graph of distance-√2 pairs of this larger set forms two mirror-image copies of the Laves graph, disconnected from each other, with all other pairs of points farther apart. As a covering graph As an abstract graph, the Laves graph can be constructed as the maximal abelian covering graph of the complete graph K4. Being an abelian covering graph of K4 means that the vertices of the Laves graph can be four-colored such that each vertex has neighbors of the other three colors, and so that there are color-preserving symmetries taking any vertex to any other vertex with the same color. For the Laves graph in its geometric form with integer coordinates, these symmetries are translations that add even numbers to each coordinate (additionally, the offsets of all three coordinates must be congruent modulo four). When applying two such translations in succession, the net translation is the same irrespective of their order: they commute with each other, forming an abelian group. The translation vectors of this group form a three-dimensional lattice.
Finally, being a maximal abelian covering graph means that there is no other covering graph of K4 involving a higher-dimensional lattice. This construction justifies an alternative name of the Laves graph, the K4 crystal. A maximal abelian covering graph can be constructed from any finite graph G; applied to K4, the construction produces the (abstract) Laves graph, but does not give it the same geometric layout. Choose a spanning tree of G, let d be the number of edges that are not in the spanning tree (in this case, three non-tree edges), and choose a distinct unit vector in Z^d for each of these non-tree edges. Then, fix the set of vertices of the covering graph to be the ordered pairs (v, w) where v is a vertex of G and w is a vector in Z^d. For each such pair, and each edge vu adjacent to v in G, make an edge from (v, w) to (u, w ± ε), where ε is the zero vector if vu belongs to the spanning tree, and is otherwise the basis vector associated with vu, and where the plus or minus sign is chosen according to the direction the edge is traversed. The resulting graph is independent of the chosen spanning tree, and the same construction can also be interpreted more abstractly using homology. Using the same construction, the hexagonal tiling of the plane is the maximal abelian covering graph of the three-edge dipole graph, and the diamond cubic is the maximal abelian covering graph of the four-edge dipole. The d-dimensional integer lattice (as a graph with unit-length edges) is the maximal abelian covering graph of a graph with one vertex and d self-loops. As a unit distance graph The unit distance graph on the three-dimensional integer lattice has a vertex for each lattice point; each vertex has exactly six neighbors. It is possible to remove some of the points from the lattice, so that each remaining point has exactly three remaining neighbors, and so that the induced subgraph of these points has no cycles shorter than ten edges. There are four ways to do this, one of which is isomorphic as an abstract graph to the Laves graph. However, its vertices are in different positions than in the more-symmetric, conventional geometric construction. Another subgraph of the simple cubic net isomorphic to the Laves graph is obtained by removing half of the edges in a certain way. The resulting structure, called the semi-simple cubic lattice, also has lower symmetry than the Laves graph itself. Properties The Laves graph is a cubic graph, meaning that there are exactly three edges at each vertex. Every pair of a vertex and adjacent edge can be transformed into every other such pair by a symmetry of the graph, so it is a symmetric graph. More strongly, for every two vertices u and v, every one-to-one correspondence between the three edges incident to u and the three edges incident to v can be realized by a symmetry. However, the overall structure is chiral: no sequence of translations and rotations can make it coincide with its mirror image. The girth of this structure is 10—the shortest cycles in the graph have 10 vertices—and 15 of these cycles pass through each vertex. The numbers of vertices at distance 0, 1, 2, ... from any vertex form the coordination sequence of the Laves graph. If the surrounding space is partitioned into the regions nearest each vertex—the cells of the Voronoi diagram of this structure—these form heptadecahedra with 17 faces each. They are plesiohedra, polyhedra that tile space isohedrally.
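Coordination sequences like the one mentioned above can be computed by breadth-first search from any adjacency rule. The sketch below is generic; since the Laves graph's base coordinates are not reproduced in this text, the demonstration uses the simple cubic lattice as a stand-in adjacency rule (its coordination sequence, 1, 6, 18, 38, ..., i.e. 4n² + 2 for n ≥ 1, is well known). Both function names are choices of this sketch:

```python
def coordination_sequence(start, neighbors, max_dist):
    """Count vertices at graph distance 0, 1, ..., max_dist from start (BFS shells)."""
    seen = {start}
    frontier = [start]
    counts = [1]
    for _ in range(max_dist):
        nxt = []
        for v in frontier:
            for w in neighbors(v):
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        counts.append(len(nxt))
        frontier = nxt
    return counts

def cubic_neighbors(v):
    """Adjacency rule for the simple cubic lattice: the six unit-step neighbors."""
    x, y, z = v
    return [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]

print(coordination_sequence((0, 0, 0), cubic_neighbors, 5))
# [1, 6, 18, 38, 66, 102]
```

Substituting a degree-three adjacency rule for the Laves graph's vertex set would produce its coordination sequence in the same way.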
Experimenting with the structures formed by these polyhedra led physicist Alan Schoen to discover the gyroid minimal surface, which is topologically equivalent to the surface obtained by thickening the edges of the Laves graph to cylinders and taking the boundary of their union. The Laves graph is the unique shortest triply-periodic network, in the following sense. Triply-periodic means repeating infinitely in all three dimensions of space, so a triply-periodic network is a connected geometric graph with a three-dimensional lattice of translational symmetries. A fundamental domain is any shape that can tile space with its translated copies under these symmetries. Any lattice has infinitely many choices of fundamental domain, of varying shapes, but they all have the same volume V. One can also measure the length of the edges of the network within a single copy of the fundamental domain; call this number L. Similarly to V, L does not depend on the choice of fundamental domain, as long as the domain boundary only crosses the edges, rather than containing parts of their length. The Laves graph has four symmetry classes of vertices (orbits), because the symmetries considered here are only translations, not the rotations needed to map these four classes into each other. Each symmetry class has one vertex in any fundamental domain, so the fundamental domain contains twelve half-edges, with total length L = 6√2. The volume of its fundamental domain is V = 32. From these two numbers, the dimensionless ratio L³/V is therefore 27√2/2 ≈ 19.09. This is in fact the minimum possible value: all triply-periodic networks have L³/V ≥ 27√2/2, with equality only in the case of the Laves graph. Physical examples Art A sculpture titled Bamboozle, by Jacobus Verhoeff and his son Tom Verhoeff, is in the form of a fragment of the Laves graph, with its vertices represented by multicolored interlocking acrylic triangles. It was installed in 2013 at the Eindhoven University of Technology. Molecular crystals The Laves graph has been suggested as an allotrope of carbon, analogous to the more common graphene and graphite carbon structures, which also have three bonds per atom at 120° angles. In graphene, adjacent atoms have the same bonding planes as each other, whereas in the Laves graph structure the bonding planes of adjacent atoms are twisted by an angle of approximately 70.5° around the line of the bond. However, this hypothetical carbon allotrope turns out to be unstable. The Laves graph may also give a crystal structure for boron, one which computations predict should be stable. Other chemicals that may form this structure include SrSi2 (from which the "srs net" name derives) and elemental nitrogen, as well as certain metal–organic frameworks and cyclic hydrocarbons. The electronic band structure for the tight-binding model of the Laves graph has been studied, showing the existence of Dirac and Weyl points in this structure. Other The structure of the Laves graph, and of gyroid surfaces derived from it, has also been observed experimentally in soap-water systems, and in the chitin networks of butterfly wing scales. References External links Crystallography Infinite graphs Regular graphs
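The fundamental-domain arithmetic above can be assembled from numbers stated in the text (twelve half-edges of length √2/2 and a fundamental-domain volume of 32); treating L³/V as the dimensionless ratio in question is an assumption of this sketch:

$$L = 12\cdot\frac{\sqrt{2}}{2} = 6\sqrt{2}\approx 8.49,\qquad V = 32,\qquad \frac{L^3}{V}=\frac{(6\sqrt{2})^3}{32}=\frac{432\sqrt{2}}{32}=\frac{27\sqrt{2}}{2}\approx 19.09.$$

For comparison, the same computation for the simple cubic lattice (one vertex, six half-edges of length 1/2 per unit cell) gives L³/V = 27, larger than the Laves value, as the minimality claim requires.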
Laves graph
Physics,Chemistry,Materials_science,Mathematics,Engineering
2,106
2,115,222
https://en.wikipedia.org/wiki/Banert%20cascade
The Banert cascade is an organic reaction in which an NH-1,2,3-triazole is prepared from a propargyl halide or sulfate and sodium azide in a dioxane–water mixture at elevated temperatures. It is named after Klaus Banert, who first reported the process in 1989. This cascade reaction is unusual because it consists of two consecutive rearrangement reactions. The starting material is prepared from propargyl chloride and an aldehyde or ketone such as acetaldehyde. In the first step an azido compound is formed in situ by nucleophilic displacement of chloride by the azide ion. A [3,3]-sigmatropic rearrangement then takes place between the azide and the alkyne, giving the allenyl azide. This allene rearranges to the triazafulvene in a 6π electrocyclization. The exocyclic alkene in this intermediate is very electrophilic because the triazole group has a dipole moment of 5 debye. The reaction sequence concludes with nucleophilic attack of a second azide ion on this alkene, with further double-bond rearrangements and proton abstraction from a proton source. References Heterocycle forming reactions Rearrangement reactions Name reactions
Banert cascade
Chemistry
273
42,042,702
https://en.wikipedia.org/wiki/Band%20%28software%29
Band is a mobile community application that facilitates group communication. Created by Naver Corporation, the service is available on iOS, Android, and desktop. Users can create separate spaces for communicating with members of different groups, depending on the purpose of those groups. Types of groups include existing circles such as sports teams, marching bands, campus groups, faith groups, teams, friends, and family, as well as interest-based groups, like those for hobbyists, gamers, and fans, which are also searchable within the app. Band is a popular social app in Korea whose number of monthly active users surpassed that of Facebook by June 2014. Usage Secret Bands are mostly created by pre-existing offline groups whose members intend to stay connected, plan, and collaborate with each other via mobile. Examples of such groups include sports teams, clubs, classes, work teams, faith groups, organizations, and extended families. Once a member creates a group on the Band app, the member can invite other members by sharing a Band URL via SMS, messenger apps, or email. The members can then sign up on the Band app and join the group by clicking the URL. Closed and Public Bands are created by interest-based groups, like people with common life experiences or hobbies, gamers, fans, and such. Band launched a gaming platform in April 2014, allowing users to play games and compete with each other. In South Korea, Band has become an official communication tool in the Republic of Korea Army. Features Users can manage their notification preferences to select how, and whether, mobile notifications are received. Band allows a group's leader to see which of the members have read a post using the "read by" feature, so that users can easily track group members' participation. COVID-19 pandemic According to Naver, as the COVID-19 pandemic spread in the United States in March 2020, Band's monthly active users exceeded 2.5 million, a 17-fold increase from 2016. The number of new groups increased by 140% and new subscribers increased by 81%. In addition, the number of groups that performed live broadcasting increased by 512% and the number of viewers increased by 886%. As remote work and distance education become more common, Band is drawing attention as a remote communication tool. This is because it provides functions such as live broadcasting, attendance checks, voting, and group calls online. References External links Android (operating system) software IOS software South Korean social networking websites Instant messaging clients Internet properties established in 2013 Multilingual websites Internet properties established in 2012 2012 software
Band (software)
Technology
532
6,247,216
https://en.wikipedia.org/wiki/HD%20172051
HD 172051 (86 G. Sagittarii) is a single, yellow-hued star in the southern constellation of Sagittarius. The star is barely bright enough to be seen with the naked eye, having an apparent visual magnitude of 5.85. Based upon an annual parallax shift of 76.64 mas, it is located some 43 light years from the Sun. It is moving away from the Sun with a radial velocity of +37 km/s. This ordinary G-type main-sequence star is considered a solar analog, having physical properties sufficiently similar to those of the Sun. It has a stellar classification of G5 V and is around 4.5 billion years old. Its mass is similar to the Sun's, although it is cooler and has a lower luminosity. Due to this similarity, HD 172051 has been selected as an early target star for both the Terrestrial Planet Finder and Darwin missions, which seek to find an Earth-like extrasolar planet. During a search for brown dwarf companions using the Hale Telescope in 2004, two candidate companions were identified at angular separations of 5 and 6 arcseconds. However, these were determined to be background stars. References Sources G-type main-sequence stars HD, 172051 Sagittarius (constellation) BD-21 5081 0722 Sagittarii, 86 172051 091438 6998
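The quoted distance follows from the parallax by the standard relation d[pc] = 1000/p[mas]; a worked check, not part of the source text:

$$d=\frac{1000}{76.64}\ \mathrm{pc}\approx 13.05\ \mathrm{pc}\approx 13.05\times 3.26\ \mathrm{ly}\approx 42.6\ \mathrm{ly},$$

consistent with the stated figure of some 43 light years.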
HD 172051
Astronomy
283
53,243,172
https://en.wikipedia.org/wiki/Eridanus%20II
The Eridanus II Dwarf is a low-surface brightness dwarf galaxy in the constellation Eridanus. Eridanus II was independently discovered by two groups in 2015, using data from the Dark Energy Survey (Bechtol et al., 2015; Koposov et al., 2015). This galaxy is probably a distant satellite of the Milky Way (Li et al., 2016). Eridanus II contains a centrally located globular cluster, and is the smallest, least luminous galaxy known to contain a globular cluster (Crnojević et al., 2016). Eridanus II is significant, in a general sense, because the widely accepted Lambda CDM cosmology predicts the existence of many more dwarf galaxies than have yet been observed. The search for just such bodies was one of the motivations for the ongoing Dark Energy Survey observations. Eridanus II has special significance because of its apparently stable globular cluster. The stability of this cluster, near the center of such a small, diffuse galaxy, places constraints on the nature of dark matter (Brandt 2016; Li et al., 2016). Discovery and history of observations Since the end of the twentieth century, the most widely accepted cosmologies have been built on the foundations of the ΛCDM model which, in turn, is founded on the bedrock of the Big Bang cosmologies of the 1960s and 1970s. In the simplest terms, ΛCDM adds dark energy (Λ) and cold dark matter (CDM) to the Big Bang in order to explain the major features of the universe we observe today. ΛCDM describes a universe whose mass is dominated by dark matter. In such a universe, galaxies might be thought of as accretions of normal (baryonic) matter onto the largest concentrations of dark matter. However, ΛCDM does not predict any particular scale of CDM concentrations (Koposov et al., 2015; Besla et al., 2010: 5). In fact, it suggests that there ought to be tens or hundreds of smaller dark matter bodies for each observable galaxy the size of our own Milky Way galaxy (Koposov et al., 2015; Bechtol et al., 2015). These should contain much less baryonic matter than a "normal" galaxy. Thus, we should observe many very faint satellite galaxies around the Milky Way. Until about 1990, however, only about 11 Milky Way satellites were known (Pawlowski et al., 2015; Bechtol et al., 2015). The difference between the number of satellites known and the number expected in ΛCDM is referred to as the "missing dwarf" or "substructure" problem. Simon & Geha (2007) also discuss various cosmological and astrophysical "fixes" which might reconcile theory and observation without requiring a great many new dwarf galaxies. Efforts have been underway to determine whether the predicted population of faint satellite galaxies could be observed, and many new dwarf satellites are now being reported. One of the most notable current efforts is the Dark Energy Survey (DES), which makes extensive use of one of the new generation of Chilean telescopes, the 4 m Blanco instrument at the Cerro Tololo Inter-American Observatory (Bechtol et al., 2015: 1). As of early 2016, the results have been promising, with over a dozen new satellite galaxies observed and reported. Eridanus II is one of these newly discovered satellites. The discovery was made independently by two groups working from the DES data, and their results were published simultaneously in 2015 (Bechtol et al., 2015; Koposov et al., 2015). The DES group and a third group of researchers conducted more detailed follow-up observations in late 2015, using both of the Magellan instruments at Las Campanas, Chile.
These observations included more detailed spectral data and also focused on Eridanus II's central globular cluster (Crnojević et al., 2016; Zaritsky et al., 2016; Li et al., 2016). Finally, Crnojević et al. (2016) also conducted observations in early 2016 using the Byrd Green Bank radio telescope at Green Bank, West Virginia, USA. Additional data have been obtained from a re-examination of older radio telescope surveys which included the region of the sky occupied by Eridanus II (Westmeier et al., 2015). Properties Location Eridanus II is located deep in the southern sky. Since Eridanus II is a faint, diffuse object, spread over several arc-minutes of the sky, its position cannot be stated with great precision. The most detailed observations are probably those of Crnojević et al. (2016), who report (J2000) celestial coordinates of RA 3h 44m 20.1s (56.0838°) and Dec −43° 32' 0.1" (−43.5338°). These correspond to galactic coordinates of l = 249.7835°, b = −51.6492°. Standing on the galactic plane at the position of the Sun, facing the center of the galaxy, Eridanus II would be on the right and below, about half-way down the sky from the horizontal. The distance to Eridanus II has been estimated using a variety of methods. All rely on fitting the observed stars to a curve (an isochrone) on a color-magnitude diagram (CMD), then comparing the luminosity of stars from the target galaxy with the luminosity of stars from equivalent positions on the CMD in galaxies of known distance, after various corrections for the estimated age and metallicity (derived in part from the curve-fitting process). See, e.g., Sand et al. (2012). The results have been fairly consistent: 330 kpc (1076 kly) (Bechtol et al., 2015), 380 kpc (1238 kly) (Koposov et al., 2015), and 366 ± 17 kpc (1193 ± 55 kly) (Crnojević et al., 2016). Whatever the exact distance value, Eridanus II is the most distant of currently known bodies which are likely satellites of the Milky Way (Id.). Velocity Determining whether or not Eridanus II is, in fact, a satellite galaxy depends in part on an understanding of its velocity. Li et al. (2016) have recently taken up that challenging series of measurements. Most of the difficulty relates to the fact that, while Eridanus II is distant in astronomical terms, it is too close in cosmological terms. Not only are spectral redshifts quite small at this distance, but the galaxy cannot be treated as a point object. Li et al. were forced to look at the spectra of individual stars, all of which were moving with respect to each other at speeds not much less than that of Eridanus II with respect to the observers, who were also moving at appreciable speeds around the center of the Earth, the Sun, and the center of the Milky Way galaxy. In spite of these difficulties, Li et al. were able to obtain a very tight distribution of velocities centering on 75.6 km/sec in a direction away from us. However, since the Sun's rotation about the center of the Milky Way is presently carrying us almost directly away from Eridanus II (i.e., towards the left of the observer described above), Eridanus II's motion is actually carrying it toward the center of the galaxy at about 67 km/sec (Li et al., 2016: 5, Table 1). While these observations solve the problem of radial velocity, the movement of Eridanus II towards the center of the Milky Way galaxy, they cannot solve the problem of transverse velocity, motion at right angles to the line between Eridanus II and the Milky Way. 
That is, we cannot determine whether Eridanus II is orbiting the Milky Way, or simply moving in its direction from outside the system. Li et al. (2016: 7–8) report that Eridanus II does not exhibit a "tail" or gradient of lower (or higher) velocity stars in a particular direction, which might give a clue to that galaxy's transverse velocity. However, they point out that an object similar to Eridanus II would need a total velocity of about 200 km/sec to escape capture by the Milky Way. Given its radial velocity of 75 km/sec, Eridanus II would need a transverse velocity of some 185 km/sec to avoid capture—certainly possible, but not likely. In addition, they point to the results of detailed simulation studies of the Local Group (Garrison-Kimmel et al., 2014). All objects situated similarly to Eridanus II in these simulations were determined to be satellites of the Milky Way (Li et al. (2016: 8)). For reasons to be discussed in the concluding section, most researchers now believe that Eridanus II is an extremely long-period (i.e., several billion years per orbit) satellite of the Milky Way, probably beginning only its second approach to our galaxy. Eridanus II is moving toward the center of the Milky Way at 67 km/sec. However, applying the current value of the Hubble Constant (i.e. about 76 km/sec/Mpc), the space between the two galaxies is also increasing at about 26 km/sec. The Hubble Constant is also believed to change over time, so that orbital dynamics on the scale of megaparsecs and billions of years cannot simply be computed using Newton's law of gravitation. In addition, the speed of light delay must be considered. The velocity measurements of Li et al. (2016) made use of light emitted by Eridanus II approximately one million years ago. At the present moment, Eridanus II is probably only around 300 kpc away (vs. the 380 kpc observed) and has accelerated significantly beyond the observed 67 km/sec toward the Milky Way. Size, shape, and rotation Eridanus II does not have a spherical shape, and its ellipticity (ε) has been estimated at 0.45 (Crnojević et al., 2016; Koposov et al., 2015). Its size depends on assumptions about mass distribution and three-dimensional structure. Crnojević et al. (2016) find that their data are consistent with a simple exponential distribution of mass and a half-light radius (a radius enclosing half the luminosity of the galaxy) of 277 ±14 pc (~890 light years), with an apparent half-light diameter of 4.6 arcmin to observers on Earth. A galactic structure of this small size is not expected to show signs of coherent rotation. In their studies of Eridanus II's velocity, Li et al. (2016) found no velocity gradient or anisotropy which would suggest coherent rotation. The material making up Eridanus II must orbit about the galactic center, but there is no evidence of a well-defined plane or concerted direction of rotation. Relationship to other objects A number of workers have speculated about an association between the Magellanic Clouds and various dwarf galaxies in the Local Group, including Eridanus II. The Magellanic Clouds are two satellite galaxies of the Milky Way, which are both presently about 60 kpc distant, and separated by 24 kpc from each other. This work is reviewed—briefly, but cogently—by Koposov et al. (2015: 16–17). Koposov and co-workers note that the Clouds show significant signs of distortion characteristic of tidal stress. 
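Two velocity figures quoted in the velocity discussion above can be verified with short calculations; the only assumptions are a Pythagorean split of the escape speed and a linear Hubble law with the article's values of H0 ≈ 76 km/s/Mpc and a present distance near 0.34 Mpc:

$$v_\perp=\sqrt{v_{\rm esc}^2-v_r^2}=\sqrt{200^2-75^2}\ \mathrm{km/s}\approx 185\ \mathrm{km/s},$$
$$v_{\rm Hubble}=H_0\,d\approx 76\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\times 0.34\ \mathrm{Mpc}\approx 26\ \mathrm{km/s}.$$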
This stress may have been induced by proximity to the Milky Way, but simulations suggest that it is more likely a result of interactions between the Clouds themselves (Besla et al., 2010; Diaz & Bekki, 2011). Koposov's group suggest that the Magellanic Clouds are of the right size and age to have been part of a loosely-bound association of small galaxies which has been captured by the Milky Way, resulting in a scatter of small galaxies, including Eridanus II, roughly aligned along the trajectory of the Clouds. As they note, the evidence for such a pre-existing association is not compelling, but it does explain an otherwise "alarming" number of small galaxies found along a relatively narrow celestial corridor. In addition, similar clusters of dwarf galaxies are known to inhabit specific corridors around other major galaxies in the Local Group. Pawlowski et al. (2015) also note Eridanus II's alignment with the Magellanic Clouds, but doubt that Eridanus II is properly part of a Magellanic cluster of dwarf galaxies because of its considerable distance from the other suspected members of the group. On the other hand, they argue for the existence of a well-defined plane running from the Andromeda Galaxy to the Milky Way. This plane, only 50 kpc (about 160 kly) thick, but up to 2 Mpc (6.5 million ly) wide, includes 10 presently-known dwarves, all more than 300 kpc from any of the major galaxies of the Local Group. These workers observe that Eridanus II is not as well confined to the plane as are other members, and suggest that this may have something to do with its distant alignment to the Magellanic Clouds. Stellar properties Stellar population and age The stars in Eridanus II are largely consistent with a very old (~10 billion years) and low-metal ([Fe/H] < −1) population, similar to other small dwarf galaxies as well as many globular clusters. Its color-magnitude diagram (CMD) shows a marked red horizontal branch (RHB), which sometimes marks a metal-rich population (Koposov et al., 2015: 11; Crnojević et al., 2016: 2–3). The Red Giant Branch (RGB) is relatively vertical, ruling out any large proportion of young (250 million years or less), metal-rich stars (Crnojević et al., 2016: 2–3). Nevertheless, the strength of the Horizontal Branch and the presence of an unexpectedly large number of stars to the left (i.e., bluer) side of the main sequence suggested that Eridanus II contained at least two populations of stars (Koposov et al., 2015; Crnojević et al., 2016). Based on these hints of underlying diversity, Crnojević et al. (2016) chose to reconstruct the CMD as the sum of two populations. They found a good fit with a model in which Eridanus II is composed of over 95% ancient stars, formed 10 billion years ago or more, together with a few percent of intermediate-age stars on the order of 3 billion years old. This general picture has been partially confirmed by Li et al. (2016), who showed that many apparently young stars in Eridanus II had velocities and spectra marking them as foreground contaminants—stars from the Milky Way galaxy which lie in the same part of the sky as Eridanus II. Luminosity and metallicity Based on their two-component model and the known distance to Eridanus II, Crnojević et al. (2016: 4) determined its absolute magnitude MV = −7.1 ± 0.3. Of the total light emitted by Eridanus II, they attributed 94% (~5.6 ± 1.5 × 10^4 L⊙) to the old stellar population, and 6% (~3.5 ± 3 × 10^3 L⊙) to the intermediate-age stars. Li et al.
(2016) calculated the mean metallicity of Eridanus II by measuring the size of the calcium triplet absorption peaks in spectra from 16 individual stars on the RGB. This technique normally requires the spectra of Horizontal Branch stars, but these could not be sufficiently resolved in their system. They therefore used the spectra of RGB stars with corrections previously worked out by the DES group (Simon et al., 2015). From these data, Li et al. calculated a very low mean metallicity of −2.38, with a broad dispersion of 0.47 dex. This unusually wide scatter of metallicity values may also reflect the presence of multiple stellar populations. Mass Bechtol et al. (2015) have estimated the total mass of stars in Eridanus II to be on the order of 8.3 × 10^4 solar masses. This estimate uses the Initial Mass Function described by Chabrier (2001), calculated on the basis of various assumptions about the mass of the population of stars too faint to be detected directly. Chabrier's semi-empirical formula was based on stars relatively close to the Sun, a population radically different from the stars of Eridanus II. However, the estimate is based on basics of stellar chemistry which are thought to be universal. The total mass of the galaxy is given below in the discussion of dark matter. Eridanus II globular cluster Perhaps the most surprising characteristic of Eridanus II is that it hosts its own globular cluster. This makes Eridanus II by orders of magnitude the least luminous object so far known to include a globular cluster (Crnojević et al., 2016: 4). The cluster has a half-light radius of 13 pc (42 ly) and an absolute magnitude of −3.5. It contributes about 4% of total galactic luminosity (Crnojević et al., 2016: 4). The cluster lies within 45 pc (150 ly) of the calculated galactic center (in projection). Such nuclear clusters are quite common in dwarf galaxies, and this has motivated investigations into the possible role of nuclear clusters in forming galaxies (Georgiev et al., 2009; Georgiev et al., 2010). Zaritsky et al. (2015) have shown that the existence and properties of the Eridanus II globular cluster are consistent with what is already known about clusters in dwarf galaxies, when extrapolated to unexpectedly low-luminosity objects. Other components Gas Another unanticipated feature of Eridanus II was the near absence of free interstellar gas. Until the discovery of Eridanus II, astronomers had generally believed that dwarf galaxies close (<300 kpc) to the Milky Way were largely gas-free, while more distant dwarf galaxies retained significant amounts of free hydrogen gas (e.g., Garrison-Kimmel et al., 2014: 14; Spekkens et al., 2014). Such interstellar gas is detected using radio telescopes to measure the characteristic spectral signatures of atomic hydrogen. However, neither a review of previous survey work (Westmeier et al., 2016) nor targeted radio telescope observations of Eridanus II (Crnojević et al., 2016) were able to detect hydrogen gas associated with Eridanus II. The general absence of gas in dwarf galaxies close to the Milky Way (or to other large galaxies) is believed to be the result either of tidal stripping in the gravitational field of the larger body, or of ram pressure by direct contact with its interstellar gas envelope (see, e.g., Jethwa et al., 2016: 17). This understanding led Crnojević et al. (2016) to conclude that Eridanus II is bound to the Milky Way and is on its second in-fall toward our galaxy. However, other explanations are possible.
For example, as Li et al. (2016: 10) point out, Eridanus II may have lost its gas during the Re-ionization Event which occurred approximately 1 billion years after the Big Bang; although, as Li et al. also note, that explanation is somewhat inconsistent with the presence of an intermediate-age population of stars which presumably formed from free hydrogen 4–6 billion years ago. Dark matter By definition, dark matter has little, if any, interaction with baryonic matter except through its gravitational field. The amount of dark matter in a galaxy can be estimated by comparing its dynamical mass, the mass necessary to account for the relative motion of the stars in the galaxy, to its stellar mass, the mass contained in stars necessary to account for the galaxy's luminosity. As noted above, Bechtol et al. (2015) have estimated the luminous mass of Eridanus II to be on the order of 8.3 × 10^4 solar masses. Furthermore, as explained in the previous section, Westmeier et al. (2016) and Crnojević et al. (2016) have shown that the contribution of free gas to the total mass of Eridanus II is probably negligible and will not complicate the comparison. It remains only to estimate the dynamical mass. The dynamical mass of a galaxy can be estimated if we know the velocities of the stars relative to one another. As discussed in the section on velocity, the velocities of stars in Eridanus II—relative to Earth—were measured by Li et al. (2016). The movement of the stars relative to one another can then be estimated from the variation ("dispersion") of the velocities relative to an outside observer. This number was calculated by Li et al. (2016: 5) and found to be σv = 6.9 km/sec. However, as mentioned in the velocity section, it is only possible to measure the stellar velocities in one direction, along the line joining the observer and Eridanus II. Fortunately, this is sufficient. Wolf et al. (2010) showed that the necessarily symmetrical movement of stars in a globular cluster or spheroidal dwarf allows one to calculate the dynamical mass included in the half-light radius (i.e., the radius enclosing half of the luminosity) from the radial velocity dispersion alone, with very few additional assumptions. Applying this formula, Li et al. (2016: 5–6) found that the half-light dynamical mass was on the order of 1.2 × 10^7 solar masses. Using Bechtol et al.'s estimate of total luminous mass, this would imply that 99.7% of Eridanus II's mass is dark matter. However, this relationship is more usually expressed as a mass-to-light ratio, in solar units (M⊙/L⊙). Thus, applying the luminosity results of Crnojević et al. (2016), Li et al. (2016) report a mass-to-light ratio of 420. Note that the ratio of dark matter to baryonic matter in the universe at large is on the order of 5 or 6. Plainly, Eridanus II is dark matter-dominated to an extraordinary degree. Discussion and significance Eridanus II has mainly attracted attention from the astrophysical community in three areas. These are (1) the partial confirmation of the predictions of ΛCDM cosmology concerning the number of small, faint dwarf galaxies in the Local Group; (2) the questions Eridanus II raises about the history of the Milky Way and the Magellanic Clouds; and (3) the constraints placed on the nature of dark matter by the unanticipated finding of an apparently stable globular cluster at the heart of this strange little galaxy. The first two points have been discussed to some extent in previous sections. The third requires a little more attention.
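As a check on the dynamical-mass figures above, the Wolf et al. (2010) estimator reproduces the reported numbers; the specific form of the estimator, the deprojection r½ ≈ (4/3)Re, and the value of G used are assumptions of this sketch rather than details given in the article:

$$M_{1/2}\simeq\frac{3\,\sigma_v^{2}\,r_{1/2}}{G},\qquad r_{1/2}\approx\frac{4}{3}\times 277\ \mathrm{pc}\approx 369\ \mathrm{pc},$$
$$M_{1/2}\approx\frac{3\times(6.9\ \mathrm{km/s})^{2}\times 369\ \mathrm{pc}}{4.30\times10^{-3}\ \mathrm{pc}\,M_\odot^{-1}\,(\mathrm{km/s})^{2}}\approx 1.2\times10^{7}\,M_\odot.$$

Dividing by roughly half the total luminosity (~3 × 10^4 L⊙) gives M/L on the order of 400, consistent with the reported value of 420.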
Eridanus II and Lambda-CDM As noted in the introductory section, one of the principal aims of the Dark Energy Survey was to determine whether the numbers of faint dwarf galaxies predicted by ΛCDM cosmology actually existed. In the main, DES seems to be succeeding. Certainly, DES and similar efforts have shown that the region around the Milky Way contains a much larger number of dwarf galaxies than were known a few decades ago. However, the ultimate outcome of this search is still unclear. In particular, Koposov et al. (2015) briefly sound two interesting, but discordant, notes. First, they note that the dwarf galaxies identified by DES are mainly too big and too bright. These are not members of the class of truly tiny, nearly invisible objects predicted by many versions of ΛCDM. Rather, these are objects similar to those already identified in the Sloan Digital Sky Survey (Koposov et al., 2015: 13). Thus, something might be wrong about our expectations. The second, and perhaps related, point is that the Sloan Survey "revealed that there appears to be a gap in the distribution of effective radii between globular clusters (GCs) and dwarfs which extends across a large range of luminosities" (Koposov et al., 2015: 1). That is, absent finding a new population intermediate between globular clusters and the current crop of rather robust galactic dwarves, we may be forced to conclude that there is something special about certain scales of dark matter organization. While such a gap would scarcely threaten the basics of ΛCDM cosmology, it would call for a serious explanation. Galactic history As previously mentioned, Li et al. (2016) tentatively conclude that Eridanus II is a satellite of the Milky Way. While the velocities determined by these investigators are consistent with either a first or second in-fall, they believe that it is more likely that Eridanus II is making its second approach to our galaxy. In particular, they point to the absence of interstellar gas in Eridanus II. This is most easily explained if an earlier encounter with the Milky Way stripped the galaxy of free gas by tidal stripping or ram pressure. In addition, they note that the second episode of star formation, presumably responsible for the intermediate-age population of stars, coincides roughly with the estimates of Eridanus II's orbital period derived from the ELVIS simulation: that is, in the neighborhood of three billion years. Eridanus II is also potentially significant for the history of the Magellanic Clouds and the Local Group. Both Koposov et al. (2015) and Pawlowski et al. (2015) have noted its alignment with other galactic dwarves associated with the Magellanic Clouds, although Eridanus II is quite distant from the other members of that group. Pawlowski et al. (2015) observe that it is also aligned with a number of dwarves associated with the Andromeda Galaxy, but seems slightly out-of-plane. Accordingly, Eridanus II may be a member of either of those galactic communities, of both, or of neither. Whatever the final judgment, Eridanus II is likely to be an important factor in resolving that important segment of our galactic history. Constraints on dark matter In an important recent paper, Brandt (2016) has argued that the presence of a stable globular cluster near the center of Eridanus II places severe constraints on certain possible forms of dark matter.
Although any number of dark matter candidates have been proposed, the main contenders may be divided into two groups: WIMPs (Weakly Interacting Massive Particles) and MACHOs (MAssive Compact Halo Objects). One important class of MACHOs consists of primordial black holes. These objects might range from 10^−2 to 10^5 solar masses, or higher, depending on the details of the applicable cosmology and the extent of possible post-Big Bang mergers. See, e.g., García-Bellido (2017). Brandt's work addresses black holes toward the middle and upper end of this range of masses. Brandt notes that the physics of globular clusters is similar to that of diffusion. Repeated gravitational interchanges between bodies gradually act to equalize kinetic energy, which is proportional to the square of velocity. The net effect, over sufficiently long times, is sorting by mass. The more massive, low-velocity objects tend to remain near the center of the cluster, while less massive objects are set on more distant trajectories, or expelled from the system entirely. In any case, the cluster gradually expands, while the most massive objects remain relatively close to the center of mass. Given the overwhelming dominance of dark matter in Eridanus II, the gravitational dynamics of the globular cluster must be driven by dark matter. And, if dark matter is mainly a collection of black holes larger than an average star, the sorting effect should cause the cluster to expand to large size and perhaps eventually eject all but the largest stars. Green (2016) has recently expanded on Brandt's equations to allow for a diverse range of black hole masses. There are several limitations to this argument, all of which are acknowledged and discussed by Brandt. Three of these are pertinent here. First, of all the many possible types of dark matter proposed by theorists, exactly one has received experimental support; and that one type is precisely the sort of black hole at issue here. If nothing else, the first detection of gravitational waves by LIGO showed (a) that black holes of this size do exist and (b) that they are sufficiently common that the collision and merger of two such objects was the first discrete event observed by LIGO (Abbott et al., 2016). Second, as discussed by Brandt (2016) and Carr (2016), the strength of the constraints imposed by Eridanus II's globular cluster depends on the proportion of the dark matter made up of these intermediate-mass black holes, the distribution of that matter, and the time scales allowed for the mass-sorting process. Third, the Eridanus II globular cluster is virtually unique. It is possible, if not particularly likely, that the cluster will turn out to be a foreground contaminant, a transient phenomenon, or a structure formed elsewhere and recently captured by Eridanus II. In short, the Eridanus II globular cluster is likely to be an important, but not decisive, part of the dark matter lexicon for some time to come. References Citations Dwarf galaxies Local Group Milky Way Subgroup Eridanus (constellation)
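The mass-sorting argument above rests on the tendency toward kinetic-energy equipartition in many-body gravitational systems; a worked statement of that step, quoted here as standard stellar-dynamics reasoning rather than from Brandt's paper:

$$m_1\langle v_1^2\rangle \approx m_2\langle v_2^2\rangle \quad\Longrightarrow\quad \frac{\langle v_1^2\rangle}{\langle v_2^2\rangle}\approx\frac{m_2}{m_1},$$

so if m1 ≫ m2, the more massive bodies end up with systematically lower speeds and settle toward the center of the potential, while the lighter bodies are scattered outward, exactly the segregation effect the constraint exploits.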
Eridanus II
Astronomy
6,247
60,066,159
https://en.wikipedia.org/wiki/CoRoT-8b
CoRoT-8b is a transiting exoplanet orbiting the K-type main sequence star CoRoT-8, 1,050 light years away in the equatorial constellation Aquila. The planet was discovered in April 2010 by the CoRoT telescope. Discovery This planet was discovered using the transit method, which detects planets via the dimming they cause when eclipsing their host star. The discovery paper's abstract states that CoRoT-8b is extremely dense compared to Saturn. Properties CoRoT-8b has 21.8% of Jupiter's mass and, due to its close orbit, a radius of 61.9% that of Jupiter. This classifies the planet as a hot Saturn. Despite the bloated radius, the planet is extremely dense, with a mean density about 1.1 times that of water. CoRoT-8b has a temperature of 870 K as a result of its six-day orbit. References Hot Jupiters Transiting exoplanets Exoplanets discovered in 2010 8b Aquila (constellation)
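The quoted mass and radius can be cross-checked with simple arithmetic, since mean density scales as mass divided by the cube of radius. The sketch below assumes a value of about 1.33 g/cm³ for Jupiter's mean density; it is a consistency check on the figures above, not a published calculation.

```typescript
// Consistency check: mean density implied by the quoted mass and radius.
// Relative to Jupiter, density scales as (M / M_J) / (R / R_J)^3.
const massJup = 0.218;   // CoRoT-8b mass, in Jupiter masses (from the text)
const radiusJup = 0.619; // CoRoT-8b radius, in Jupiter radii (from the text)
const rhoJupiter = 1.33; // g/cm^3, Jupiter's mean density (assumed here)

const rho = (massJup / Math.pow(radiusJup, 3)) * rhoJupiter;
console.log(`implied mean density: ${rho.toFixed(2)} g/cm^3`);
// ≈ 1.2 g/cm^3, on the order of the "1.1 times water" figure quoted above,
// and well above Saturn's ~0.69 g/cm^3, consistent with a dense hot Saturn.
```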
CoRoT-8b
Astronomy
204
31,468,613
https://en.wikipedia.org/wiki/Book%20of%20Nature
The Book of Nature is a religious and philosophical concept, originating in the Latin Middle Ages, that views nature as a book to be read for knowledge and understanding, and thereby explores the relationship between religion and science. Early theologians, such as St. Paul, believed the Book of Nature was a source of God's revelation to humankind: when read alongside sacred scripture, the "book" and the study of God's creations would lead to a knowledge of God himself. This type of revelation is often referred to as a general revelation. The concept corresponds to the early Greek philosophical belief that humans, as part of a coherent universe, are capable of understanding the design of the natural world through reason. Philosophers, theologians, and scholars frequently deploy the concept. The phrase was first used by Galileo, who wrote of how "the book of nature [can become] readable and comprehensible". History From the earliest times in known civilizations, events in the natural world were expressed through a collection of stories concerning everyday life. In ancient times, it was believed that the visible, mortal world existed alongside an upper world of spirits and gods acting through nature to create a unified and intersecting moral and natural cosmos. Humans, living in a world that was acted upon by free-acting and conspiring gods of nature, attempted to understand their world and the actions of the divine by observing and correctly interpreting natural phenomena, such as the motion and position of stars and planets. Efforts to analyze and understand divine intentions led mortals to believe that intervention and influence over godly acts were possible—either through religious persuasions, such as prayer and gifts, or through magic, which depended on sorcery and the manipulation of nature to bend the will of the gods. Humans believed they could discover divine intentions through observing or manipulating the natural world. Thus, mankind had a reason to learn more about nature. Around the sixth century BCE, humanity’s relationship with the deities and nature began to change. Greek philosophers, such as Thales of Miletus, no longer viewed natural phenomena as the result of omnipotent gods. Instead, natural forces resided within nature, an integral part of a created world, and appeared under certain conditions that had little to do with personal deities. The Greeks believed that natural phenomena occurred by "necessity" through intersecting chains of "cause" and "effect". Greek philosophers, however, lacked a new vocabulary to express such abstract concepts as "necessity" or "cause" and consequently used words available to them to refer metaphorically to the new philosophy of nature. As such, they began to conceptualize the natural world in more specific terms that aligned with a unique philosophy that viewed nature as immanent and where natural phenomena occurred by necessity. The Greek concept of nature, metaphorically expressed through the Book of Nature, gave birth to three philosophical traditions that became the wellspring for natural philosophy and early scientific thinking. Among the three traditions inspired by Plato, Aristotle, and Pythagoras, the Aristotelian corpus became a pervasive force in natural philosophy until it was challenged in early modern times. 
Natural philosophy, which encompassed a body of work whose purpose was to describe and explain the natural world, derived its foremost authority in the medieval era from Christian interpretations of Aristotle, in which his natural philosophy was viewed as a doctrine intended to explain natural events in terms of readily understood causes. Aristotle reasoned that knowledge of natural phenomena was derived by abstraction from a sensory awareness of the natural world—in short, knowledge was obtained through sensory experience. A world constructed by abstract ideas alone could not exist. In his reasoning, the structures inherent in nature are revealed through a process of abstraction, which may result in metaphysical principles that can be used to explain various natural phenomena, including their causes and effects. Events with no identifiable reason happen by chance and reside outside the boundaries of natural philosophy. The search for causal explanations became a dominant focus in natural philosophy, whose origins lay in the Book of Nature as conceived by the earliest Greek philosophers. Aristotle’s influence throughout Europe lasted centuries until the Enlightenment warranted fresh investigations of entrenched ideas. Christianity and Greek culture The Greeks constructed a view of the natural world in which all references to mythological origins and causes were removed. Greek philosophers inadvertently left the upper world vacant by abandoning ancient ties to free-acting, conspiring gods of nature. The new philosophy of nature made unseen mythological forces irrelevant. While some philosophers drifted toward atheism, others worked within the new philosophy to reconstitute the concept of a divine being. Consequently, the new outlook toward the natural world inspired the belief in one supreme force compatible with the new philosophy—in other words, monotheistic. However, the path from nature to rediscovering a divine being was uncertain. The belief in causality in nature implied an endless, interconnected chain of causation acting upon the natural world. It is presumed, however, that Greek thought denied the existence of a natural world where causality was infinite, which gave rise to the notion of "first cause", upon which the order of other causes must rely. The first contact between Christianity and Greek culture occurred in Athens in the first century CE. The Christian Scriptures note that within a few years of Christ’s crucifixion, Paul and Silas were debating with Epicureans and Stoics at the Areopagus. Christian theologians viewed the Greeks as a pagan culture whose philosophers were obsessed with the wonders of the material, or the natural world. Observation and explanation of natural phenomena were of little value to the Church. Consequently, early Christian theologians dismissed Greek knowledge as perishable in contrast to actual knowledge derived from sacred Scripture. At the same time, the Church Fathers struggled with questions concerning the natural world and its creation that reflected the concerns of Greek philosophers. Despite their rejection of pagan thinking, the Church Fathers benefited from Greek dialectic and ontology by inheriting a technical language that could help express solutions to their concerns. As Peter Harrison observes, "In the application of the principles of pagan philosophy to the raw materials of a faith, the content of which was expressed in those documents which were to become the New Testament, we can discern the beginnings of Christian theology." 
Eventually, Church Fathers would recognize the value of the natural world because it provided a means of deciphering God’s work and acquiring true knowledge of Him. God was believed to have infused the material world with symbolic meaning, which, if understood by man, reveals higher spiritual truths. What the Church Fathers needed, and did not inherit from the early Greek philosophers, was a method of interpreting the symbolic meanings embedded in the material world. According to Harrison, it was Church Father Origen in the third century who perfected a hermeneutical method that was first developed by the Platonists of the Alexandrian school by which the natural world could be persuaded to give up hidden meanings. In Christianity, early Church Fathers appeared to use the idea of a book of nature, librum naturae, as part of a two-book theology: "Among the Fathers of the Church, explicit references to the Book of Nature can be found, in St. Basil, St. Gregory of Nyssa, St. Augustine, John Cassian, St. John Chrysostom, Ephrem the Syrian, St. Maximus the Confessor". St Augustine suggested that Nature and the Bible were a two-volume set of books written by God and filled with divine knowledge. Rediscovering the natural world By the twelfth century, a renewed study of nature was beginning to emerge along with the recovered works of ancient philosophers, translated from Arabic to original Greek. The writings of Aristotle were seen as being among the most important of the ancient texts and had a remarkable influence among intellectuals. Interest in the material world, in conjunction with the doctrines of Aristotle, elevated sensory experience to new levels of importance. Earlier teachings concerning the relationship between God and man’s knowledge of material things gave way to a world in which knowledge of the material world conveyed the knowledge of God. Whereas scholars and theologians once held a symbolist mentality of the natural world as expressive of spiritual realities, intellectual thinking now regarded nature as a "coherent entity which the senses could systematically investigate. The idea of nature is that of a particular ordering of natural objects, and the study of nature is the systematic investigation of that order". The idea of order in nature implied a structure to the physical world whereby relationships between objects could be defined. According to Harrison, the twelfth century marked an important time in the Christian era when the world became invested with its patterns of order—patterns based on networks of likeness or similarities among material things, which led to a pre-modern knowledge of nature. It was believed that "While God has made all things that reside in the Book of Nature, certain objects in nature share similar characteristics with other objects, which delineates the sphere of nature and 'establishes the systematizing principles upon which knowledge of the natural world is based'". Nature could now be read like a book. The birth of modern science By the sixteenth century, discord between traditional authorities was beginning to surface. The European Renaissance and the Age of Enlightenment had begun, and many areas of tradition and knowledge were being challenged. More universities were being built across Europe, and the invention of the printing press meant scientific and artistic ideas could be exchanged more easily. Ancient philosophies and accepted writings such as those of Galen and Ptolemy were set aside, and more theories were being actively investigated. 
New technology and equipment, such as improved telescopes and microscopes, could be used to explore what had previously been taken for granted. Improved ships meant that, for the first time, the entire globe could be visited, revealing new natural wonders which had never been seen before. Ancient texts and doctrines were disputed, knowledge of the natural world was incomplete, interpretation of Christian Scripture was challenged, and Greek philosophy—which helped draft the Book of Nature—and Christian Scripture were viewed as fundamentally opposed. By this time, "Nature" was moving from being seen as a personified, independent, active entity to being seen as an impersonal machine, which Kepler, for example, likened to a kind of clockwork. The Book of Nature was acquiring greater authority for its wisdom and as an unmediated source of natural and divine knowledge. Hands-on investigations, whether of the human body, horticulture, or the stars, were encouraged. As a source of revelation, the Book of Nature remained moored to the Christian faith and occupied a prominent location in Western culture alongside the Bible. Scientific philosophers such as Robert Boyle and Sir Isaac Newton believed that nature could teach humans the breadth of work which God had carried out; Francis Bacon told his readers that they could never be too well-versed in the book of God’s Scripture or the book of God’s nature. The Book of Nature was seen as a way of learning more about God. Two books - two worlds? The view of nature as divine revelation and the need for scientific research continued for several centuries. When the word scientist began to replace the term natural philosopher in the 1830s, the most talked-about scientific books in the UK were the eight-volume Bridgewater Treatises. These books, funded by the last Earl of Bridgewater, were written by men appointed by the Royal Society to "explore the Power, Wisdom and Goodness of God, as manifested in the Creation". At that time, nature and the divine were seen to be parallel. However, the concern that the two books would eventually collide was becoming increasingly evident among scholars, natural philosophers, and theologians, who saw the possibility of two separate and incompatible worlds—one determined to possess nature, and the other determined to uphold Christian faith. Works by scientists such as Charles Darwin and Alfred Russel Wallace began to show that nature may not reveal God, but may show that there is no god at all. Discoveries in paleontology led many to question the Christian scriptures and other divine beliefs. Scientists engaged in physical observation of nature separated themselves from spiritual issues. In contrast, the emerging disciplines of psychology and sociology led others to see religious belief as a temporary step in a society’s development rather than a central and essential element. By 1841, Auguste Comte had proposed that empirical, scientific observation represented the final stage in the development of human society. See also The Assayer Notes Bibliography Evernden, Lorne Leslie Neil. The Social Creation of Nature. Baltimore, MD: Johns Hopkins University Press, 1992. Further reading Binde, Per. "Nature in Roman Catholic Tradition". Anthropological Quarterly 74, no. 1 (January 2001): 15-27. Blackwell, Richard J. Galileo, Bellarmine, and the Bible. Notre Dame: University of Notre Dame Press, 1991. Eddy, Matthew, and Knight, David M. Introduction. Natural Theology. By William Paley. 1802. New York: Oxford University Press, 2006. ix-xxix. Eisenstein, Elizabeth L. 
The Printing Revolution in Early Modern Europe. New York: Cambridge University Press, 2005. Findlen, Paula. Possessing Nature: Museums, Collecting, and Scientific Culture in Early Modern Italy. Berkeley: University of California Press, 1996. Henry, John. The Scientific Revolution and the Origins of Modern Science. New York: Palgrave Macmillan, 2008. Kay, Lily E. Who Wrote the Book of Life?: A History of the Genetic Code. Stanford, CA: Stanford University Press, 2000. Kosso, Peter. Reading the Book of Nature: An Introduction to the Philosophy of Science. Cambridge: Cambridge University Press, 1992. Nelson, Benjamin. "Certitude, and the Books of Scripture, Nature, and Conscience". In On the Roads to Modernity: Conscience, Science, and Civilizations. Selected Writings by Benjamin Nelson, edited by Toby E. Huff. Totowa, N.J.: Rowman and Littlefield, 1981. History of science Religion and science God in Christianity Creationism Medieval philosophy Concepts in epistemology
Book of Nature
Technology,Biology
2,848
2,056,787
https://en.wikipedia.org/wiki/Widlar%20current%20source
A Widlar current source is a modification of the basic two-transistor current mirror that incorporates an emitter degeneration resistor for only the output transistor, enabling the current source to generate low currents using only moderate resistor values. The Widlar circuit may be used with bipolar transistors, MOS transistors, and even vacuum tubes. An example application is the 741 operational amplifier, and Widlar used the circuit as part of many designs. This circuit is named after its inventor, Bob Widlar, and was patented in 1967. DC analysis Figure 1 is an example Widlar current source using bipolar transistors, where the emitter resistance R2 is connected to the output transistor Q2, and has the effect of reducing the current in Q2 relative to Q1. The key to this circuit is that the voltage drop across the resistance R2 subtracts from the base-emitter voltage of transistor Q2, thereby turning this transistor off compared to transistor Q1. This observation is expressed by equating the base voltage expressions found on either side of the circuit in Figure 1 as: VBE1 = VBE2 + (IC2 + IB2)·R2 = VBE2 + (1 + 1/β2)·IC2·R2, where β2 is the beta-value of the output transistor, which is not the same as that of the input transistor, in part because the currents in the two transistors are very different. The variable IB2 is the base current of the output transistor, and VBE refers to base-emitter voltage. This equation implies (using the Shockley diode equation): Eq. 1: (1 + 1/β2)·IC2·R2 = VBE1 − VBE2 = VT·ln((IC1·IS2)/(IC2·IS1)), where VT is the thermal voltage. This equation makes the approximation that the currents are both much larger than the scale currents, IS1 and IS2; an approximation valid except for current levels near cut off. In the following, the scale currents are assumed to be identical; in practice, this needs to be specifically arranged. Design procedure with specified currents To design the mirror, the output current must be related to the two resistor values R1 and R2. A basic observation is that the output transistor is in active mode only so long as its collector-base voltage is non-negative. Thus, the simplest bias condition for design of the mirror sets the applied voltage VA to equal the base voltage VB. This minimum useful value of VA is called the compliance voltage of the current source. With that bias condition, the Early effect plays no role in the design. These considerations suggest the following design procedure: Select the desired output current, IO = IC2. Select the reference current, IR1, assumed to be larger than the output current, probably considerably larger (that is the purpose of the circuit). Determine the input collector current of Q1, IC1: IC1 = IR1 − IO/β2. Determine the base voltage VBE1 using the Shockley diode law: VBE1 = VT·ln(IC1/IS), where IS is a device parameter sometimes called the scale current. The value of base voltage also sets the compliance voltage VA = VBE1. This voltage is the lowest voltage for which the mirror works properly. Determine R1: R1 = (VCC − VBE1)/IR1. Determine the emitter leg resistance R2 using Eq. 1 (to reduce clutter, the scale currents are chosen equal): R2 = (VT/((1 + 1/β2)·IC2))·ln(IC1/IC2). Finding the current with given resistor values The inverse of the design problem is finding the current when the resistor values are known. An iterative method is described next. Assume the current source is biased so the collector-base voltage of the output transistor Q2 is zero. The current through R1 is the input or reference current, given as IR1 = (VCC − VBE1)/R1 = IC1 + IB2. Rearranging, IC1 is found as: Eq. 2: IC1 = (VCC − VBE1)/R1 − IC2/β2. The diode equation provides: Eq. 3: VBE1 = VT·ln(IC1/IS1). Eq. 1 provides: IC2 = (VT/((1 + 1/β2)·R2))·ln(IC1/IC2). These three relations are a nonlinear, implicit determination for the currents that can be solved by iteration. 
We guess starting values for IC1 and IC2. We find a value for VBE1: VBE1 = VT·ln(IC1/IS1). We find a new value for IC1: IC1 = (VCC − VBE1)/R1 − IC2/β2. We find a new value for IC2: IC2 = (VT/((1 + 1/β2)·R2))·ln(IC1/IC2). This procedure is repeated to convergence, and is set up conveniently in a spreadsheet. One simply uses a macro to copy the new values into the spreadsheet cells holding the initial values to obtain the solution in short order. Note that with the circuit as shown, if VCC changes, the output current will change. Hence, to keep the output current constant despite fluctuations in VCC, the circuit should be driven by a constant current source rather than using the resistor R1. Exact solution The transcendental equations above can be solved exactly in terms of the Lambert W function. Output impedance An important property of a current source is its small-signal incremental output impedance, which should ideally be infinite. The Widlar circuit introduces local current feedback for transistor Q2. Any increase in the current in Q2 increases the voltage drop across R2, reducing the VBE for Q2, thereby countering the increase in current. This feedback means the output impedance of the circuit is increased, because the feedback involving R2 forces use of a larger voltage to drive a given current. Output resistance is found using a small-signal model for the circuit, shown in Figure 2. Transistor Q1 is replaced by its small-signal emitter resistance rE because it is diode connected. Transistor Q2 is replaced with its hybrid-pi model. A test current Ix is attached at the output. Using the figure, the output resistance is determined using Kirchhoff's laws. Using Kirchhoff's voltage law from the ground on the left to the ground connection of R2: Ib·(rπ + R1∥rE) + (Ib + Ix)·R2 = 0. Rearranging: Ib = −Ix·R2/(rπ + R1∥rE + R2). Using Kirchhoff's voltage law from the ground connection of R2 to the ground of the test current: Vx = (Ix − β·Ib)·rO + (Ix + Ib)·R2, or, substituting for Ib: Eq. 4: RO = Vx/Ix = rO·(1 + β·R2/(rπ + R1∥rE + R2)) + R2∥(rπ + R1∥rE). According to Eq. 4, the output resistance of the Widlar current source is increased over that of the output transistor itself (which is rO) so long as R2 is large enough compared to the rπ of the output transistor (large resistances R2 make the factor multiplying rO approach the value (β + 1)). The output transistor carries a low current, making rπ large, and an increase in R2 tends to reduce this current further, causing a correlated increase in rπ. Therefore, a goal of R2 ≫ rπ can be unrealistic, and further discussion is provided below. The resistance R1∥rE usually is small because the emitter resistance rE usually is only a few ohms. Current dependence of output resistance The current dependence of the resistances rπ and rO is discussed in the article hybrid-pi model. The current dependence of the resistor values is: rπ = β·VT/IC2, and rO = VA/IC2 is the output resistance due to the Early effect when VCB = 0 V (device parameter VA is the Early voltage). From earlier in this article (setting the scale currents equal for convenience): Eq. 5: R2 = (VT/((1 + 1/β)·IC2))·ln(IC1/IC2). Consequently, for the usual case of small rE, and neglecting the second term in RO with the expectation that the leading term involving rO is much larger: Eq. 6: RO ≈ rO·(1 + β·R2/(rπ + R2)) = (VA/IC2)·(1 + β·ln(IC1/IC2)/(β + ln(IC1/IC2))), where the last form is found by substituting Eq. 5 for R2. Eq. 6 shows that a value of output resistance much larger than rO of the output transistor results only for designs with IC1 >> IC2. Figure 3 shows that the circuit output resistance RO is not determined so much by feedback as by the current dependence of the resistance rO of the output transistor (the output resistance in Figure 3 varies four orders of magnitude, while the feedback factor varies only by one order of magnitude). 
Increasing IC1 to increase the feedback factor also results in increased compliance voltage, which is undesirable because the current source then operates properly over a more restricted voltage range. So, for example, with a goal for compliance voltage set, placing an upper limit upon IC1, and with a goal for output resistance to be met, the maximum value of the output current IC2 is limited. The center panel in Figure 3 shows the design trade-off between emitter leg resistance and the output current: a lower output current requires a larger leg resistor, and hence a larger area for the design. An upper bound on area therefore sets a lower bound on the output current and an upper bound on the circuit output resistance. Eq. 6 for RO depends upon selecting a value of R2 according to Eq. 5. That means Eq. 6 is not a circuit behavior formula, but a design value equation. Once R2 is selected for a particular design objective using Eq. 5, thereafter its value is fixed. If circuit operation causes currents, voltages, or temperatures to deviate from the designed-for values, then to predict changes in RO caused by such deviations, Eq. 4 should be used, not Eq. 6. See also Current source Current mirror Wilson current source References Further reading Current mirrors and active loads: Mu-Huo Cheng Analog circuits Electronic design
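The iterative procedure described above (guess IC1 and IC2, update VBE1, IC1, and then IC2 from Eq. 1, repeating to convergence) can be sketched in code rather than a spreadsheet, and Eq. 4 can then be evaluated for the converged operating point. The device values below are the same illustrative assumptions used in the earlier design sketch, with an additionally assumed Early voltage VA = 100 V; none of these figures come from the original article.

```typescript
// Iterative solution of the Widlar mirror for given resistor values,
// implementing the fixed-point loop described in the text (Eqs. 1-3).
const VT = 0.02585, IS = 1e-14, beta2 = 100, VCC = 5.0; // assumed device values
const R1 = 4345, R2 = 11787; // ohms (results of the design sketch above)

let IC1 = 1e-3, IC2 = 10e-6; // starting guesses
for (let i = 0; i < 100; i++) {
  const VBE1 = VT * Math.log(IC1 / IS);        // Eq. 3
  IC1 = (VCC - VBE1) / R1 - IC2 / beta2;       // Eq. 2
  // Eq. 1 rearranged for IC2, used as a fixed-point update:
  IC2 = (VT / ((1 + 1 / beta2) * R2)) * Math.log(IC1 / IC2);
}
console.log(`IC1 = ${(IC1 * 1e6).toFixed(1)} uA, IC2 = ${(IC2 * 1e6).toFixed(2)} uA`);
// Converges to roughly IC1 ≈ 1000 uA and IC2 ≈ 10 uA for these values.

// Output resistance from Eq. 4, using rE of Q1 (≈ VT/IC1) and rπ, rO of Q2:
const rE = VT / IC1, rpi = (beta2 * VT) / IC2;
const VA = 100;            // assumed Early voltage, volts
const rO = VA / IC2;
const par = (a: number, b: number) => (a * b) / (a + b); // parallel resistance
const RO = rO * (1 + (beta2 * R2) / (rpi + R2 + par(R1, rE))) + par(R2, rpi + par(R1, rE));
console.log(`RO ≈ ${(RO / 1e6).toFixed(1)} Mohm (vs rO ≈ ${(rO / 1e6).toFixed(1)} Mohm)`);
```

For these assumed numbers the feedback factor raises the output resistance several-fold above rO, consistent with the discussion of Eq. 4 above.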
Widlar current source
Engineering
1,843
17,527,994
https://en.wikipedia.org/wiki/William%20Horrocks
Brigadier General Sir William Heaton Horrocks (25 August 1859 – 26 January 1941) was an officer of the British Army remembered chiefly for confirming Sir David Bruce's theory that Malta fever was spread through goat's milk. He also contributed to making water safe, developing a simple method of testing and purifying water in the field. Because of his work, he became the first Director of Hygiene at the War Office in 1919. Early life and career William Heaton Horrocks was the son of William Holden Horrocks of Bolton. Horrocks studied for his M.B. at Owen's College and passed his first M.B. examination in 1881. He received a Third Class Honours pass in Anatomy, and a Second Class in Physiology and Histology. Previously a Surgeon on probation, Horrocks was promoted to Surgeon (the equivalent of Captain) on 5 February 1887. While serving in India, Horrocks married Minna Moore (died 1921), the daughter of the Reverend J.C. Moore of Connor, County Antrim, on 27 September 1894 at Christ Church, Mussoorie. Together they had one son and one daughter. His son Brian also joined the British Army, and became a leading corps commander during the Second World War. Horrocks was promoted from Captain to Major on 5 February 1899. Malta fever In 1904 Horrocks was appointed as a member of the Royal Society's Mediterranean Fever Commission, to investigate the highly contagious disease Malta fever which was prevalent in the British colony of Malta. Identified by Sir David Bruce in 1887, Malta fever was characterised by a low mortality rate but was of indefinite duration. It was accompanied by profuse perspiration, pain and occasional swelling of the joints. In 1905 Sir Themistocles Zammit infected a goat with the bacterium Micrococcus melitensis, which then caught Malta fever. Horrocks was the first person to find the bacteria in goat's milk, thus identifying the method of transmission. In attempting to settle the matter of who was responsible for the discovery, Bruce (who had served as chairman of the Commission) wrote to The Times newspaper. Horrocks afterwards served as sanitary officer at the British colony of Gibraltar, where he noted that the incidence of Malta fever practically disappeared with the removal of Maltese goats from that place. Later career Horrocks was promoted to Lieutenant-Colonel on 19 May 1911, then in July was promoted Brevet Colonel dated 20 May, in recognition of his services. In 1915, Horrocks was honoured by becoming an Honorary Surgeon to King George V, commencing 6 November 1914, holding the appointment until 26 December 1917. Horrocks also developed the "Horrocks Box", following his research into contamination of water. This device used sand filtration and chlorine sterilisation plants to provide a portable means of decontaminating water supplies. It proved of particular use during the First World War, when it kept the Allied forces largely free of water-borne disease. In addition to this he also developed means of removing poisons from water and assisted in the design of the first gas mask. For his services in the war, Horrocks was honoured with appointments to a number of orders. On 24 January 1917 he was appointed a Companion of the Bath. On 3 June 1918 (in the King's Birthday Honours) Horrocks was appointed a Knight Commander of the Order of Saint Michael and Saint George. 
He became the first Director of Hygiene at the War Office on 1 June 1919 in recognition of his expertise in military hygiene. This last period of active duty came to an end on 1 November 1919, when he relinquished his temporary rank of Brigadier-General. Horrocks died on 26 January 1941 at the age of eighty-one, at Hersham in Surrey. His funeral took place at St. Peter's Church, Hersham on 31 January with his son and daughter, among others, present. Notes References Published works (Report II by Major W. H. Horrocks) pdf at militaryhealth.bmj.com Selected articles 1859 births 1941 deaths Hygienists Royal Army Medical Corps officers Knights Commander of the Order of St Michael and St George Companions of the Order of the Bath People from Bolton British Army personnel of World War I Water filters People from Hersham British Army generals Military personnel from the Metropolitan Borough of Bolton British Army brigadiers 19th-century British Army personnel
William Horrocks
Chemistry
903
71,568,815
https://en.wikipedia.org/wiki/Xiaomi%20MIX%20Fold%202
Xiaomi MIX Fold 2 is an Android-based foldable smartphone manufactured by Xiaomi. For the first time in the MIX Fold series, the phone was developed in partnership with the camera maker Leica. It was announced on August 11, 2022. A notable feature of the MIX Fold 2 is its thinness: 5.4 mm unfolded and 11.2 mm folded. When folded, its thickness is close to that of ordinary smartphones, which also makes it the thinnest smartphone with a foldable display after the Huawei Mate Xs 2. Design The external screen is made of Corning Gorilla Glass Victus. The inner screen is covered with flexible Schott UTG (ultra-thin glass). The back panel is made of glass. The ends are made of aluminum. The main camera unit is styled like that of the Redmi K50 Ultra, but placed in a horizontal position. The USB-C connector, a speaker and a microphone are located on the bottom. On top are the second speaker, a slot for 2 SIM cards, a second microphone and an IR port. On the right side are the volume buttons and a power button with a built-in fingerprint scanner. Xiaomi MIX Fold 2 is sold in 4 colors: Moon Shadow Black (black), Star Gold (gold), Night Black (black with matte blocks) and Moonlight Silver (silver with matte blocks). Specifications Platform The smartphone received a Qualcomm Snapdragon 8+ Gen 1 processor and an Adreno 730 GPU. Battery MIX Fold 2 has a battery with a capacity of 4500 mAh and support for 67-watt fast charging. Camera The smartphone received a 50 MP (wide-angle) + 8 MP (telephoto) + 13 MP (ultra-wide-angle) main triple camera with Dual Pixel phase autofocus and the ability to record video in 8K@24fps resolution. Also, like the Xiaomi 12S line, the MIX Fold 2 received Leica optics for the rear cameras, as well as additional modes. The front camera has a resolution of 20 MP (wide-angle) and can record video at 1080p@60fps. Screen The internal screen is a flexible LTPO 2.0 Eco² OLED matrix, 8.02", 2480 × 1914, with a pixel density of 360 ppi and support for HDR10+ and Dolby Vision technologies. The smartphone also received an external AMOLED screen, 6.56", FullHD+ 2520 × 1080 with an aspect ratio of 21:9, a display refresh rate of 120 Hz, support for HDR10+ and Dolby Vision technologies, and a round cutout for the front camera located at the top center. Sound The smartphone received stereo speakers developed in cooperation with Harman Kardon. The speakers are located on the upper and lower ends. Memory The device is sold in 12 GB/256 GB, 12 GB/512 GB and 12 GB/1 TB configurations. Software The smartphone was released with MIUI Fold 13 based on Android 12. Later, it was updated to MIUI Fold 14 based on Android 13. See also Samsung Galaxy Z Fold 4 Foldable smartphone References Android (operating system) devices MIX Fold 2 Foldable smartphones Mobile phones with multiple rear cameras Mobile phones introduced in 2022 Discontinued flagship smartphones
Xiaomi MIX Fold 2
Technology
700
10,384,126
https://en.wikipedia.org/wiki/Fa%C3%A7ade%20engineering
Building façades are one of the largest and most important elements in the overall aesthetic and technical performance of a building. Façade engineering is the art and science of resolving aesthetic, environmental and structural issues to achieve the effective enclosure of buildings. Specialist companies are dedicated to this niche sector of the building industry, and engineers operate within technical divisions of façade manufacturing companies. Generally, façade engineers are specifically qualified in the discipline of façade engineering, and consultants work with the design team on construction projects for architects, building owners, construction managers and product manufacturers. Façade engineers must consider aspects such as the design, certification, fabrication and installation of building façades with regard to the performance of materials, aesthetic appearance, structural behaviour, weathertightness, safety and serviceability, security, maintenance and buildability. The skill set includes matters such as computational fluid dynamics, heat transfer through two- and three-dimensional constructions, the behaviour of materials, manufacturing methodologies, structural engineering and logistics. Over time, the specialist skills necessary in this niche sector have surpassed the capabilities of architects and of structural and mechanical engineers, as buildings are designed with more complexity and with the introduction of Building Information Modelling (BIM). Building façades are considered to be one of the most expensive and potentially the highest-risk elements of any major project. Historically, building façades have had the greatest level of failure of any part of a building's fabric, and the pressure for change and adaptation due to environmental and energy performance needs is greater than for any other element of a building. As a consequence, façade engineering has become a science in its own right. In the United Kingdom, a professional body associated with the industry is the Society of Façade Engineering. Qualifications in façade engineering recognised by the Society of Façade Engineering and international professional qualifications include the MSc in façade engineering. This may be from the University of Bath, Technical University Delft or the Detmolder Schule für Architektur und Innenarchitektur (Hochschule), or other qualifications subject to review by the Membership panel. References Building engineering Architectural elements
Façade engineering
Technology,Engineering
401
66,071,501
https://en.wikipedia.org/wiki/Hydration%20%28web%20development%29
In web development, hydration or rehydration is a technique in which client-side JavaScript converts a web page that is static from the perspective of the web browser, delivered either through static rendering or server-side rendering, into a dynamic web page by attaching event handlers to the HTML elements in the DOM. Because the HTML is pre-rendered on a server, this allows for a fast "first contentful paint" (when useful data is first displayed to the user), but there is a period of time afterward during which the page appears to be fully loaded and interactive yet is not, until the client-side JavaScript is executed and event handlers have been attached. Frameworks that use hydration include Next.js and Nuxt.js. React v16.0 introduced a "hydrate" function in its API, which hydrates a server-rendered element. Variations Streaming server-side rendering Streaming server-side rendering allows one to send HTML in chunks that the browser can progressively render as it is received. This can provide a fast first paint and first contentful paint as HTML markup arrives to users faster. Progressive rehydration In progressive rehydration, individual pieces of a server-rendered application are “booted up” over time, rather than the current common approach of initializing the entire application at once. This can help reduce the amount of JavaScript required to make pages interactive, since client-side upgrading of low priority parts of the page can be deferred to prevent blocking the main thread. It can also help avoid one of the most common server-side rendering rehydration pitfalls, where a server-rendered DOM tree gets destroyed and then immediately rebuilt – most often because the initial synchronous client-side render required data that wasn't quite ready, perhaps awaiting Promise resolution. Partial rehydration Partial rehydration has proven difficult to implement. This approach is an extension of the idea of progressive rehydration, where the individual pieces (components/views/trees) to be progressively rehydrated are analyzed and those with little interactivity or no reactivity are identified. For each of these mostly-static parts, the corresponding JavaScript code is then transformed into inert references and decorative functionality, reducing their client-side footprint to near-zero. The partial hydration approach comes with its own issues and compromises. It poses some interesting challenges for caching, and client-side navigation means it cannot be assumed that server-rendered HTML for inert parts of the application will be available without a full page load. One framework that supports partial rehydration is Elder.js, which is based on Svelte. Trisomorphic rendering Trisomorphic rendering is a technique which uses streaming server-side rendering for initial/non-JS navigations, and then uses a service worker to take on rendering of HTML for navigations after it has been installed. This can keep cached components and templates up to date and enables SPA-style navigations for rendering new views in the same session. This approach works best when one can share the same templating and routing code between the server, client page, and service worker. References  Portions of this page are modifications based on work created and shared by Google and used according to terms described in the Creative Commons 4.0 Attribution License, specifically the article "Rendering on the Web" by Jason Miller and Addy Osmani. Web development
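As a concrete illustration of the basic technique, the sketch below uses the React "hydrate" function mentioned above. It assumes a React 16/17 project with a bundler; the App component and the "root" element ID are hypothetical, and the server is assumed to have already rendered the same component into the page as static HTML.

```tsx
// client.tsx — a minimal hydration sketch (hypothetical app, React 16/17 API).
import * as React from "react";
import { hydrate } from "react-dom";

// The same component the server rendered to static HTML.
function App() {
  const [count, setCount] = React.useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

// The browser already shows the server-rendered <button> inside #root.
// hydrate() walks that existing DOM and attaches event handlers to it,
// instead of discarding and re-creating the markup as render() would.
// Until this script runs, the button is visible but not yet interactive.
hydrate(<App />, document.getElementById("root")!);
```

This is the gap the article describes: the markup paints quickly, but the onClick handler only works once the client bundle has executed the hydrate call.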
Hydration (web development)
Engineering
712
35,127,635
https://en.wikipedia.org/wiki/Seed-counting%20machine
Seed counting machines count seeds for research and packaging purposes. The machines typically provide total counts of seeds or batch sizes for packaging. Background The first seed counters were developed to count legumes and other large seeds. Traditionally, the seed packaging industry packed seeds by weight but sold them by number. In order to assure the correct quantity of seeds, the distributors added a safety margin to the packed weight, like a bakers' dozen. This safety margin increased cost. By counting the seeds, the margin of error could be reduced and so costs reduced. History Originally people counted seeds by hand, or used a trip board. The first seed-counting machine was the vibratory mechanical seed counter. Modern day electronic seed counters are faster and more accurate. In 1929 the US Bureau of Plant Industry worked with several seed companies to perfect a seed counter. In 1962 an electric seed counter was developed by the USDA's Agricultural Marketing Service. The electronic counter operated by vibrating the seeds so that they moved to the edge of the counting machine. Such a machine pays for itself by replacing the labor-intensive, tedious task of manually counting seeds, which is inherently prone to human error. By contrast, the new devices, even in the early 1960s, boasted increased speed and “about 1 error in counting 10,000 seeds counted.” The accuracy helps lessen the need to build in safety margins for quantity; and the costs of the machinery can be more than paid for by reduced labor costs. In the 1970s other electronic seed counting advancements included an electric eye to count the seeds. Seed counting still involved vibrating the seed, but now the seed would fall through a seed hole. If the items are put onto the conveyor in a single file, then a simple counting mechanism may provide satisfactory results. However, such a mechanism is inherently slower than if the items were freely placed on the conveyor without posing such limitations. Thus, in the 2000s, methods for parallel counting of multiple objects evolved, including devices that use multiple electromagnetic energy sources and receptors. Technology At one time, the methodology included use of vacuum tubes, vacuum pumps, a light source and a photo transistor. The size needed to be adjusted so only one seed passed through at a time. To be useful, batch counters need to be commercially available. A single preset count facility is a plus, as is “adequate count capacity, the ability to provide external power supplies and [control of] ... the means to stop the picking up and counting of seeds.” In commercial operations, it is important for the counter to be automatic and accurate. For example, one commercial counter is capable of measuring the hundredth/thousandth grain weight for seeds, tablets, pearls, and small components. It adopts a far-infrared area sensor and a large photosensitive area, "suitable for the sensitivity of all crops (millet-peanut)." Blockages or splashes are to be avoided. Adaptable speed variation adjustment helps "solve the contradiction between speed and accuracy, and ensure error-free counting (counting error of 0/1000)". Automatic cup changing, as opposed to manual feeding, can "improve the counting efficiency, reduce labor intensity." Automatic discharge can obviate the need for the operator to constantly feed the vibrating plate. 
One counter is so fast that, for millet, "counting can reach 2000 grains/min", with wheat and rice counted at comparable rates. Suitability of the vibrating plate for different seeds is a consideration. It is useful to have an adjustable baffle at the exit of the bowl, set "according to the diameter of the seeds (workpieces)", so that "only one seed (workpiece) at a time, not side by side" passes, "for all large and small seeds." Some seed counters use laser light. In counting, it is important to position one seed at a time by manipulating slit width when using a photoelectric seed counter. Some are able to handle up to 23 sample containers. They can do this while maintaining notable accuracy. General purpose electronic seed counters usually count seeds during free fall. They have achieved satisfactory error rates. For example: "Counting errors of less than 0.4% at counting speeds of 400 to 1,180 seeds/min were obtained for seeds of nine different species ranging in size from corn (Zea mays L.) to trefoil (Lotus corniculatus L.). Under some conditions, the seed dispenser, a vibratory small parts feeder, segregated wheat kernels (Triticum aestivum L.) into weight classes, dispensing heavier kernels first into the counting system." See also Agricultural machinery industry List of agricultural machinery Mechanised agriculture Seed drill (box drill, air drill) References Notes Citations External links Dodder counting machine 20th-century inventions Agricultural machinery Harvest History of agriculture Packaging machinery
Seed-counting machine
Engineering
977
17,258,404
https://en.wikipedia.org/wiki/Pramiconazole
Pramiconazole is a triazole antifungal which was under development by Barrier Therapeutics for the treatment of acute skin and mucosal fungal infections but was never marketed. References Dioxolanes Imidazolidinones Isopropyl compounds Lanosterol 14α-demethylase inhibitors Organofluorides Phenol ethers Piperazines Phenylethanolamine ethers Triazole antifungals Ureas
Pramiconazole
Chemistry
100
39,811,873
https://en.wikipedia.org/wiki/Lomaphorus
Lomaphorus is a possibly dubious extinct genus of glyptodont that lived during the Pleistocene in eastern Argentina. Although many species have been referred to it, the genus itself is possibly dubious or synonymous with other glyptodonts like Neosclerocalyptus from the same region. Etymology The genus name Lomaphorus is derived from the Greek roots loma- meaning "fringe" and -phorus meaning "bearing", after the striated anatomy of the dermal armor of L. imperfectus. In 1935, a trematode was unwittingly also named Lomaphorus, but it has since been moved to a new genus, Lomasoma. Taxonomy The first fossils referred to Lomaphorus were described as early as 1857 with the description of Glyptodon elevatus, based on dorsal carapace osteoderms recovered from Pleistocene deposits in Argentina, but the majority of the fossils were described by the Argentine paleontologist Florentino Ameghino during the late 19th century. Several more species were referred to the genus that were later synonymized with more complete species or moved to their own genera; Ameghino even admitted that many of his species were diagnosed on very fine details that could reflect individual variation. Many species have been named as or referred to Lomaphorus, but most of these referrals or descriptions were erroneously based on taphonomic characteristics of fossilized osteoderms instead of genuine diagnostic features. Few species have received detailed descriptions either, further complicating the situation. Species Type: Lomaphorus (Hoplophorus) imperfectus (Gervais & Ameghino, 1880); Undesignated holotype, but Ameghino illustrated some material that may be the holotype and that shows many similarities to Neosclerocalyptus. Possibly synonymous with N. pseudornatus or N. ornatus, but further analysis is necessary. Species referred to Lomaphorus according to Zurita et al. (2016): Lomaphorus chapalmalensis Ameghino, 1908; Holotype is a distal fragment of a caudal tube (MACN Pv 5806). The morphology of the tube is indistinguishable from that of fossils of Eosclerocalyptus and also juveniles of Neosclerocalyptus, making it a nomen dubium. Lomaphorus cingulatus Ameghino, 1889; Holotype is a single dorsal carapace osteoderm that has been lost, though a calcotype (MACN A-592) was created. This calcotype is indistinguishable from those of other Lomaphorus species, making it a nomen dubium. It could also be a synonym of Trachycalyptus. Lomaphorus (Hoplophorus) compressus Ameghino, 1882; Holotype is dorsal carapace osteoderms. The osteoderms' supposed diagnostic traits are the same as those in Neosclerocalyptus species, making it a nomen dubium. Lomaphorus (Hoplophorus) elegans (Burmeister, 1871); Holotype includes dorsal carapace osteoderms, though many fossils have been referred to the species. Lomaphorus (Glyptodon) elevatus (Nodot, 1857); Holotype is dorsal carapace osteoderms. The osteoderms' supposed diagnostic traits are the same as those in juveniles of Neosclerocalyptus species, making it a nomen dubium. Other species referred to Lomaphorus: Lomaphorus (Hoplophorus) clarazianus (Ameghino, 1889); Holotype is fragmentary osteoderms and a referred skull, though the skull is lost and has been referred to Neosclerocalyptus. The type osteoderms lack diagnostic traits, making it a nomen dubium. Lomaphorus (Glyptodon) gracilis (Nodot, 1857); Holotype is fragmentary osteoderms from Brazil. The species was referred to Lomaphorus by Lydekker (1894). Lomaphorus (Zaphilus) larranagai (Ameghino, 1889); Holotype is dorsal carapace osteoderms (MACN 1233). 
The species was referred to Lomaphorus by Lydekker (1894), but has since been declared a nomen dubium and placed back in Zaphilus. Lomaphorus (Hoplophorus) lydekkeri (Ameghino, 1889); Holotype is a distal caudal tube fragment (BMNH 40664). The species has since been placed in its own genus, Uruguayurus. Lomaphorus (Hoplophorus) "meyeri" (Lund, 1843); A nomen nudum, referred to Lomaphorus by Lydekker (1894). Lomaphorus (Plohophorus) orientalis (Ameghino, 1889); Holotype is a caudal tube fragment (MACN-A ?). The species was referred to Lomaphorus by Lydekker (1894), but has since been placed in Pseudoplohophorus. Lomaphorus (Hoplophorus) paranensis (Ameghino, 1883); Holotype is a breastplate fragment (MACN ?). The species was referred to Lomaphorus by Lydekker (1894), but has since been declared a nomen dubium and placed in Neosclerocalyptus. Lomaphorus (Hoplophorus) pseudornatus (Ameghino, 1889); Holotype is dorsal carapace osteoderms (MACN 1233). The species was referred to Lomaphorus by Lydekker (1894), but has since been placed in Neosclerocalyptus. Lomaphorus? (Hoplophorus) scrobiculatus Ameghino, 1889; Holotype is a dorsal carapace and caudal tube apparently in the collections of the MACN. The carapace was said by Ameghino (1895) to belong to Lomaphorus compressus and the caudal tube to Neosclerocalyptus, but it has since been declared a species inquirenda. Description Due to problems with the diagnosis of Lomaphorus and with its internal taxonomy, many of the diagnostic traits for the taxon are uncertain. Lomaphorus, like most glyptodonts, was large, at 2.5 meters long, but not as large as its relative Hoplophorus. Lomaphorus possessed a powerful carapace that covered a large part of the body, formed by osteoderms fused together. The carapace was relatively low and long, but not as much as that of Neosclerocalyptus. The dorsal plates bore a central figure of medium size, surrounded by a peripheral area of radial ornamentation. The tail was protected by a series of bone rings and a terminal bone tube; the latter still retained a narrow peripheral band and was equipped with large side osteoderms. At the end of the tube there were two large convex osteoderms. References Prehistoric cingulates Pleistocene xenarthrans Prehistoric placental genera Pleistocene mammals of South America Lujanian Pleistocene Argentina Fossils of Argentina Fossil taxa described in 1889 Taxa named by Florentino Ameghino Nomina dubia
Lomaphorus
Biology
1,611
683,583
https://en.wikipedia.org/wiki/Transcutaneous%20electrical%20nerve%20stimulation
A transcutaneous electrical nerve stimulation (TENS or TNS) is a device that produces mild electric current to stimulate the nerves for therapeutic purposes. TENS, by definition, covers the complete range of transcutaneously applied currents used for nerve excitation, but the term is often used with a more restrictive intent, namely, to describe the kind of pulses produced by portable stimulators used to reduce pain. The unit is usually connected to the skin using two or more electrodes which are typically conductive gel pads. A typical battery-operated TENS unit is able to modulate pulse width, frequency, and intensity. Generally, TENS is applied at high frequency (>50 Hz) with an intensity below motor contraction (sensory intensity) or low frequency (<10 Hz) with an intensity that produces motor contraction. More recently, many TENS units use a mixed frequency mode which alleviates tolerance to repeated use. Intensity of stimulation should be strong but comfortable with greater intensities, regardless of frequency, producing the greatest analgesia. While the use of TENS has proved effective in clinical studies, there is controversy over which conditions the device should be used to treat. Medical uses Pain Transcutaneous electrical nerve stimulation is a commonly used treatment approach to alleviate acute and chronic pain by reducing the sensitization of dorsal horn neurons, elevating levels of gamma-aminobutyric acid and glycine, and inhibiting glial activation. Many systematic reviews and meta-analyses assessing clinical trials looking at the efficacy of TENS for different sources of pain, however, have been inconclusive due to a lack of high-quality and unbiased evidence. Potential benefits of TENS treatment include its safety profile, relative affordability, ease of self-administration, and availability over-the-counter without a prescription. In principle, an adequate intensity of stimulation is necessary to achieve pain relief with TENS. An analysis of treatment fidelity—meaning that the delivery of TENS in a trial was in accordance with current clinical advice, such as using "a strong but comfortable sensation" and suitable, frequent treatment durations—showed that higher-fidelity trials tended to have a positive outcome. Acute pain For people with recent-onset pain i.e., fewer than three months, such as pain associated with surgery, trauma, and medical procedures, TENS may be better than placebo in some cases. The evidence of benefit is very weak, though. Musculoskeletal and neck/back pain There is some evidence to support a benefit of using TENS in chronic musculoskeletal pain. Results from a task force on neck pain in 2008 found no clinically significant benefit of TENS for the treatment of neck pain when compared to placebo. A 2010 review did not find evidence to support the use of TENS for chronic low back pain. Another study examining knee osteoarthritis patients found that TENS demonstrated efficacy and a better safety profile relative to weak opiates. Given the age, comorbidity frequency, tendency toward polypharmacy, and sensitivity to adverse reactions among individuals most frequently reporting osteoarthritis, TENS could be a non-pharmacological alternative to analgesics in the management of knee osteoarthritis pain. Neuropathy and phantom limb pain There is tentative evidence that TENS may be useful for painful diabetic neuropathy. As of 2015, the efficacy of TENS for phantom limb pain is unknown; no randomized controlled trials have been performed. 
A few studies have shown objective evidence that TENS may modulate or suppress pain signals in the brain. One used evoked cortical potentials to show that electric stimulation of peripheral A-beta sensory fibers reliably suppressed A-delta fiber nociceptive (pain perception) processing. Two other studies used functional magnetic resonance imaging (fMRI): one showed that high-frequency TENS produced a decrease in pain-related cortical activations in patients with carpal tunnel syndrome, while the other showed that low-frequency TENS decreased shoulder impingement pain and modulated pain-induced activation in the brain. Labor and menstrual pain Early studies found that TENS "has been shown not to be effective in postoperative and labour pain." These studies also had questionable ability to truly blind the patients. However, more recent studies have shown that TENS was "effective for relieving labour pain, and they are well considered by pregnant participants." One study also showed that there was a significant change in laboring individuals' time to request analgesia such as an epidural. The group with the TENS waited five additional hours relative to those without TENS. Both groups were satisfied with the pain relief that they had from their choices. No maternal, infant, or labor problems were noted. There is tentative evidence that TENS may be helpful for treating pain from dysmenorrhoea, however further research is required. Cancer pain Non-pharmacological treatment options for people experiencing pain caused by cancer are much needed, however, it is not clear from the weak studies that have been published if TENS is an effective approach. Bladder function Percutaneous and transcutaneous electrical nerve stimulation in the tibial nerve have been used in the treatment of overactive bladder and urinary retention. Sometimes it is also done in the sacrum. Systematic review studies have shown limited evidence on the effectiveness, and more quality research is needed. A major trial found that in a care home context transcutaneous posterior tibial nerve stimulation did not improve urinary incontinence. Dentistry TENS has been extensively used in non-odontogenic orofacial pain relief. In addition, TENS and ultra low frequency-TENS (ULF-TENS) are commonly employed in diagnosis and treatment of temporomandibular joint dysfunction (TMD). Further clinical studies are required to determine its efficacy. Tremor A wearable neuromodulation device that delivers electrical stimulation to nerves in the wrist is now available by prescription. Worn around the wrist, it acts as a non-invasive treatment for those living with essential tremor. The stimulator has electrodes that are placed circumferentially around a patient's wrist. Positioning the electrodes on generally opposing sides of the target nerve can result in improved stimulation of the nerve. In clinical trials reductions in hand tremors were reported following noninvasive median and radial nerve stimulation. Transcutaneous afferent patterned stimulation (TAPS) is a tremor-customized therapy, based on the patient's measured tremor frequency, and is delivered transcutaneously to the median and radial nerves of a patient's wrist. The patient specific TAPS stimulation is determined through a calibration process performed by the accelerometer and microprocessor on the device. The Cala ONE delivers TAPS in a wrist-worn device that is calibrated to treat tremor symptoms. 
Cala ONE received de novo FDA clearance in April 2018 for the transient relief of hand tremors in adults with essential tremor and is currently marketed as Cala Trio. Contraindications People who have implanted electronic medical devices, including pacemakers and cardioverter-defibrillators, are advised not to use TENS. In addition, caution should be taken before using TENS in those who are pregnant, have epilepsy, have an active malignancy, have deep vein thrombosis, have skin that is damaged, or are frail. The use of TENS is likely to be less effective on areas of numb skin or decreased sensation due to nerve damage. It may also cause skin irritation due to the inability to feel currents until they are too high. There is an unknown level of risk when placing electrodes over an infection (possible spreading due to muscle contractions), but cross contamination with the electrodes themselves is of greater concern. There are several anatomical locations where TENS electrodes are contraindicated: Over the eyes due to the risk of increasing intraocular pressure Transcerebrally On the front of the neck due to the risk of an acute hypotension (through a vasovagal response) or even a laryngospasm Through the chest using anterior and posterior electrode positions, or other transthoracic applications understood as "across a thoracic diameter"; this does not preclude coplanar applications Internally, except for specific applications of dental, vaginal, and anal stimulation that employ specialized TENS units On broken skin areas or wounds, although it can be placed around wounds Over a tumor or malignancy, based on in vitro experiments where electricity promotes cell growth Directly over the spinal column Cardiac pacemakers TENS used across an artificial cardiac pacemaker or other indwelling stimulator, including across its leads, may cause interference and failure of the implanted device. Serious accidents have been recorded in cases when this principle was not observed. A 2009 review in this area suggests that electrotherapy, including TENS, is "best avoided" in patients with pacemakers or implantable cardioverter-defibrillators (ICDs). They add that "there is no consensus and it may be possible to safely deliver these modalities in a proper setting with device and patient monitoring", and recommend further research. The review found several reports of ICDs administering inappropriate treatment due to interference with TENS devices, but notes that the reports on pacemakers are mixed: some non-programmable pacemakers were inhibited by TENS, but others were unaffected or auto-reprogrammed. Pregnancy TENS should be used with caution on people with epilepsy or on pregnant women; do not use over the area of the uterus, as the effects of electrical stimulation on the developing fetus are not known. Side effects Overall, TENS has been found to be safe compared with pharmaceutical medications for treating pain. Potential side effects include skin itching near the electrodes and mild redness of the skin (erythema). Some people also report that they dislike the sensation associated with TENS. Device types The TENS device acts to stimulate the sensory nerves and a small portion of the peripheral motor nerves; the stimulation causes multiple mechanisms to trigger and manage the sense of pain in a patient. TENS operates by two main mechanisms: it stimulates competing sensory neurons at the pain perception gate, and it stimulates the opiate response. The mechanism that will be used varies with the type of device. 
History Electrical stimulation for pain control was used in ancient Rome, in AD 63. It was reported by Scribonius Largus that pain was relieved by standing on an electric fish at the seashore. In the 16th through the 18th centuries, various electrostatic devices were used for headache and other pains. Benjamin Franklin was a proponent of this method for pain relief. In the 19th century a device called the Electreat, along with numerous other devices, was used for pain control and cancer cures. Only the Electreat survived into the 20th century, but it was not portable and offered limited control of the stimulus. Development of the modern TENS unit is generally credited to C. Norman Shealy. Modern The first modern, patient-wearable TENS was patented in the United States in 1974. It was initially used for testing the tolerance of chronic pain patients to electrical stimulation before implantation of electrodes in the spinal cord dorsal column. The electrodes were attached to an implanted receiver, which received its power from an antenna worn on the surface of the skin. Although intended only for testing tolerance to electrical stimulation, many of the patients said they received so much relief from the TENS itself that they never returned for the implant. A number of companies began manufacturing TENS units after the commercial success of the Medtronic device became known. The neurological division of Medtronic, founded by Don Maurer, Ed Schuck and Charles Ray, developed a number of applications for implanted electrical stimulation devices for treatment of epilepsy, Parkinson's disease, and other disorders of the nervous system. Today many people confuse TENS with electrical muscle stimulation (EMS). EMS and TENS devices look similar, with both using long electric lead wires and electrodes. TENS is for blocking pain, whereas EMS is for stimulating muscles. Beginning in the late 1970s, further research into electronic pain-reduction devices was conducted in the USSR as part of its space program. Dr. Alexander Karasev developed scenar (or skenar) devices and, later in the early 2000s, cosmodic devices. Each of these device types uses the fundamental technique of reading electrical signals in the skin, analyzing the signals, and returning therapeutic electrical pulses into the nerves. He terms TENS devices first-generation electronic pain relief devices, scenar devices second-generation, cosmodic devices third-generation, and the D.O.V.E. (Device Organizing Vital Energy) device an advanced second-generation device which automatically incorporates some cosmodic therapeutic features. Research TENS has been reported to have varied effects on the brain. A randomized controlled trial in 2017 showed that sensory ULF-TENS applied to the skin near the trigeminal nerve reduced the effect of acute mental stress, as assessed by heart rate variability (HRV). Further high-quality studies are required to determine the effectiveness of TENS for treating dementia. A head-mounted TENS device called Cefaly was approved by the United States Food and Drug Administration (FDA) in March 2014 for the prevention of migraine attacks. The Cefaly device was found effective in preventing migraine attacks in a randomized sham-controlled trial. This was the first TENS device the FDA approved for pain prevention, as opposed to pain suppression.
A study performed on healthy human subjects demonstrated that repeated application of TENS can generate analgesic tolerance within five days, reducing its efficacy. The study noted that TENS causes the release of endogenous opioids, and that the analgesia is likely due to opioid tolerance mechanisms. The pain-reduction ability of TENS has so far not been confirmed by sufficiently large randomized controlled trials. One meta-analysis of several hundred TENS studies concluded that there was a significant overall reduction of pain intensity due to TENS, but there were too few participants and controls to be entirely certain of the results' validity. The authors therefore downgraded their confidence in the results by two levels, to low certainty. See also Electroacupuncture Electrical muscle stimulation Erotic electrostimulation — for sexual uses of TENS devices Microcurrent electrical neuromuscular stimulator References Books cited Further reading Electrotherapy Neurotechnology Medical equipment Pain management
Transcutaneous electrical nerve stimulation
Biology
2,974
72,818,708
https://en.wikipedia.org/wiki/12%20Cassiopeiae
12 Cassiopeiae (12 Cas) is a white giant in the constellation Cassiopeia, about 860 light years away. It has an apparent magnitude of 5.4, so it is faintly visible to the naked eye. The spectrum of 12 Cassiopeiae is classified as a B9-type giant. About three times as massive as the Sun and 386 times as luminous, it has expanded away from the main sequence after exhausting its core hydrogen. It now has a radius of with an effective temperature of about , leading to a bolometric luminosity of . References Cassiopeia (constellation) Cassiopeiae, 12 0093 002011 001960 BD+61 0069 B-type giants
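The radius, effective temperature, and bolometric luminosity quoted together in stubs like this one (the numeric values were lost in extraction here) are linked by the Stefan–Boltzmann law; in solar units, taking the Sun's effective temperature as about 5772 K:

$$L = 4\pi R^{2}\sigma T_{\mathrm{eff}}^{4} \qquad\Longleftrightarrow\qquad \frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{5772\ \mathrm{K}}\right)^{4}$$

Any two of the three quantities determine the third, which is how such luminosities are usually derived rather than measured directly.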
12 Cassiopeiae
Astronomy
156
71,367,978
https://en.wikipedia.org/wiki/Hans%20Max%20Jahn
Hans Max Jahn (4 July 1853 – 7 August 1906) was a German physical chemist who worked on thermochemistry and electrochemistry. As an experimental chemist he identified problems in the contemporary theory of electrolyte conductivity and examined the thermodynamic validity of the Gibbs-Helmholtz equation. Jahn was born in Küstrin (now in Poland) and was educated at the Universities of Berlin and Heidelberg in chemistry and mathematics. His early influences included A. W. von Hofmann, whom he assisted as a student, Robert Bunsen, G. Kirchhoff and the mathematician L. Kronecker. After receiving a doctorate in 1875 for work in organic chemistry he became an assistant to Anastassios Christomanos at Athens. In 1877 he moved to Vienna, working under Ernst Ludwig (1842–1915), and in 1884 he moved to Graz. From 1899 he taught at the agricultural school and university in Berlin. Jahn worked with Walther Nernst, and one of his experimental results, in 1900, was that the conductivity of certain electrolytes increased with increasing concentration. This went against the theory that Svante August Arrhenius had proposed and resulted in a major debate. Jahn married Sophie von Sichrovsky in 1883. Jahn was a keen violinist but suffered from deteriorating hearing. He died in 1906 following complications after an appendectomy. References External links Grundriss der Elektrochemie (1895) 1853 births 1906 deaths Physical chemists Electrochemists Heidelberg University alumni
Hans Max Jahn
Chemistry
319
22,685,393
https://en.wikipedia.org/wiki/S2%20%28star%29
S2, also known as S0–2, is a star in the star cluster close to the supermassive black hole Sagittarius A* (Sgr A*), orbiting it with a period of 16.0518 years, a semi-major axis of about 970 au, and a pericenter distance of 17 light hours (18 Tm or 120 au) – an orbit with a period only about 30% longer than that of Jupiter around the Sun, but coming no closer than about four times the distance of Neptune from the Sun. The mass when the star first formed is estimated by the European Southern Observatory (ESO) to have been approximately . Based on its spectral type (B0V ~ B3V), it probably has a mass of 10 to 15 solar masses. Its changing apparent position has been monitored since 1995 by two groups (at UCLA and at the Max Planck Institute for Extraterrestrial Physics) as part of an effort to gather evidence for the existence of a supermassive black hole in the center of the Milky Way galaxy. The accumulating evidence points to Sgr A* as being the site of such a black hole. By 2008, S2 had been observed for one complete orbit. In 2020, partway through its next orbit, the GRAVITY collaboration released an analysis showing full agreement with Schwarzschild geodesics. A team of astronomers, mainly from the Max Planck Institute for Extraterrestrial Physics, used observations of S2's orbital dynamics around Sgr A* to measure the distance from the Earth to the Galactic Center. They determined it to be in close agreement with prior determinations by other methods. S2 was precisely tracked during its May 2018 close approach to Sgr A*, with results in accord with general relativity predictions. Nomenclature The designation S0–2 was first used in 1998. S0 indicates a star within one arc-second of Sgr A*, indicating the galactic centre, and S0–2 was the second closest star seen at the time of the measurements. The star had been catalogued simply as S2 a year earlier, the second of eleven infrared sources near the galactic centre, numbered approximately anti-clockwise. It is a coincidence that the star is numbered "2" in both lists; other catalogues number it differently. Orbit The highly eccentric orbit of S2 will give astronomers an opportunity to test for various effects predicted by general relativity and even extra-dimensional effects. These effects reached a maximum at closest approach, which occurred in mid-2018. Given a recent estimate of for the mass of the Sgr A* black hole and S2's close approach, this makes S2's the fastest known ballistic orbit, reaching speeds exceeding 5,000 km/s (11,000,000 mph, or about 1.7% of the speed of light). The motion of S2 is also useful for detecting the presence of other objects near to Sgr A*. It is believed that there are thousands of stars, as well as dark stellar remnants (stellar black holes, neutron stars, white dwarfs) distributed in the volume through which S2 moves. These objects will perturb S2's orbit, causing it to deviate gradually from the Keplerian ellipse that characterizes motion around a single point mass. So far, the strongest constraint that can be placed on these remnants is that their total mass comprises less than one percent of the mass of the supermassive black hole. 2018 pericentre passage In 2018, S2 made its closest approach to Sgr A*, reaching 7,650 km/s or almost 3% of the speed of light, while passing the black hole at a distance of just 120 AU or about 1,400 times its Schwarzschild radius.
S2 reached its pericenter on May 19, 2018, while its velocity in the line of sight from Earth peaked in April, and later hit its minimum in late August and early September. Independent analyses by the GRAVITY collaboration (led by Reinhard Genzel) and the KECK/UCLA Galactic Center Group (led by Andrea Ghez) revealed a combined transverse Doppler and gravitational redshift up to 200 km/s/c, in agreement with general relativity predictions. Additional analysis has revealed a Schwarzschild precession of 12 arcminutes (0.2 degrees) in S2's orbit caused by the close passage, fully consistent with general relativity. S0–102 In 2012, a star called S0–102 (or S55) was found to be orbiting even closer to the Milky Way's central supermassive black hole than S0–2 does. At one-sixteenth the brightness of S0–2, S0–102 was not initially recognized because it required many more years of observations to distinguish it from its local infrared background. S0–102 has an orbital period of 12.8 years, even shorter than that of S0–2. Of all the stars orbiting the black hole, only these two have their orbital parameters and trajectories fully known in all three dimensions of space. The discovery of two stars orbiting the central black hole so closely with their orbits fully described is of extreme interest to astronomers, as the pair together will allow much more precise measurements on the nature of gravity and general relativity around the black hole than would be possible from using S0–2 alone. A still closer star S62 has since been discovered with an orbital period of 9.9 years. Image gallery See also Lists of stars References External links B-type main-sequence stars Sagittarius (constellation) Tests of general relativity Galactic Center
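As a consistency check not present in the source text, Kepler's third law applied to the rounded orbital elements quoted above (semi-major axis about 970 au, period about 16.05 years) gives the mass enclosed by S2's orbit in solar-system units:

$$\frac{M}{M_{\odot}} \approx \left(\frac{a}{1\,\mathrm{au}}\right)^{3}\left(\frac{P}{1\,\mathrm{yr}}\right)^{-2} = \frac{970^{3}}{16.05^{2}} \approx 3.5\times10^{6}$$

This lands within roughly 15% of the commonly cited ~4 million solar masses for Sgr A*; the published analyses use more precise orbital elements and full relativistic modelling.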
S2 (star)
Astronomy
1,159
4,862,582
https://en.wikipedia.org/wiki/ALCOR
ALCOR (ALGOL Converter, acronym) is an early computer language definition created by the ALCOR Group, a consortium of universities, research institutions and manufacturers in Europe and the United States which was founded in 1959 and which had 60 members in 1966. The group had the aim of a common compiler specification for a subset of ALGOL 60 after the ALGOL meeting in Copenhagen in 1958. Beyond its programming meaning, the name also carries an astronomical allusion: just as the name Algol refers to the star Algol, Alcor refers to the star Alcor, the fainter companion of the 2nd-magnitude star Zeta Ursae Majoris. This was sometimes remarked on ironically as a bad omen for the future of the language. In Europe, a high-level machine architecture for ALGOL 60 was devised which was emulated on various real computers, among them the Siemens 2002 and the IBM 7090. An ALGOL manual was published which provided a detailed introduction to all features of the language, with many program snippets, and four appendixes:
- Revised Report on the Algorithmic Language ALGOL 60
- Report on Subset ALGOL 60 (IFIP)
- Report on Input-Output Procedures for ALGOL 60
- An early "standard" character set for representing ALGOL 60 code on paper and paper tape. This character set introduced the characters "×", ";", "[", "]", and "⏨" into the CCITT-2 code, the first two replacing "?" and the BEL control character, the others taking unused code points.
References Baumann, R. "ALGOL Manual of the ALCOR Group, Pts. 1, 2 & 3", Elektronische Rechenanlagen No. 5 (Oct. 1961), 206–212; No. 6 (Dec. 1961), 259–265; No. 2 (Apr. 1962) (in German) Papertape, punched card, magnetic tape coding schemes Computer Museum, University of Amsterdam, the Netherlands External links ALCOR in The Encyclopedia of Computer Languages The ALCOR Project, Klaus Samelson, Friedrich L. Bauer, 1962. Algol programming language family Systems programming languages Procedural programming languages Character encoding Character sets Programming languages created in the 1960s
ALCOR
Technology
471
41,164,716
https://en.wikipedia.org/wiki/Malettinin
Malettinins are polyketide-derived antimicrobial compounds produced by fungal ascomycetes in the genus Hypoxylon. Chemical structures References Antimicrobials
Malettinin
Chemistry,Biology
42
13,265,673
https://en.wikipedia.org/wiki/Lanix
Lanix Internacional, S.A. de C.V. is a multinational computer and mobile phone manufacturer based in Hermosillo, Mexico. Lanix primarily markets and sells its products in Mexico and the Latin American export market. History Lanix was founded in Hermosillo, Sonora, Mexico in 1990, and released its first computer, the PC 286, the same year. Throughout the 1990s Lanix expanded into the development and production of more sophisticated electronics components such as optical drives, servers, memory drives and flash memory. In 2002 Lanix opened its first factory outside of Mexico in Santiago, Chile to cater to the South American market. By 2006 Lanix had gained a market share of 5% of Mexico's electronics market and began diversifying its product line to include LCD televisions and monitors, and in 2007 began manufacturing mobile phones. Currently Lanix offers products in the consumer, professional and government markets throughout Latin America. In 2010 Lanix announced an ambitious plan to gain market share in the Latin American computer market and expanded operations to include every country in Latin America. Lanix has production facilities at its original headquarters in Hermosillo, Sonora, Mexico and international facilities in Santiago, Chile and Bogota, Colombia. At the 2009 Intel Solutions Summit hosted by Intel, Lanix won an award in the "mobile solution" category. In March 2011, Lanix began offering a system where buyers can custom-build their own computer, choosing different types of chipsets, memory, and other components. In 2012 Lanix expanded its product portfolio with its first smartphone, the Ilium S100, and positioned itself as one of the best-selling brands in the Mexican market. In 2015 the company announced its first smartphone running Windows Phone. In June 2017 Lanix renewed its image, updating its logo, launching new high-end smartphones, and updating its webpage. Products , Lanix manufactures desktops, laptops, tablets, servers, netbooks, monitors, optical disc drives, smartphones, flash memory and random-access memory. As of 2010, it made one of the most powerful production Windows desktops in the world, the Lanix Titan Magnum Extreme. Smartphones and tablet computers In 2007, Lanix announced a mobile division specializing in developing smartphones and tablets. In 2010, it showed a smartphone named the Ilium running the Android operating system. Lanix smartphones are offered by Telcel, a subsidiary of América Móvil. In 2010, Lanix unveiled a tablet computer named the W10 running Windows 7. An Android version was to be made available through Telcel. In 2017, Lanix announced a new portfolio of smartphones with features competitive in the market of the time. Mexican government contracts Lanix has won several major contracts to provide electronics to government entities in Mexico, which has been a key part of the company's success, including a contract from the Mexican Secretariat of Public Education to supply 16,000 classrooms across Mexico with computers. See also References Mobile phone manufacturers Electronics companies established in 1990 Consumer electronics brands Computer companies of Mexico Computer hardware companies Computer systems companies Hermosillo Mexican brands Mexican companies established in 1990 Electronics companies of Mexico
Lanix
Technology
629
230,527
https://en.wikipedia.org/wiki/Seismic%20hazard
A seismic hazard is the probability that an earthquake will occur in a given geographic area, within a given window of time, and with ground motion intensity exceeding a given threshold. With a hazard thus estimated, risk can be assessed and included in such areas as building codes for standard buildings, designing larger buildings and infrastructure projects, land use planning and determining insurance rates. Seismic hazard studies may also generate two standard measures of anticipated ground motion, both confusingly abbreviated MCE: the simpler probabilistic Maximum Considered Earthquake (or Event), used in standard building codes, and the more detailed and deterministic Maximum Credible Earthquake incorporated in the design of larger buildings and civil infrastructure like dams or bridges. It is important to clarify which MCE is being discussed. Calculations for determining seismic hazard were first formulated by C. Allin Cornell in 1968 and, depending on their level of importance and use, can be quite complex. The regional geology and seismology setting is first examined for sources and patterns of earthquake occurrence, both in depth and at the surface from seismometer records; secondly, the impacts from these sources are assessed relative to local geologic rock and soil types, slope angle and groundwater conditions. Zones of similar potential earthquake shaking are thus determined and drawn on maps. The well-known San Andreas Fault is illustrated as a long narrow elliptical zone of greater potential motion, like many areas along continental margins associated with the Pacific Ring of Fire. Zones of higher seismicity in the continental interior may be the sites of intraplate earthquakes and tend to be drawn as broad areas, based on historic records, like the 1812 New Madrid earthquake, since specific causative faults are generally not identified as earthquake sources. Each zone is given properties associated with source potential: how many earthquakes per year, the maximum size of earthquakes (maximum magnitude), etc. Finally, the calculations require formulae that give the required hazard indicators for a given earthquake size and distance. For example, some districts prefer to use peak acceleration, others use peak velocity, and more sophisticated uses require response spectral ordinates. The computer program then integrates over all the zones and produces probability curves for the key ground motion parameter. The final result gives a 'chance' of exceeding a given value over a specified amount of time. Standard building codes for homeowners might be concerned with a 1-in-500-year chance, while nuclear plants look at a 10,000-year time frame. A longer-term seismic history can be obtained through paleoseismology. The results may be in the form of a ground response spectrum for use in seismic analysis. More elaborate variations on the theme also look at the soil conditions. Higher ground motions are likely to be experienced on a soft swamp compared to a hard rock site. The standard seismic hazard calculations become adjusted upwards when postulating characteristic earthquakes. Areas with high ground motion due to soil conditions are also often subject to soil failure due to liquefaction. Soil failure can also occur due to earthquake-induced landslides in steep terrain. Large-area landsliding can also occur on rather gentle slopes, as was seen in the Good Friday earthquake in Anchorage, Alaska, March 28, 1964.
MCEs In a normal seismic hazard analysis intended for the public, the "maximum considered earthquake", or "maximum considered event" (MCE), for a specific area is an earthquake that is expected to occur once in approximately 2,500 years; that is, it has a 2-percent probability of being exceeded in 50 years. The term is used specifically for general building codes, which people commonly occupy; building codes in many localities will require non-essential buildings to be designed for "collapse prevention" in an MCE, so that the building remains standing – allowing for safety and escape of occupants – rather than full structural survival of the building. The far more detailed and stringent MCE, the "maximum credible earthquake", is used in designing skyscrapers and larger civil infrastructure, like dams, where structural failure could lead to other catastrophic consequences. These MCEs might require determining more than one specific earthquake event, depending on the variety of structures included. US seismic hazard maps Some maps released by the USGS show peak ground acceleration with a 10% probability of exceedance in 50 years, measured in metres per second squared. For parts of the US, the National Seismic Hazard Mapping Project in 2008 resulted in seismic hazard maps showing peak acceleration (as a percentage of gravity) with a 2% probability of exceedance in 50 years. Temblor, a company founded in 2014, offers a seismic hazard rank for all of the conterminous US. This service is free and ad-free for the public. The hazard rank "is made for the likelihood of experiencing strong shaking (0.4g peak ground acceleration) in 30 years, based on the 2014 USGS NSHMP hazard model." Global seismic hazard maps Global seismic hazard maps exist too, which similarly present the level of certain ground motions that have a 10% probability of exceedance (or a 90% chance of non-exceedance) during a 50-year time span (corresponding to a return period of 475 years). See also C. Allin Cornell Earthquake engineering Mitigation of seismic motion Neotectonics Seismic loading Seismic performance Vibration control References External links Global Seismic Hazard Assessment Program Infrastructure Risk Research Project at The University of British Columbia, Vancouver, Canada Diagnose the impact of global earthquakes from direct and indirect eyewitnesses contributions Earthquake and seismic risk mitigation
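The exceedance probabilities and return periods quoted above are tied together by the Poisson (memoryless) model standard in probabilistic seismic hazard analysis. For an exposure time $t$ and mean return period $T$:

$$p = 1 - e^{-t/T} \qquad\Longleftrightarrow\qquad T = \frac{-t}{\ln(1-p)}$$

With $t = 50$ years, $p = 10\%$ gives $T \approx 475$ years, and $p = 2\%$ gives $T \approx 2{,}475$ years, which is the "approximately 2,500 years" attached to the probabilistic MCE.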
Seismic hazard
Engineering
1,118
769,434
https://en.wikipedia.org/wiki/Setoid
In mathematics, a setoid (X, ~) is a set (or type) X equipped with an equivalence relation ~. A setoid may also be called E-set, Bishop set, or extensional set. Setoids are studied especially in proof theory and in type-theoretic foundations of mathematics. Often in mathematics, when one defines an equivalence relation on a set, one immediately forms the quotient set (turning equivalence into equality). In contrast, setoids may be used when a difference between identity and equivalence must be maintained, often with an interpretation of intensional equality (the equality on the original set) and extensional equality (the equivalence relation, or the equality on the quotient set). Proof theory In proof theory, particularly the proof theory of constructive mathematics based on the Curry–Howard correspondence, one often identifies a mathematical proposition with its set of proofs (if any). A given proposition may have many proofs, of course; according to the principle of proof irrelevance, normally only the truth of the proposition matters, not which proof was used. However, the Curry–Howard correspondence can turn proofs into algorithms, and differences between algorithms are often important. So proof theorists may prefer to identify a proposition with a setoid of proofs, considering proofs equivalent if they can be converted into one another through beta conversion or the like. Type theory In type-theoretic foundations of mathematics, setoids may be used in a type theory that lacks quotient types to model general mathematical sets. For example, in Per Martin-Löf's intuitionistic type theory, there is no type of real numbers, only a type of regular Cauchy sequences of rational numbers. To do real analysis in Martin-Löf's framework, therefore, one must work with a setoid of real numbers, the type of regular Cauchy sequences equipped with the usual notion of equivalence. Predicates and functions of real numbers need to be defined for regular Cauchy sequences and proven to be compatible with the equivalence relation. Typically (although it depends on the type theory used), the axiom of choice will hold for functions between types (intensional functions), but not for functions between setoids (extensional functions). The term "set" is variously used either as a synonym of "type" or as a synonym of "setoid". Constructive mathematics In constructive mathematics, one often takes a setoid with an apartness relation instead of an equivalence relation, called a constructive setoid. One sometimes also considers a partial setoid using a partial equivalence relation or partial apartness (see e.g. Barthe et al., section 1). See also Groupoid Notes References . . External links Implementation of setoids in Coq Abstract algebra Category theory Proof theory Type theory Equivalence (mathematics)
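As a concrete illustration not drawn from the article, the following minimal Python sketch keeps intensional equality (the language's built-in == on raw representations) separate from a setoid's extensional equality, using the familiar example of rational numbers represented as pairs of integers:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Pair = Tuple[int, int]

@dataclass(frozen=True)
class Setoid:
    """A carrier together with an equivalence relation, kept
    separate from the built-in equality on representations."""
    equiv: Callable[[Pair, Pair], bool]

    def equal(self, x: Pair, y: Pair) -> bool:
        return self.equiv(x, y)

# Rationals as integer pairs: (a, b) ~ (c, d)  iff  a*d == c*b.
rationals = Setoid(lambda p, q: p[0] * q[1] == q[0] * p[1])

print((1, 2) == (2, 4))                 # False: intensional equality
print(rationals.equal((1, 2), (2, 4)))  # True: extensional equality
```

Functions on the setoid (say, addition of fractions) must then be shown to respect equiv, mirroring the compatibility proofs described above for real numbers as Cauchy sequences.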
Setoid
Mathematics
585
45,459,609
https://en.wikipedia.org/wiki/Leccinellum%20rugosiceps
Leccinellum rugosiceps, commonly known as the wrinkled Leccinum, is a species of bolete fungus. It is found in Asia, North America, Central America, and South America, where it grows in an ectomycorrhizal association with oak. Fruitbodies have convex, yellowish caps up to in diameter. In age, the cap surface becomes wrinkled, often revealing white cracks. The stipe is up to long and wide, with brown scabers on an underlying yellowish surface. It has firm flesh that stains initially pinkish to reddish and then grayish or blackish when injured. The pore surface on the cap underside is yellowish. Fruitbodies are edible, although opinions vary as to their desirability. Taxonomy The species was first described scientifically in 1904 by American mycologist Charles Horton Peck as Boletus rugosiceps. The type collection was made in the woods of Port Jefferson, New York. Rolf Singer transferred it to Leccinum in 1945. Synonyms include Krombholzia rugosiceps, published by Rolf Singer in 1942, and Krombholziella rugosiceps, published by Josef Šutara in 1982. Krombholzia and Krombholziella are now obsolete genera that have since been subsumed into Leccinum. Leccinellum rugosiceps is classified in a grouping of species that are associated with oak and hornbeam. Others in this grouping include L. albellum and L. pseudoscabrum. The specific epithet rugosiceps, which is derived from the Latin roots for "rough" and "head", refers to its wrinkled cap. It is commonly known as the "wrinkled Leccinum". Description The convex cap measures wide. Its color is orange-yellow, aging to yellow-brown. The cap margin has a narrow flap of sterile tissue. The surface of the cap is dry, with wrinkles and pits at maturity. It often becomes cracked in age, and the whitish flesh underneath shows through. The cap tends to undergo significant color changes throughout its development (first bright yellow, then dark brown, then finally pale tan), which may make it difficult to identify in the field. The flesh is white to pale yellow, and it stains reddish to burgundy when cut or bruised. This staining is most prominent at the junction of the cap and the stipe. Further exposure over the course of 20–60 minutes results in the flesh becoming grayish to blackish. The flesh has no distinctive odor or taste. The pore surface is initially dull yellow, and sometimes ages to dingy olive-brown. Unlike many other boletes, it does not turn blue when bruised, although it may have natural blue-green stains. The pores are circular, measuring less than 1 mm, while the tubes extend to 8–14 mm deep. The stipe measures long by thick. It is nearly equal in width throughout or tapered from top to base. Its color is pale yellow to brownish underneath the pale brown scabers that darken in age. The spore print color ranges from brown to olive-brown. Spores are spindle shaped, measuring 15–19 μm long by 5–6 μm wide. They have a smooth surface, and are inamyloid (i.e., not staining with Melzer's reagent). The cap flesh is bilateral and inamyloid. The cystidia on the pores are present as conspicuous pleuro- and cheilocystidia. The cap cuticle is present as a hymeniform layer. Clamp connections are absent. Several chemical tests can be used to help verify an identification of L. rugosiceps. A drop of ammonium hydroxide solution turns the cap cuticle a reddish color or is unreactive, and is yellow or unreactive on the flesh. A drop of dilute potassium hydroxide (KOH) turns the cap surface red, and the flesh yellowish to orangish.
Application of iron(II) sulfate solution produces a gray color on the cap surface, and greenish-gray to olive coloration on the flesh. Similar species The Costa Rican bolete Leccinum neotropicalis is a closely allied species. It is distinguished from L. rugosiceps by its dark brown to dark reddish-brown color, and flesh that does not stain with injury. L. viscosum, found in Belize, features a similar cap and scaber pigmentation on the stipe, and similar color changes in response to injury in the flesh of the cap and the apex of the stipe; unlike L. rugosiceps, however, it also stains at the stipe base, and the cap is sticky rather than dry. L. crocipodium is a lookalike that is difficult to distinguish from L. rugosiceps. It generally has a darker cap, paler scabers, and somewhat wider spores, although these characteristics are variable. In his original species description, Charles Peck noted that L. rugosiceps grew with Hemileccinum rubropunctum, "from which it is easily separated by its dry pileus, smaller tubes and stouter stem." Edibility An edible species, Leccinellum rugosiceps mushrooms have been described variously as "great" and "of poor quality". They have a nutty flavor and firm texture; older specimens are less firm but retain the flavor. Drying the mushrooms enhances the flavor. The stipe tends to harbor insect larvae and should be cleaned before consumption. The sugar alcohol mannitol is present in the fruitbodies. Habitat and distribution Leccinellum rugosiceps is an ectomycorrhizal fungus that associates with oak. In eastern North America, pin oak (Quercus palustris) is a frequent host. The bolete fruits singly or in groups in forests and shaded lawns, and is often found in areas disturbed by human activity, such as pathsides and picnic areas. Fruiting typically occurs from July to September. A Chinese study evaluating the concentrations of heavy metals in boletes found that in L. rugosiceps fruitbodies, the levels of cadmium, zinc, copper, and mercury exceeded national safety standards for edible fungi. The bolete is found from eastern Canada south to Florida and Mississippi, and west to Michigan in the United States. The distribution extends south to Mexico, Costa Rica, and Colombia. It is one of several boletes that show a north-to-south clinal trend. In Asia, the species has been reported from India, Korea, China, and Taiwan. Taiwanese specimens tend to have slightly smaller spores (10–16 by 4–5 μm) than those from mainland China or from America. See also List of North American boletes References rugosiceps Edible fungi Fungi described in 1904 Fungi of Central America Fungi of North America Fungi of Asia Taxa named by Charles Horton Peck Fungus species
Leccinellum rugosiceps
Biology
1,438
75,357,507
https://en.wikipedia.org/wiki/Ji%C5%99%C3%AD%20Rosick%C3%BD%20%28mathematician%29
Jiří Rosický (born 1946) is a Czech mathematician. He works in the field of category theory. He is cited as one of the first researchers to introduce tangent categories and tangent bundle functors. Life Jiří Rosický was born in 1946. From 1963 to 1968, he studied mathematics at the Faculty of Science of Masaryk University. In 1969, he started to work in the department of algebra and geometry at the Faculty of Science. In 1979, he became head of the department. Work His work is in category theory, model theory, abstract homotopy theory, and general algebra. Along with Jiří Adámek he has written a book on the theory of locally presentable and accessible categories. References 1946 births Living people Czech mathematicians Category theorists
Jiří Rosický (mathematician)
Mathematics
157
5,785,677
https://en.wikipedia.org/wiki/Landau%27s%20problems
At the 1912 International Congress of Mathematicians, Edmund Landau listed four basic problems about prime numbers. These problems were characterised in his speech as "unattackable at the present state of mathematics" and are now known as Landau's problems. They are as follows:
1. Goldbach's conjecture: Can every even integer greater than 2 be written as the sum of two primes?
2. Twin prime conjecture: Are there infinitely many primes p such that p + 2 is prime?
3. Legendre's conjecture: Does there always exist at least one prime between consecutive perfect squares?
4. Are there infinitely many primes p such that p − 1 is a perfect square? In other words: Are there infinitely many primes of the form n² + 1?
, all four problems are unresolved. Progress toward solutions Goldbach's conjecture Goldbach's weak conjecture, that every odd number greater than 5 can be expressed as the sum of three primes, is a consequence of Goldbach's conjecture. Ivan Vinogradov proved it for large enough n (Vinogradov's theorem) in 1937, and Harald Helfgott extended this to a full proof of Goldbach's weak conjecture in 2013. Chen's theorem, another weakening of Goldbach's conjecture, proves that for all sufficiently large even n, n = p + q, where p is prime and q is either prime or semiprime. Bordignon, Johnston, and Starichkova, correcting and improving on Yamada, proved an explicit version of Chen's theorem: every even number greater than is the sum of a prime and a product of at most two primes. Bordignon and Starichkova reduce this to assuming the Generalized Riemann hypothesis (GRH) for Dirichlet L-functions. Johnston and Starichkova give a version working for all n ≥ 4 at the cost of using a number which is the product of at most 369 primes rather than a prime or semiprime; under GRH they improve 369 to 33. Montgomery and Vaughan showed that the exceptional set of even numbers not expressible as the sum of two primes has density zero, although the set is not proven to be finite. The best current bound on the exceptional set is (for large enough x) due to Pintz, and under RH, due to Goldston. Linnik proved that every large enough even number can be expressed as the sum of two primes and at most K powers of 2, for some (ineffective) constant K. Following many advances (see Pintz for an overview), Pintz and Ruzsa improved this to K = 8. Assuming the GRH, this can be improved to K = 7. Twin prime conjecture In 2013 Yitang Zhang showed that there are infinitely many prime pairs with gap bounded by 70 million, and this result has been improved to gaps of length 246 by a collaborative effort of the Polymath Project. Under the generalized Elliott–Halberstam conjecture this was improved to 6, extending earlier work by Maynard and by Goldston, Pintz and Yıldırım. In 1966 Chen showed that there are infinitely many primes p (later called Chen primes) such that p + 2 is either a prime or a semiprime. Legendre's conjecture It suffices to check that each prime gap starting at p is smaller than . A table of maximal prime gaps shows that the conjecture holds up to 2⁶⁴ ≈ 1.8 × 10¹⁹. A counterexample near that size would require a prime gap a hundred million times the size of the average gap. Järviniemi, improving on work by Heath-Brown and by Matomäki, shows that there are at most exceptional primes followed by gaps larger than ; in particular, A result due to Ingham shows that there is a prime between and for every large enough n. Near-square primes Landau's fourth problem asked whether there are infinitely many primes of the form n² + 1 for integer n.
(The list of known primes of this form is .) The existence of infinitely many such primes would follow as a consequence of other number-theoretic conjectures such as the Bunyakovsky conjecture and the Bateman–Horn conjecture. , this problem is open. Fermat primes are one example of near-square primes. Henryk Iwaniec showed that there are infinitely many numbers of the form with at most two prime factors. Ankeny and Kubilius proved that, assuming the extended Riemann hypothesis for L-functions on Hecke characters, there are infinitely many primes of the form with . Landau's conjecture is for the stronger . The best unconditional result is due to Harman and Lewis and it gives . Merikoski, improving on previous works, showed that there are infinitely many numbers of the form with greatest prime factor at least . Replacing the exponent with 2 would yield Landau's conjecture. The Friedlander–Iwaniec theorem shows that infinitely many primes are of the form . Baier and Zhao prove that there are infinitely many primes of the form with ; the exponent can be improved to under the Generalized Riemann Hypothesis for L-functions and to under a certain Elliott–Halberstam type hypothesis. The Brun sieve establishes an upper bound on the density of primes having the form : there are such primes up to . Hence almost all numbers of the form are composite. See also List of unsolved problems in mathematics Hilbert's problems Notes References External links Conjectures about prime numbers Unsolved problems in number theory
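Landau's fourth problem is easy to explore empirically. The short Python sketch below (illustrative only; enumeration proves nothing about the infinitude question) lists primes of the form n² + 1 by trial division:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Primes of the form n^2 + 1, the subject of Landau's fourth problem.
near_square_primes = [n * n + 1 for n in range(1, 1000)
                      if is_prime(n * n + 1)]
print(near_square_primes[:8])   # [2, 5, 17, 37, 101, 197, 257, 401]
print(len(near_square_primes))  # how many occur for n < 1000
```

Hardy–Littlewood-type heuristics predict that such counts grow like a constant times N/log N for n ≤ N, consistent with what the enumeration shows, but no finite computation settles the conjecture.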
Landau's problems
Mathematics
1,134
47,340,705
https://en.wikipedia.org/wiki/Nu2%20Lyrae
Nu2 Lyrae, Latinized from ν2 Lyrae, or sometimes simply Nu Lyrae, is a solitary star in the northern constellation of Lyra. Based upon an annual parallax shift of 14.09 mas as seen from Earth, it is located around 231 light years from the Sun. With an apparent visual magnitude of 5.23, it is bright enough to be faintly visible to the naked eye. This is a white-hued A-type main sequence star with a stellar classification of A3 V. At an estimated age of 214 million years, it is spinning with a projected rotational velocity of 128 km/s. This gives the star an oblate shape, with an equatorial bulge that is 5% larger than the polar radius. Nu2 Lyrae has an estimated 1.9 times the mass of the Sun and about 1.5 times the Sun's radius. It radiates 32 times the solar luminosity from its photosphere at an effective temperature of around 8,912 K. References A-type main-sequence stars Lyra Lyrae, Nu2 Lyrae, 09 174602 092405 7102 Durchmusterung objects
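The quoted distance follows directly from the parallax, a worked step not spelled out in the stub: a parallax of $\pi = 14.09$ mas corresponds to

$$d = \frac{1}{\pi[\mathrm{arcsec}]}\,\mathrm{pc} = \frac{1000}{14.09}\,\mathrm{pc} \approx 71.0\,\mathrm{pc} \approx 231\ \text{light years},$$

using 1 pc ≈ 3.26 light years.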
Nu2 Lyrae
Astronomy
262
21,626
https://en.wikipedia.org/wiki/Near-Earth%20object
A near-Earth object (NEO) is any small Solar System body orbiting the Sun whose closest approach to the Sun (perihelion) is less than 1.3 times the Earth–Sun distance (astronomical unit, AU). This definition applies to the object's orbit around the Sun, rather than its current position, thus an object with such an orbit is considered an NEO even at times when it is far from making a close approach of Earth. If an NEO's orbit crosses the Earth's orbit, and the object is larger than across, it is considered a potentially hazardous object (PHO). Most known PHOs and NEOs are asteroids, but about a third of a percent are comets. There are over 37,000 known near-Earth asteroids (NEAs) and over 120 known short-period near-Earth comets (NECs). A number of solar-orbiting meteoroids were large enough to be tracked in space before striking Earth. It is now widely accepted that collisions in the past have had a significant role in shaping the geological and biological history of Earth. Asteroids as small as in diameter can cause significant damage to the local environment and human populations. Larger asteroids penetrate the atmosphere to the surface of the Earth, producing craters if they impact a continent or tsunamis if they impact the sea. Interest in NEOs has increased since the 1980s because of greater awareness of this risk. Asteroid impact avoidance by deflection is possible in principle, and methods of mitigation are being researched. Two scales, the simple Torino scale and the more complex Palermo scale, rate the risk presented by an identified NEO based on the probability of it impacting the Earth and on how severe the consequences of such an impact would be. Some NEOs have had temporarily positive Torino or Palermo scale ratings after their discovery. Since 1998, the United States, the European Union, and other nations have been scanning the sky for NEOs in an effort called Spaceguard. The initial US Congress mandate to NASA to catalog at least 90% of NEOs that are at least in diameter, sufficient to cause a global catastrophe, was met by 2011. In later years, the survey effort was expanded to include smaller objects which have the potential for large-scale, though not global, damage. NEOs have low surface gravity, and many have Earth-like orbits that make them easy targets for spacecraft. , five near-Earth comets and six near-Earth asteroids, one of them with a moon, have been visited by spacecraft. Samples of three have been returned to Earth, and one successful deflection test was conducted. Similar missions are in progress. Preliminary plans for commercial asteroid mining have been drafted by private startup companies, but few of these plans were pursued. Definitions Near-Earth objects (NEOs) are formally defined by the International Astronomical Union (IAU) as all small Solar System bodies with orbits around the Sun that are at least partially closer than 1.3 astronomical units (AU; Sun–Earth distance) from the Sun. This definition excludes larger bodies such as planets, like Venus; natural satellites which orbit bodies other than the Sun, like Earth's Moon; and artificial bodies orbiting the Sun. A small Solar System body can be an asteroid or a comet, thus an NEO is either a near-Earth asteroid (NEA) or a near-Earth comet (NEC). The organisations cataloging NEOs further limit their definition of NEO to objects with an orbital period under 200 years, a restriction that applies to comets in particular, but this approach is not universal. 
Some authors further restrict the definition to orbits that are at least partly further than 0.983 AU away from the Sun. NEOs are thus not necessarily currently near the Earth, but they can potentially approach the Earth relatively closely. Many NEOs have complex orbits due to constant perturbation by the Earth's gravity, and some of them can temporarily change from an orbit around the Sun to one around the Earth, but the term is applied flexibly for these objects, too. The orbits of some NEOs intersect that of the Earth, so they pose a collision danger. These are considered potentially hazardous objects (PHOs) if their estimated diameter is above 140 meters. PHOs include potentially hazardous asteroids (PHAs). PHAs are defined based on two parameters relating to respectively their potential to approach the Earth dangerously closely and the estimated consequences that an impact would have if it occurs. Objects with both an Earth minimum orbit intersection distance (MOID) of 0.05 AU or less and an absolute magnitude of 22.0 or brighter (a rough indicator of large size) are considered PHAs. Objects that either cannot approach closer to the Earth than , or which are fainter than H = 22.0 (about in diameter with assumed albedo of 14%), are not considered PHAs. History of human awareness of NEOs The first near-Earth objects to be observed by humans were comets. Their extraterrestrial nature was recognised and confirmed only after Tycho Brahe tried to measure the distance of a comet through its parallax in 1577 and the lower limit he obtained was well above the Earth diameter; the periodicity of some comets was first recognised in 1705, when Edmond Halley published his orbit calculations for the returning object now known as Halley's Comet. The 1758–1759 return of Halley's Comet was the first comet appearance predicted. The extraterrestrial origin of meteors (shooting stars) was only recognised on the basis of the analysis of the 1833 Leonid meteor shower by astronomer Denison Olmsted. The 33-year period of the Leonids led astronomers to suspect that they originate from a comet that would today be classified as an NEO, which was confirmed in 1867, when astronomers found that the newly discovered comet 55P/Tempel–Tuttle has the same orbit as the Leonids. The first near-Earth asteroid to be discovered was 433 Eros in 1898. The asteroid was subject to several extensive observation campaigns, primarily because measurements of its orbit enabled a precise determination of the then imperfectly known distance of the Earth from the Sun. Encounters with Earth If a near-Earth object is near the part of its orbit closest to Earth's at the same time Earth is at the part of its orbit closest to the near-Earth object's orbit, the object has a close approach, or, if the orbits intersect, could even impact the Earth or its atmosphere. Close approaches , only 23 comets have been observed to pass within of Earth, including 10 which are or have been short-period comets. Two of these near-Earth comets, Halley's Comet and 73P/Schwassmann–Wachmann, have been observed during multiple close approaches. The closest observed approach was 0.0151 AU (5.88 LD) for Lexell's Comet on July 1, 1770. After an orbit change due to a close approach of Jupiter in 1779, this object is no longer an NEC. The closest approach ever observed for a current short-period NEC is 0.0229 AU (8.92 LD) for Comet Tempel–Tuttle in 1366. 
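The PHA definition above is mechanical enough to express directly in code. The Python sketch below is an illustration of the stated thresholds, not an official NASA/JPL tool; the H-to-diameter conversion uses the standard asteroid photometric relation with the 14% albedo assumed in the text.

```python
import math

def estimated_diameter_km(H: float, albedo: float = 0.14) -> float:
    """Standard conversion from absolute magnitude H to diameter:
    D = (1329 km / sqrt(albedo)) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5)

def is_potentially_hazardous(moid_au: float, H: float) -> bool:
    """PHA test as defined in the text: Earth MOID <= 0.05 au and
    absolute magnitude H <= 22.0 (brighter objects have smaller H)."""
    return moid_au <= 0.05 and H <= 22.0

# H = 22.0 at 14% albedo corresponds to roughly 140 m, matching the
# size threshold quoted for potentially hazardous objects:
print(round(estimated_diameter_km(22.0) * 1000))       # ~141 metres
print(is_potentially_hazardous(moid_au=0.04, H=21.5))  # True
```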
Orbital calculations show that P/1999 J6 (SOHO), a faint sungrazing comet and confirmed short-period NEC observed only during its close approaches to the Sun, passed Earth undetected at a distance of 0.0120 AU (4.65 LD) on June 12, 1999. In 1937, asteroid 69230 Hermes was discovered when it passed the Earth at twice the distance of the Moon. On June 14, 1968, the diameter asteroid 1566 Icarus passed Earth at a distance of , or 16.5 times the distance of the Moon. During this approach, Icarus became the first minor planet to be observed using radar. This was the first close approach predicted years in advance, since Icarus had been discovered in 1949. The first near-Earth asteroid known to have passed Earth closer than the distance of the Moon was , a body which passed at a distance of . By the 2010s, each year, several mostly small NEOs were observed passing Earth closer than the distance of the Moon. As astronomers became able to discover ever smaller and fainter and ever more numerous near-Earth objects, they began to routinely observe and catalogue close approaches. , the closest approach without atmospheric or ground impact ever detected was an encounter with asteroid on November 14, 2020. The NEA was detected receding from Earth; calculations showed that on the day before, it had a close approach at about from the Earth's centre, or about above its surface. On November 8, 2011, asteroid , relatively large at about in diameter, passed within (0.845 lunar distances) of Earth. On February 15, 2013, the asteroid 367943 Duende () passed approximately above the surface of Earth, closer than satellites in geosynchronous orbit. The asteroid was not visible to the unaided eye. This was the first sub-lunar close passage of an object discovered during a previous passage, and was thus the first to be predicted well in advance. Earth-grazers Some small asteroids that enter the upper atmosphere of Earth at a shallow angle remain intact and leave the atmosphere again, continuing on a solar orbit. During the passage through the atmosphere, due to the burning of its surface, such an object can be observed as an Earth-grazing fireball. On August 10, 1972, a meteor that became known as the 1972 Great Daylight Fireball was witnessed by many people and even filmed as it moved north over the Rocky Mountains from the U.S. Southwest to Canada. It passed within of the Earth's surface. On October 13, 1990, Earth-grazing meteoroid EN131090 was observed above Czechoslovakia and Poland, moving at along a trajectory from south to north. The closest approach to the Earth was above the surface. It was captured by two all-sky cameras of the European Fireball Network, which for the first time enabled geometric calculations of the orbit of such a body. Impacts When a near-Earth object impacts Earth, objects up to a few tens of metres across ordinarily explode in the upper atmosphere (usually harmlessly), with most or all of the solids vaporized and only small amounts of meteorites arriving to the Earth surface. Larger objects, by contrast, hit the water surface, forming tsunami waves, or the solid surface, forming impact craters. The frequency of impacts of objects of various sizes is estimated on the basis of orbit simulations of NEO populations, the frequency of impact craters on the Earth and the Moon, and the frequency of close encounters. 
The study of impact craters indicates that impact frequency has been more or less steady for the past 3.5 billion years, which requires a steady replenishment of the NEO population from the asteroid main belt. One impact model based on widely accepted NEO population models estimates the average time between the impact of two stony asteroids with a diameter of at least at about one year; for asteroids across (which impact with as much energy as the atomic bomb dropped on Hiroshima, approximately 15 kilotonnes of TNT) at five years, for asteroids across (an impact energy of 10 megatons, comparable to the Tunguska event in 1908) at 1,300 years, for asteroids across at 440 thousand years, and for asteroids across at 18 million years. Some other models estimate similar impact frequencies, while others calculate higher frequencies. For Tunguska-sized (10 megaton) impacts, the estimates range from one event every 2,000–3,000 years to one event every 300 years. The second-largest observed event after the Tunguska meteor was a 1.1 megaton air blast in 1963 near the Prince Edward Islands between South Africa and Antarctica. However, this event was detected only by infrasound sensors, which led to speculation that this may have been a nuclear test. The third-largest, but by far the best-observed, impact was the Chelyabinsk meteor of 15 February 2013. A previously unknown asteroid exploded above this Russian city with an equivalent blast yield of 400–500 kilotons. The calculated orbit of the pre-impact asteroid is similar to that of Apollo asteroid , making the latter the meteor's possible parent body. On October 7, 2008, 20 hours after it was first observed and 11 hours after its trajectory had been calculated and announced, asteroid blew up above the Nubian Desert in Sudan. It was the first time that an asteroid was observed and its impact was predicted prior to its entry into the atmosphere as a meteor. 10.7 kg of meteorites were recovered after the impact. , eleven impacts have been predicted, all of them small bodies that produced meteor explosions, with some impacts in remote areas only detected by the Comprehensive Nuclear-Test-Ban Treaty Organization's International Monitoring System (IMS), a network of infrasound sensors designed to detect the detonation of nuclear devices. Asteroid impact prediction remains in its infancy and successfully predicted asteroid impacts are rare. The vast majority of impacts recorded by IMS are not predicted. Observed impacts are not restricted to the surface and atmosphere of Earth. Dust-sized NEOs have impacted man-made spacecraft, including the space probe Long Duration Exposure Facility, which collected interplanetary dust in low Earth orbit for six years from 1984. Impacts on the Moon can be observed as flashes of light with a typical duration of a fraction of a second. The first lunar impacts were recorded during the 1999 Leonid storm. Subsequently, several continuous monitoring programs were launched. A lunar impact observed on September 11, 2013, lasted 8 seconds; likely caused by an object in diameter, it created a new crater across, the largest ever observed . Risk Through human history, the risk posed by near-Earth objects has been viewed in light of both the culture and the technology of human society: humans have associated NEOs with changing risks, based on religious, philosophical or scientific views, as well as on humanity's technological or economical capability to deal with such risks.
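The energy figures quoted in these frequency estimates follow from simple kinetic-energy arithmetic. The sketch below assumes round-number values (a stony density of 3000 kg/m³ and an entry speed of 17 km/s) that are not taken from the article:

```python
import math

def impact_energy_kt(diameter_m: float,
                     density_kg_m3: float = 3000.0,
                     speed_m_s: float = 17_000.0) -> float:
    """Kinetic energy of a spherical impactor, in kilotonnes of TNT
    (1 kt TNT = 4.184e12 J). Density and speed are assumed,
    round-number values for a stony asteroid."""
    mass = density_kg_m3 * math.pi / 6.0 * diameter_m ** 3
    return 0.5 * mass * speed_m_s ** 2 / 4.184e12

# A ~20 m stony body yields energy in the hundreds of kilotonnes,
# the same order as the Chelyabinsk airburst described above:
print(round(impact_energy_kt(20)))   # ~434 kt
print(round(impact_energy_kt(50)))   # ~6,800 kt, Tunguska-order
```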
Thus, NEOs have been seen as omens of natural disasters or wars; harmless spectacles in an unchanging universe; the source of era-changing cataclysms or potentially poisonous fumes (during Earth's passage through the tail of Halley's Comet in 1910); and finally as a possible cause of a crater-forming impact that could even cause extinction of humans and other life on Earth. The potential of catastrophic impacts by near-Earth comets was recognised as soon as the first orbit calculations provided an understanding of their orbits: in 1694, Edmond Halley presented a theory that Noah's flood in the Bible was caused by a comet impact. Human perception of near-Earth asteroids as benign objects of fascination or killer objects with high risk to human society has ebbed and flowed during the short time that NEAs have been scientifically observed. The 1937 close approach of Hermes and the 1968 close approach of Icarus first raised impact concerns among scientists. Icarus earned significant public attention due to alarmist news reports, while Hermes was considered a threat because it was lost after its discovery; thus its orbit and potential for collision with Earth were not known precisely. Hermes was only re-discovered in 2003, and it is now known to be no threat for at least the next century. Since the 1980s, scientists have recognised the threat of impacts that create craters much bigger than the impacting bodies and have indirect effects on an even wider area, with mounting evidence for the theory that the Cretaceous–Paleogene extinction event (in which the non-avian dinosaurs died out) 65 million years ago was caused by a large asteroid impact. On March 23, 1989, the diameter Apollo asteroid 4581 Asclepius (1989 FC) missed the Earth by . If the asteroid had impacted, it would have created the largest explosion in recorded history, equivalent to 20,000 megatons of TNT. It attracted widespread attention because it was discovered only after the closest approach. From the 1990s, a typical frame of reference in searches for NEOs has been the scientific concept of risk. The awareness of the wider public of the impact risk rose after the observation of the impact of the fragments of Comet Shoemaker–Levy 9 into Jupiter in July 1994. In March 1998, early orbit calculations for the recently discovered asteroid showed a potential 2028 close approach to the Earth, well within the orbit of the Moon, but with a large error margin allowing for a direct hit. Further data allowed a revision of the 2028 approach distance to , with no chance of collision. By that time, inaccurate reports of a potential impact had caused a media storm. In 1998, the movies Deep Impact and Armageddon popularised the notion that near-Earth objects could cause catastrophic impacts. Also at that time, a conspiracy theory arose about a supposed 2003 impact of a planet called Nibiru with Earth, which persisted on the internet as the predicted impact date was moved to 2012 and then 2017. Risk scales There are two schemes for the scientific classification of impact hazards from NEOs, as a way to communicate the risk of impacts to the general public. The simple Torino scale was established at an IAU workshop in Torino in June 1999, in the wake of the public confusion about the impact risk of .
It rates the risk of an impact in the next 100 years according to impact energy and impact probability, using integer numbers between 0 and 10: ratings of 0 and 1 are of no concern to astronomers or the public; ratings of 2 to 4 are used for events of increasing magnitude of concern to astronomers trying to make more precise orbit calculations, but not yet a concern for the public; ratings of 5 to 7 are meant for possible impacts of increasing magnitude which are not certain but warrant public concern and governmental contingency planning; and ratings of 8 to 10 are used for certain collisions of increasing severity. The more complex Palermo Technical Impact Hazard Scale, established in 2002, compares the likelihood of an impact at a certain date to the probable number of impacts of a similar or greater energy up until the possible impact, and takes the logarithm of this ratio. Thus, a Palermo scale rating can be any positive or negative real number, and risks of any concern are indicated by values above zero. Unlike the Torino scale, the Palermo scale is not sensitive to newly discovered small objects whose orbits are known only with low confidence. Highly rated risks The National Aeronautics and Space Administration (NASA) maintains an automated system to evaluate the threat from known NEOs over the next 100 years, which generates the continuously updated Sentry Risk Table. All or nearly all of the objects are highly likely to drop off the list eventually as more observations come in, reducing the uncertainties and enabling more accurate orbital predictions. A similar table is maintained on NEODyS (Near Earth Objects Dynamic Site) by the European Space Agency (ESA). In March 2002, became the first asteroid with a temporarily positive rating on the Torino Scale, with about a 1 in 9,300 chance of an impact in 2049. Additional observations reduced the estimated risk to zero, and the asteroid was removed from the Sentry Risk Table in April 2002. It is now known that within the next two centuries, will pass the Earth at a safe closest distance (perigee) of on August 31, 2080. Asteroid was lost after its 1950 discovery, since its observations over just 17 days were insufficient to precisely determine its orbit. It was rediscovered in December 2000, prior to a close approach the next year, when new observations, including radar imaging, allowed much more precise orbit calculations. It has a diameter of about a kilometer (0.6 miles), and an impact would therefore be globally catastrophic. Although this asteroid will not strike Earth for at least 800 years, and thus has no Torino scale rating, it was added to the Sentry list in April 2002 as the first object with a Palermo scale value greater than zero. The then-calculated maximum impact chance of 1 in 300, with its corresponding Palermo scale value of +0.17, was roughly 50% greater than the background risk of impact by all similarly large objects until 2880. After additional radar and optical observations, the probability of this impact is assessed at 1 in 2,600. The corresponding Palermo scale value of −0.93 is the highest of all objects on the Sentry Risk Table. On December 24, 2004, asteroid 99942 Apophis (at the time still unnamed and therefore known only by its provisional designation ) was assigned a 4 on the Torino scale, the highest rating given to date, as the information available at the time translated to a 1.6% chance of Earth impact in April 2029.
As observations were collected over the next three days, the calculated chance of impact first increased to as high as 2.7%, then fell back to zero, as the shrinking uncertainty zone for this close approach no longer included the Earth. There was at that time still some uncertainty about potential impacts during later close approaches. However, as the precision of the orbital calculations improved with additional observations, the risk of an impact at any date was completely eliminated, and Apophis was removed from the Sentry Risk Table in February 2021. In February 2006, , having a diameter of around 300 metres, was assigned a Torino Scale rating of 2 due to a close encounter predicted for May 4, 2102. After additional observations allowed increasingly precise predictions, the Torino rating was lowered first to 1 in May 2006, then to 0 in October 2006, and the asteroid was removed from the Sentry Risk Table entirely in February 2008. In 2021, was listed with the highest chance of impacting Earth, at 1 in 22 on September 5, 2095. However, at only across, the asteroid is much too small to be considered a potentially hazardous asteroid and it poses no serious threat: the possible 2095 impact therefore rated only −3.32 on the Palermo Scale. Observations during the August 2022 close approach were expected to ascertain whether the asteroid will impact or miss Earth in 2095. , the risk of the 2095 impact was put at 1 in 10, still the highest, with a Palermo Scale rating of −2.97. Projects to minimize the threat A year before the 1968 close approach of asteroid Icarus, Massachusetts Institute of Technology students launched Project Icarus, devising a plan to deflect the asteroid with rockets in case it was found to be on a collision course with Earth. Project Icarus received wide media coverage, and inspired the 1979 disaster movie Meteor, in which the US and the USSR join forces to blow up an Earth-bound fragment of an asteroid hit by a comet. The first astronomical program dedicated to the discovery of near-Earth asteroids was the Palomar Planet-Crossing Asteroid Survey. The link to the impact hazard, the need for dedicated survey telescopes, and options for heading off an eventual impact were first discussed at a 1981 interdisciplinary conference in Snowmass, Colorado. Plans for a more comprehensive survey, named the Spaceguard Survey, were developed by NASA from 1992, under a mandate from the United States Congress. To promote the survey on an international level, the International Astronomical Union (IAU) organised a workshop at Vulcano, Italy in 1995, and set up The Spaceguard Foundation, also in Italy, a year later. In 1998, the United States Congress gave NASA a mandate to detect 90% of near-Earth asteroids over 1 km in diameter (those that threaten global devastation) by 2008. Several surveys have undertaken "Spaceguard" activities (an umbrella term), including Lincoln Near-Earth Asteroid Research (LINEAR), Spacewatch, Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth-Object Search (LONEOS), Catalina Sky Survey (CSS), Campo Imperatore Near-Earth Object Survey (CINEOS), Japanese Spaceguard Association, Asiago-DLR Asteroid Survey (ADAS) and Near-Earth Object WISE (NEOWISE). As a result, the ratio of known to estimated total near-Earth asteroids larger than 1 km in diameter rose from about 20% in 1998 to 65% in 2004, 80% in 2006, and 93% in 2011. The original Spaceguard goal has thus been met, only three years late. , 867 NEAs larger than 1 km have been discovered.
In 2005, the original US Spaceguard mandate was extended by the George E. Brown, Jr. Near-Earth Object Survey Act, which calls for NASA to detect 90% of NEOs with diameters of or greater by 2020. In September 2020, it was estimated that about half of these had been found, but objects of this size hit the Earth only about once in 30,000 years. In December 2023, using a lower absolute brightness estimate for smaller asteroids, the ratio of discovered NEOs with diameters of or greater was estimated at 38%. The Chile-based Vera C. Rubin Observatory, which will survey the southern sky for transient events from 2025, is expected to increase the number of known asteroids by a factor of 10 to 100 and to increase the ratio of known NEOs with diameters of or greater to at least 60%, while the NEO Surveyor satellite, to be launched in 2027, is expected to push the ratio to 76% during its 5-year mission. In January 2016, NASA announced the creation of the Planetary Defense Coordination Office (PDCO) to track NEOs larger than about in diameter and to coordinate an effective threat response and mitigation effort. Survey programs aim to identify threats years in advance, giving humanity time to prepare a space mission to avert the threat. The ATLAS project, by contrast, aims to find impacting asteroids shortly before impact, much too late for deflection maneuvers but still in time to evacuate and otherwise prepare the affected Earth region. Another project, the Zwicky Transient Facility (ZTF), which surveys for objects that change their brightness rapidly, also detects asteroids passing close to Earth. Scientists involved in NEO research have also considered options for actively averting the threat if an object is found to be on a collision course with Earth. All viable methods aim to deflect rather than destroy the threatening NEO, because the fragments of a destroyed NEO would still cause widespread destruction. Deflection, which means a change in the object's orbit months to years prior to the predicted impact, also requires orders of magnitude less energy. Number and classification When an NEO is detected, its positions and brightness are, like those of all other small Solar System bodies, submitted to the IAU's Minor Planet Center (MPC) for cataloging. The MPC maintains separate lists of confirmed and potential NEOs, as well as a separate list of potentially hazardous asteroids (PHAs). NEOs are also catalogued by two separate units of the Jet Propulsion Laboratory (JPL) of NASA: the Center for Near Earth Object Studies (CNEOS) and the Solar System Dynamics Group. CNEOS's catalog of near-Earth objects includes the approach distances of asteroids and comets. NEOs are also catalogued by a unit of ESA, the Near-Earth Objects Coordination Centre (NEOCC). Near-Earth objects are classified as meteoroids, asteroids, or comets depending on size, composition, and orbit. Those which are asteroids can additionally be members of an asteroid family, and comets create meteoroid streams that can generate meteor showers. According to statistics maintained by CNEOS, 37,378 NEOs have been discovered. Only 123 (0.33%) of them are comets, whilst 37,255 (99.67%) are asteroids; 2,465 of those NEOs are classified as potentially hazardous asteroids. , 1,872 NEAs appear on the Sentry impact risk page at the NASA website.
All but 106 of these NEAs are less than 50 meters in diameter, and only one recently discovered object is placed even in the "green zone" (Torino Scale 1), meaning that none warrant the attention of the general public or even the special attention of astronomers. Observational biases The main problem with estimating the number of NEOs is that the probability of detecting one is influenced by a number of aspects of the NEO, starting naturally with its size but also including the characteristics of its orbit and the reflectivity of its surface. Objects that are easy to detect are counted more often, and these observational biases need to be compensated for when trying to calculate the number of bodies in a population from the list of its detected members. Bigger asteroids reflect more light, and the two biggest near-Earth objects, 433 Eros and 1036 Ganymed, were naturally also among the first to be detected. 1036 Ganymed is about in diameter and 433 Eros is about in diameter. Meanwhile, objects that pass closer to Earth appear brighter, introducing a bias that favours the discovery of those NEOs of a given size that approach Earth more closely. Earth-based astronomy requires dark skies and hence nighttime observations, and even space-based telescopes avoid looking in directions close to the Sun, so most NEO surveys are blind to objects passing Earth on its sunward side. This bias is further enhanced by the effect of phase: the smaller the angle between the asteroid and the Sun as seen by the observer, the smaller the illuminated fraction of the asteroid's observed side. Another bias results from the different surface brightness, or albedo, of the objects, which can make a large but low-albedo object appear as bright as a small but high-albedo object. In addition, the reflectivity of asteroid surfaces is not uniform but increases towards the direction opposite the illumination, resulting in the phenomenon of phase darkening, which makes asteroids even brighter when the Earth is close to the axis of sunlight. An asteroid's observed albedo usually has a strong peak, or opposition surge, very close to the direction opposite the Sun. Different surfaces display different levels of phase darkening, and research has shown that, on top of the albedo bias, this favours the discovery of silicon-rich S-type asteroids over carbon-rich C-type asteroids, for example. As a result of these observational biases, in Earth-based surveys NEOs tended to be discovered when they were in opposition, that is, opposite the Sun as viewed from the Earth. The most practical way around many of these biases is to use thermal infrared telescopes in space, which observe asteroids' thermal emission instead of the visible light they reflect, with a sensitivity that is almost independent of the illumination. In addition, space-based telescopes in an orbit around the Sun in the shadow of the Earth can make observations as close as 45 degrees to the direction of the Sun. Further observational biases favour objects that have more frequent encounters with the Earth, which makes the detection of Atens more likely than that of Apollos, and objects that move slower when encountering the Earth, which makes the detection of NEAs with low eccentricities more likely. Such observational biases must be identified and quantified so that studies of asteroid populations can take the known observational selection biases into account and make a more accurate assessment of the true population.
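The strength of the distance bias can be made concrete with the standard relation between an asteroid's absolute magnitude H and its apparent magnitude m, namely m ≈ H + 5 log10(r·Δ), where r is the asteroid's distance from the Sun and Δ its distance from the observer, both in astronomical units; the phase-angle correction discussed above is omitted. A minimal Python sketch, with purely illustrative distances:

import math

def apparent_magnitude(H, r_au, delta_au):
    # Apparent magnitude of an asteroid with absolute magnitude H at
    # heliocentric distance r_au and observer distance delta_au (in AU),
    # ignoring the phase-angle term; larger m means fainter.
    return H + 5 * math.log10(r_au * delta_au)

H = 17.75  # roughly a 1-km object for the 14% albedo assumed below
print(apparent_magnitude(H, 1.05, 0.05))  # close flyby: m ~ 11.4, bright
print(apparent_magnitude(H, 2.50, 1.50))  # near the main belt: m ~ 20.6, faint

The roughly nine-magnitude difference between the two geometries corresponds to a brightness factor of several thousand, which is why surveys preferentially find objects during close approaches or at opposition.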
In the year 2000, taking into account all known observational biases, it was estimated that approximately 900 near-Earth asteroids of at least kilometer size exist, or, more precisely, asteroids with an absolute magnitude brighter than 17.75. Near-Earth asteroids These are asteroids in a near-Earth orbit without the tail or coma of a comet. , 37,255 near-Earth asteroids (NEAs) are known, 2,465 of which are both sufficiently large and may come sufficiently close to Earth to be classified as potentially hazardous. NEAs survive in their orbits for just a few million years. They are eventually eliminated by planetary perturbations, which cause ejection from the Solar System or a collision with the Sun, a planet, or another celestial body. With orbital lifetimes short compared to the age of the Solar System, new asteroids must constantly be moved into near-Earth orbits to explain the observed population. The accepted explanation is that main-belt asteroids are moved into the inner Solar System through orbital resonances with Jupiter: the resonant interaction with Jupiter perturbs an asteroid's orbit until it comes into the inner Solar System. The asteroid belt has gaps, known as Kirkwood gaps, where these resonances occur, because the asteroids originally in these resonances have been moved onto other orbits. New asteroids migrate into these resonances due to the Yarkovsky effect, which provides a continuing supply of near-Earth asteroids. Compared to the entire mass of the asteroid belt, the mass loss necessary to sustain the NEA population is relatively small, totalling less than 6% over the past 3.5 billion years. The composition of near-Earth asteroids is comparable to that of asteroids from the asteroid belt, reflecting a variety of asteroid spectral types. A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter. Many asteroids have natural satellites (minor-planet moons). , 104 NEAs were known to have at least one moon, including five known to have two moons. The asteroid 3122 Florence, one of the largest PHAs with a diameter of , has two moons measuring across, which were discovered by radar imaging during the asteroid's 2017 approach to Earth. In May 2022, Tracklet-less Heliocentric Orbit Recovery (THOR), an algorithm developed by University of Washington researchers to discover asteroids in the Solar System, was announced as a success, with the International Astronomical Union's Minor Planet Center confirming the first candidate asteroids identified by the algorithm. Size distribution While the size of a very small fraction of these asteroids is known to better than 1%, from radar observations, from images of the asteroid surface, or from stellar occultations, the diameter of the vast majority of near-Earth asteroids has only been estimated on the basis of their brightness and a representative asteroid surface reflectivity, or albedo, which is commonly assumed to be 14%. Such indirect size estimates are uncertain by over a factor of 2 for individual asteroids, since asteroid albedos can range at least as low as 5% and as high as 30%.
This makes the volume of those asteroids uncertain by a factor of 8, and their mass by at least as much, since the assumed density also has its own uncertainty. Using this crude method, an absolute magnitude of 17.75 roughly corresponds to a diameter of and an absolute magnitude of 22.0 to a diameter of . Diameters of intermediate precision, better than those from an assumed albedo but not nearly as precise as good direct measurements, can be obtained from the combination of reflected light and thermal infrared emission, using a thermal model of the asteroid to estimate both its diameter and its albedo. The reliability of this method, as applied by the Wide-field Infrared Survey Explorer and NEOWISE missions, has been the subject of a dispute between experts: two independent analyses published in 2018 reached opposite conclusions, one criticising the WISE method and another giving results consistent with it. A 2023 study re-evaluated the relationship of brightness, albedo and diameter. For many objects with a diameter larger than 1 km, brightness estimates were reduced slightly; meanwhile, based on new albedo estimates for smaller objects, the study found that an absolute magnitude of best corresponds to a diameter of 140 m. In 2000, NASA reduced its estimate of the number of existing near-Earth asteroids over one kilometer in diameter, or more exactly of those brighter than an absolute magnitude of 17.75, from 1,000–2,000 to 500–1,000. Shortly thereafter, the LINEAR survey provided an alternative estimate of . In 2011, on the basis of NEOWISE observations, the estimated number of one-kilometer NEAs was narrowed to (of which 93% had been discovered at the time), while the number of NEAs larger than 140 meters across was estimated at . The NEOWISE estimate differed from other estimates primarily in assuming a slightly lower average asteroid albedo, which produces larger estimated diameters for the same asteroid brightness. This resulted in 911 then-known asteroids at least 1 km across, as opposed to the 830 then listed by CNEOS from the same inputs but assuming a slightly higher albedo. In 2017, two studies using an improved statistical method slightly reduced the estimated number of NEAs brighter than absolute magnitude 17.75 (approximately over one kilometer in diameter) to . The estimated number of near-Earth asteroids brighter than an absolute magnitude of 22.0 (approximately over 140 m across) rose to , double the WISE estimate, of which about a fourth were known at the time. The number of asteroids brighter than , which corresponds to about in diameter, is estimated at —of which about 1.3 percent had been discovered by February 2016; the number of asteroids brighter than (larger than ) is estimated at million—of which about 0.003 percent had been discovered by February 2016. A September 2021 study revised the estimated number of NEAs with a diameter larger than 1 km (using both WISE data and an absolute magnitude brighter than 17.75 as a proxy) slightly upwards to , of which 911 had been discovered at the time, but reduced the estimated number of asteroids brighter than an absolute magnitude of 22.0 (as a proxy for a diameter of 140 m) to under 20,000, of which about half had been discovered at the time.
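The crude brightness-to-size conversion used throughout this section follows the standard relation D = (1329 km) × 10^(−H/5) / √p_V, where H is the absolute magnitude and p_V the assumed geometric albedo. A minimal sketch, which reproduces the correspondences quoted above:

import math

def diameter_km(H, albedo=0.14):
    # Estimated diameter in km from absolute magnitude H, assuming a
    # geometric albedo; 14% is the representative value quoted above.
    return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5)

print(diameter_km(17.75))              # ~1.0 km
print(diameter_km(22.0))               # ~0.14 km, i.e. 140 m
print(diameter_km(22.0, albedo=0.05))  # ~0.24 km: darker surface, larger size

The last line illustrates the factor-of-2 size uncertainty mentioned above: the same brightness can correspond to very different diameters depending on the true albedo.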
The 2023 study that re-evaluated the relationship of average absolute brightness, albedo and diameter confirmed the ratios of discovered to estimated total asteroids of different sizes in the 2021 study, but, by changing the proxy for a diameter of 140 m to , it estimated that only about 44% of the estimated 35,000 asteroids larger than that had been discovered by the end of 2022. , NEO catalogues still use as the proxy for a diameter of 140 m. , and using diameters mostly estimated crudely from a measured absolute magnitude and an assumed albedo, 867 NEAs listed by CNEOS, including 152 PHAs, measure at least 1 km in diameter, and 11,167 known NEAs, including 2,465 PHAs, are larger than 140 m in diameter. The smallest known near-Earth asteroid is , with an absolute magnitude of 34.34, corresponding to an estimated diameter of about . The largest such object is 1036 Ganymed, with an absolute magnitude of 9.18 and directly measured irregular dimensions equivalent to a diameter of about . Orbital classification Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q):
The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU); that is, Q < 0.983 AU, which implies that the asteroid's semi-major axis is also less than 0.983 AU. This group includes asteroids on orbits that never get close to Earth, including the sub-group of ꞌAylóꞌchaxnims, which orbit the Sun entirely within the orbit of Venus and which include the hypothetical sub-group of Vulcanoids, with orbits entirely within the orbit of Mercury.
The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit: mathematically, a < 1.0 AU and Q > 0.983 AU (0.983 AU is Earth's perihelion distance).
The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit: mathematically, a > 1.0 AU and q < 1.017 AU (1.017 AU is Earth's aphelion distance).
The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU); since Amor asteroids are also near-Earth objects, q < 1.3 AU, so in summary 1.017 AU < q < 1.3 AU (which implies that the asteroid's semi-major axis is also larger than 1.017 AU). Some Amor asteroid orbits cross the orbit of Mars.
Some authors define the Atens differently: they define them as all asteroids with a semi-major axis of less than 1 AU; that is, they consider the Atiras to be part of the Atens. Historically, until 1998, there were no known or suspected Atiras, so the distinction was not necessary. Atiras and Amors do not cross the Earth's orbit and are not immediate impact threats, but their orbits may change to become Earth-crossing orbits in the future. , 34 Atiras, 2,952 Atens, 21,132 Apollos and 13,137 Amors have been discovered and cataloged. Co-orbital asteroids Most NEAs have orbits that are significantly more eccentric than those of the Earth and the other major planets, and their orbital planes can be tilted several degrees relative to Earth's. NEAs whose orbits do resemble the Earth's in eccentricity, inclination and semi-major axis are grouped as Arjuna asteroids. Within this group are NEAs that have the same orbital period as the Earth, in a co-orbital configuration, which corresponds to an orbital resonance at a ratio of 1:1.
All co-orbital asteroids have special orbits that are relatively stable and, paradoxically, can prevent them from getting close to Earth:
Trojans: Near the orbit of a planet, there are five gravitational equilibrium points, the Lagrangian points, at which an asteroid orbits the Sun in fixed formation with the planet. Two of these, 60 degrees ahead of and behind the planet along its orbit (designated L4 and L5 respectively), are stable; that is, an asteroid near these points would stay there for thousands or even millions of years in spite of light perturbations by other planets and by non-gravitational forces. Trojans circle around L4 or L5 on paths resembling a tadpole. , Earth has two confirmed Trojans: and , both circling Earth's L4 point.
Horseshoe librators: The region of stability around L4 and L5 also includes orbits for co-orbital asteroids that run around both L4 and L5. Relative to the Earth and Sun, the orbit can resemble the circumference of a horseshoe, or may consist of annual loops that wander back and forth (librate) in a horseshoe-shaped area. In both cases, the Sun is at the horseshoe's center of gravity, Earth is in the gap of the horseshoe, and L4 and L5 are inside the ends of the horseshoe. Among Earth's known co-orbitals, those with the most stable orbits as well as those with the least stable orbits are horseshoe librators. , at least 13 horseshoe librators of Earth have been discovered. The most-studied and, at about , the largest is 3753 Cruithne, which travels along bean-shaped annual loops and completes its horseshoe libration cycle every 770–780 years. is an asteroid on a relatively stable circumference-of-a-horseshoe orbit, with a horseshoe libration period of about 350 years.
Quasi-satellites: Quasi-satellites are co-orbital asteroids on a normal elliptical orbit with a higher eccentricity than Earth's, which they travel in a way synchronised with Earth's motion. Since such an asteroid orbits the Sun more slowly than Earth when farther away and faster than Earth when closer to the Sun, in a rotating frame of reference fixed to the Sun and the Earth the quasi-satellite appears to orbit Earth in a retrograde direction once a year, even though it is not gravitationally bound to it. , six asteroids were known to be quasi-satellites of Earth. 469219 Kamoʻoalewa is Earth's closest quasi-satellite, in an orbit that has been stable for almost a century; this asteroid is thought to be a piece of the Moon ejected during an impact. Orbit calculations show that almost all quasi-satellites and many horseshoe librators repeatedly transfer between horseshoe and quasi-satellite orbits. One of these objects, , was observed during its transition from a quasi-satellite orbit to a horseshoe orbit in 2006; it is expected to transfer back to a quasi-satellite orbit sometime around the year 2066. A quasi-satellite discovered in 2023, but then found in old photographs going back to 2012, , has an orbit that is stable for about 4,000 years, from 100 BC to AD 3700.
Asteroids on compound orbits: Orbital calculations show that some co-orbital asteroids transition between horseshoe and quasi-satellite orbits during every horseshoe or quasi-satellite cycle. Theoretically, similar continuous transitions between Trojan and horseshoe orbits are possible, too. , at least 20 Earth co-orbital NEAs are thought to be in the horseshoe-like phase of compound orbits.
Temporary satellites: NEAs can also transfer between solar orbits and distant Earth orbits, becoming gravitationally bound temporary satellites. According to simulations, temporary satellites are typically captured when they pass Earth's L1 or L2 Lagrangian points at a time when Earth is at the point of its orbit closest to or farthest from the Sun; they complete a couple of orbits around Earth and then return to a heliocentric orbit due to perturbations from the Moon. Strictly speaking, temporary satellites are not co-orbital asteroids, and they can have orbits of the broader Arjuna type before and after capture by Earth, but simulations show that they can be captured from, or transfer to, horseshoe orbits. The simulations also indicate that Earth typically has at least one temporary satellite across at any given time, but such objects are too faint to be detected by current surveys. , five temporary satellites have been observed: , , , and . Calculations for the asteroid showed repeated transitions into temporary satellite orbits, both in the past and during the coming 10,000 years. Near-Earth asteroids also include the co-orbitals of Venus. , all known co-orbitals of Venus have orbits with high eccentricity that also cross Earth's orbit. Meteoroids In 1961, the IAU defined meteoroids as a class of solid interplanetary objects distinct from asteroids by their considerably smaller size. This definition was useful at the time because, with the exception of the Tunguska event, all historically observed meteors were produced by objects significantly smaller than the smallest asteroids then observable by telescopes. As the distinction began to blur with the discovery of ever smaller asteroids and a greater variety of observed NEO impacts, revised definitions with size limits were proposed from the 1990s. In April 2017, the IAU adopted a revised definition that generally limits meteoroids to a size between 30 μm and 1 m in diameter, but permits the use of the term for any object of any size that caused a meteor, thus leaving the distinction between asteroid and meteoroid blurred. Near-Earth comets Near-Earth comets (NECs) are objects in a near-Earth orbit with a tail or coma made up of dust, gas or ionized particles emitted by a solid nucleus. Comet nuclei are typically less dense than asteroids, but they pass Earth at higher relative speeds, so the impact energy of a comet nucleus is slightly larger than that of a similar-sized asteroid. NECs may pose an additional hazard due to fragmentation: the meteoroid streams which produce meteor showers may include large inactive fragments, effectively NEAs. Although no impact of a comet in Earth's history has been conclusively confirmed, the Tunguska event may have been caused by a fragment of Comet Encke. Comets are commonly divided into short-period and long-period comets. Short-period comets, with an orbital period of less than 200 years, originate in the Kuiper belt, beyond the orbit of Neptune, while long-period comets originate in the Oort Cloud, in the outer reaches of the Solar System. The orbital period distinction is important in evaluating the risk from near-Earth comets because short-period NECs are likely to have been observed during multiple apparitions, and thus their orbits can be determined with some precision, while long-period NECs can be assumed to have been seen for the first and last time when they appeared since the start of precise observations, so their approaches cannot be predicted well in advance.
Since the threat from long-period NECs is estimated to be at most 1% of the threat from NEAs, and long-period comets are very faint and thus difficult to detect at large distances from the Sun, Spaceguard efforts have consistently focused on asteroids and short-period comets. Both NASA's CNEOS and ESA's NEOCC restrict their definition of NECs to short-period comets; , 123 such objects have been discovered. Comet 109P/Swift–Tuttle, which is also the source of the Perseid meteor shower every year in August, has a roughly 130-year orbit that passes close to the Earth. During the comet's September 1992 recovery, when only the two previous returns in 1862 and 1737 had been identified, calculations showed that the comet would pass close to Earth during its next return in 2126, with an impact within the range of uncertainty. By 1993, even earlier returns (back to at least 188 AD) had been identified, and the longer observation arc eliminated the impact risk. The comet will pass Earth in 2126 at a distance of 23 million kilometers, and in 3044 it is expected to pass Earth at less than 1.6 million kilometers. Artificial near-Earth objects Defunct space probes and the final stages of rockets can end up in near-Earth orbits around the Sun. Examples of such artificial near-Earth objects include a Tesla Roadster used as a dummy payload in a 2018 rocket test and the Kepler space telescope. Some of these objects have been re-discovered by NEO surveys when they returned to Earth's vicinity, and were classified as asteroids before their artificial origin was recognised. An object classified as asteroid 1991 VG was discovered during its transition from a temporary satellite orbit around Earth to a solar orbit in November 1991, and could only be observed until April 1992. Some scientists suspected it to be a returning piece of man-made space debris, but after new observations in 2017 provided better data on its orbit and surface characteristics, a new study found an artificial origin unlikely. In September 2002, astronomers found an object designated J002E3. The object was on a temporary satellite orbit around Earth, leaving for a solar orbit in June 2003. Calculations showed that it had also been on a solar orbit before 2002, but was close to Earth in 1971. J002E3 was identified as the third stage of the Saturn V rocket that carried Apollo 12 to the Moon. In 2006, two more apparent temporary satellites were discovered which were suspected of being artificial. One of them was eventually confirmed as an asteroid and classified as the temporary satellite . The other, 6Q0B44E, was confirmed as an artificial object, but its identity is unknown. Another temporary satellite was discovered in 2013 and designated as a suspected asteroid; it was later found to be an artificial object of unknown origin. is no longer listed as an asteroid by the Minor Planet Center. In September 2020, an object detected on an orbit very similar to that of the Earth was temporarily designated . However, orbital calculations and spectral observations confirmed that the object was the Centaur rocket booster from the 1966 launch of the Surveyor 2 uncrewed lunar lander. In some cases, active space probes on solar orbits have been observed by NEO surveys and erroneously catalogued as asteroids before identification. During its 2007 flyby of Earth on its route to a comet, ESA's space probe Rosetta was detected while still unidentified and was classified as asteroid , with an alert issued due to its close approach.
The designation was similarly removed from asteroid catalogues when the observed object was identified with Gaia, ESA's space observatory for astrometry. Exploratory missions Some NEOs are of special interest because the sum total of the changes in orbital speed (delta-v) required to send a spacecraft on a mission to physically explore an NEO – and thus the amount of rocket fuel required for the mission – is lower than for even lunar missions, due to the combination of their low velocity with respect to Earth and their weak gravity. They may present interesting scientific opportunities, both for direct geochemical and astronomical investigation and as potentially economical sources of extraterrestrial materials for human exploitation, which makes them an attractive target for exploration. Missions to NEAs The IAU held a minor planets workshop in Tucson, Arizona, in March 1971. At that point, launching a spacecraft to asteroids was considered premature; the workshop only inspired the first astronomical survey specifically aiming for NEAs. Missions to asteroids were considered again during a workshop at the University of Chicago held by NASA's Office of Space Science in January 1978. Of all the near-Earth asteroids (NEAs) that had been discovered by mid-1977, it was estimated that spacecraft could rendezvous with and return from only about 1 in 10 using less propulsive energy than is necessary to reach Mars. It was recognised that, due to the low surface gravity of all NEAs, moving around on the surface of an NEA would cost very little energy, and thus space probes could gather multiple samples. Overall, it was estimated that about one percent of all NEAs might provide opportunities for human-crewed missions, or no more than about ten of the NEAs known at the time. A five-fold increase in the NEA discovery rate was deemed necessary to make a crewed mission within ten years worthwhile. The first near-Earth asteroid to be visited by a spacecraft was 433 Eros, which NASA's NEAR Shoemaker probe orbited from February 2000 and landed on in February 2001. A second NEA, the long peanut-shaped 25143 Itokawa, was explored from September 2005 to April 2007 by JAXA's Hayabusa mission, which succeeded in taking material samples back to Earth. A third NEA, the long elongated 4179 Toutatis, was explored by CNSA's Chang'e 2 spacecraft during a flyby in December 2012. The Apollo asteroid 162173 Ryugu was explored from June 2018 until November 2019 by JAXA's Hayabusa2 space probe, which returned a sample to Earth. A second sample-return mission, NASA's OSIRIS-REx probe, targeted the Apollo asteroid 101955 Bennu, which, , has the second-highest cumulative Palermo scale rating (−1.40 for several close encounters between 2178 and 2290). On its journey to Bennu, the probe searched unsuccessfully for Earth's Trojan asteroids; it entered orbit around Bennu in December 2018, touched down on its surface in October 2020, and successfully returned samples to Earth three years later. China plans to launch its own sample-return mission, Tianwen-2, in May 2025, targeting Earth quasi-satellite and returning samples to Earth in late 2027. After completing its mission to Bennu, the OSIRIS-REx probe was redirected towards 99942 Apophis, which it is planned to orbit from April 2029.
After completing its exploration of 162173 Ryugu, the mission of the Hayabusa2 space probe was extended to include flybys of the S-type Apollo asteroid 98943 Torifune in July 2026 and the fast-rotating Apollo asteroid in July 2031. In 2025, JAXA plans to launch another probe, DESTINY+, to explore the Apollo asteroid 3200 Phaethon, the parent body of the Geminid meteor shower, during a flyby. Asteroid deflection tests On September 26, 2022, NASA's DART spacecraft reached the Didymos system and impacted the Apollo asteroid's moon Dimorphos, in a test of a method of planetary defense against near-Earth objects. In addition to telescopes on or in orbit around the Earth, the impact was observed by the Italian mini-spacecraft (CubeSat) LICIACube, which had separated from DART 15 days before the impact. The impact shortened the orbital period of Dimorphos around Didymos by 33 minutes, indicating that the moon's momentum change was 3.6 times the momentum of the impacting spacecraft; most of the change was thus due to material ejected from the moon itself. In October 2024, ESA launched the spacecraft Hera, which is to enter orbit around Didymos in December 2026 to study the consequences of the DART impact. China plans to launch its own pair of asteroid deflection and observation probes in 2027, which are to target the Aten asteroid . Space mining From the 2000s, there have been plans for the commercial exploitation of near-Earth asteroids, either through the use of robots or even by sending private commercial astronauts to act as space miners, but few of these plans have been pursued. In April 2012, the company Planetary Resources announced its plans to mine asteroids commercially. In a first phase, the company reviewed data and selected potential targets among NEAs; in a second phase, space probes would be sent to the selected NEAs; and mining spacecraft would be sent in a third phase. Planetary Resources launched two testbed satellites in April 2015 and January 2018, and the first prospecting satellite for the second phase was planned for a 2020 launch, before the company closed and its assets were purchased by ConsenSys Space in 2018. Another American company established with the goal of space mining, AstroForge, plans to launch the probe Odin (formerly Brokkr-2) at the end of February 2025, with the goal of performing a flyby of an as yet undisclosed asteroid to confirm whether it is a metal-rich M-type asteroid, and then to follow up later in 2025 with the probe Vestri, which is to land on the same asteroid. Missions to NECs The first near-Earth comet visited by a space probe was 21P/Giacobini–Zinner in 1985, when the NASA/ESA probe International Cometary Explorer (ICE) passed through its coma. In March 1986, ICE, along with the Soviet probes Vega 1 and Vega 2, the ISAS probes Sakigake and Suisei, and the ESA probe Giotto, flew by the nucleus of Halley's Comet. In 1992, Giotto also visited another NEC, 26P/Grigg–Skjellerup. In November 2010, after completing its primary mission to the non-near-Earth comet Tempel 1, the NASA probe Deep Impact flew by the near-Earth comet 103P/Hartley. In August 2014, the ESA probe Rosetta began orbiting the near-Earth comet 67P/Churyumov–Gerasimenko, and its lander Philae touched down on the comet's surface in November 2014. After the end of its mission, Rosetta was crashed onto the comet's surface in 2016.
See also Asteroid capture Asteroid Day Asteroid Redirect Mission Claimed moons of Earth EURONEAR Interstellar interloper List of Earth-crossing asteroids List of impact craters on Earth NEOShield Orbit@home References External links Center for Near Earth Object Studies (CNEOS) – Jet Propulsion Laboratory, NASA Table of Asteroids Next Closest Approaches to the Earth – Sormano Astronomical Observatory Catalogue of the Solar System Small Bodies Orbital Evolution – Samara State Technical University Minor Planet Center The NEO Confirmation Page Articles containing video clips Planetary defense Space hazards Solar System
Near-Earth object
Astronomy
12,157
39,313,381
https://en.wikipedia.org/wiki/Linear%20transformation%20in%20rotating%20electrical%20machines
Transformation of three-phase electrical quantities into two-phase quantities is a common practice used to simplify the analysis of three-phase electrical circuits. Polyphase AC machines can be represented by an equivalent two-phase model provided that the rotating polyphase windings in the rotor and the stationary polyphase windings in the stator can be expressed as fictitious two-axis coils. The process of replacing one set of variables by another related set of variables is called winding transformation, or simply transformation or linear transformation. The term linear transformation means that the transformation from the old to the new set of variables, and vice versa, is governed by linear equations. The equations relating the old and new variables are called transformation equations and have the following general form: [new variable] = [transformation matrix][old variable] [old variable] = [transformation matrix][new variable] The transformation matrix contains the coefficients that relate the new and old variables. Note that the second transformation matrix in the above general form is the inverse of the first transformation matrix. The transformation matrix should account for power invariance between the two frames of reference; if power invariance is not maintained, then torque must be calculated from the original machine variables only. Benefits of transformation Linear transformation in rotating machines is generally carried out to obtain new sets of equations governing the machine model that are fewer in number and less complex than those of the original machine model. When the machine model is referred to the new frame of reference, performance analysis becomes much simpler and faster. All machine quantities such as voltage, current, power, torque and speed can be solved for in the transformed model in a less laborious way, without losing the original machine properties. The most striking feature of the transformation, which accounts for its popularity, is that the time-varying inductances in the voltage and current equations of the machine are eliminated. Popular transformation techniques The two most widely used transformation methods are the dqo (or qdo, odq, or simply d-q) transformation and the αβγ (or α-β) transformation. In the d-q transformation, the three-phase quantities of the machine in the abc reference frame are referred to the d-q reference frame. The transformation equation has the general form [Fdqo] = [K][Fabc], where K is the transformation matrix; for details, refer to the dqo transformation. The d-q reference frame may be stationary or rotating at a certain angular speed. For details on the abc to αβ transformation, refer to the αβγ transform. Commonly used reference frames Based on the speed of the reference frame, there are four major types of reference frame, listed below; a numerical sketch of the transformation follows the list.
Arbitrary reference frame: the reference frame speed is unspecified (ω); variables are denoted by fdqos or fds, fqs and fos; the transformation matrix is denoted by Ks.
Stationary reference frame: the reference frame speed is zero (ω = 0); variables are denoted by fsdqo or fds, fqs and fos; the transformation matrix is denoted by Kss.
Rotor reference frame: the reference frame speed is equal to the rotor speed (ω = ωr); variables are denoted by frdqo or fdr, fqr and for; the transformation matrix is denoted by Ksr.
Synchronous reference frame: the reference frame speed is equal to the synchronous speed (ω = ωe); variables are denoted by fedqo or fde, fqe and foe; the transformation matrix is denoted by Kse.
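As a minimal numerical sketch of the d-q transformation, the following Python example uses the power-invariant form of the transformation matrix (one common convention; textbooks differ in scaling, row order and sign). Because this matrix is orthogonal, its inverse is simply its transpose, which is what preserves power between the two frames:

import numpy as np

def dq0_matrix(theta):
    # Power-invariant dq0 transformation matrix K(theta), where theta is
    # the angle of the d-axis relative to the phase-a axis.
    c = 2 * np.pi / 3
    return np.sqrt(2 / 3) * np.array([
        [np.cos(theta), np.cos(theta - c), np.cos(theta + c)],
        [-np.sin(theta), -np.sin(theta - c), -np.sin(theta + c)],
        [1 / np.sqrt(2), 1 / np.sqrt(2), 1 / np.sqrt(2)],
    ])

# Balanced three-phase set, amplitude 10, 50 Hz, sampled at t = 1 ms
t, omega = 1e-3, 2 * np.pi * 50
phases = np.array([0, -2 * np.pi / 3, 2 * np.pi / 3])
f_abc = 10 * np.cos(omega * t + phases)

K = dq0_matrix(omega * t)    # synchronous frame: theta = omega * t
f_dq0 = K @ f_abc            # [Fdqo] = [K][Fabc]; d ~ 12.25, q ~ 0, o ~ 0
f_abc_back = K.T @ f_dq0     # inverse transform: K^(-1) = K^T (power invariant)

With θ = ωt (the synchronous frame), the balanced sinusoids map to constant DC quantities, illustrating the remark below that d-q quantities become steady in the synchronously rotating frame.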
The choice of reference frame is not restricted, but it is strongly influenced by the type of analysis to be performed, so as to expedite the solution of the system equations or to satisfy system constraints. The best-suited reference frames for the simulation of an induction machine in various cases of analysis are listed below: The stationary reference frame is best suited for studying stator variables only, for example in variable-speed stator-fed induction machine (IM) drives, because the stator d-axis variables are exactly identical to the stator phase-a variables. The rotor reference frame is best suited when the analysis is confined to rotor variables, as the rotor d-axis variables are identical to the phase-a rotor variables. The synchronously rotating reference frame is suitable when an analog computer is employed, because both stator and rotor d-q quantities become steady DC quantities; it is also best suited for studying multi-machine systems. It is worth noting that all three types of reference frame can be obtained from the arbitrary reference frame by simply changing ω; modeling in the arbitrary reference frame is therefore beneficial when a wide range of analyses is to be done. Restrictions There are some restrictions on representing a rotating electrical machine by its d-q axes equivalent, as listed below: This method cannot be used on machines in which both the stator and the rotor are salient, for example an induction alternator. This method cannot be applied to machines in which a non-salient element has unbalanced windings. Brush contact phenomena, commutation effects and surge phenomena cannot be represented in this model, so they have to be accounted for separately. References In-line references General references P.S. Bimbhra, "Generalised Theory of Electrical Machines", Khanna Publishers P.C. Krause, O. Wasynczuk, S.D. Sudhoff, "Analysis of Electric Machinery and Drive Systems", Second edition R.J. Lee, P. Pillay and R.G. Harley, "D,Q Reference Frames for the Simulation of Induction Motors", Electric Power Systems Research, 8 (1984/85) 15–26 External links Electrical engineering
Linear transformation in rotating electrical machines
Engineering
1,117
69,416,798
https://en.wikipedia.org/wiki/Jakub%20Haberfeld
Jakub Haberfeld (original spelling: Jakób Haberfeld or Jakob Haberfeld) – one of the oldest Polish alcohol factories, founded in 1804 in Oświęcim, producing vodka and liqueurs. The company was reactivated in June 2019. History The Haberfeld family settled in Oświęcim in the second half of the 18th century. In 1804, Jakub, son of Simon and Jacheta, founded the Factory of Vodka and Liqueurs. After his death, the business was inherited by his son, also Jakub (1839–1904). In 1906 Emil Haberfeld became the new owner. The Haberfelds were a progressive Jewish family who were involved in social life; many served on the town council and participated in charity initiatives. At the turn of the 19th and 20th centuries, the factory bottled beer for the Jan Götz brewery in Okocim, and from around 1906 until the end of the interwar period it partnered with the Żywiec Brewery. At the beginning of the 20th century, the factory expanded and obtained new buildings, chiefly warehouses, including spaces in the Oświęcim castle bought by the family from the city. In August 1939, Alfons Haberfeld and his wife Felicja participated in the 1939 New York World's Fair, presenting their products at the Polish pavilion. On the way back, at the outbreak of World War II, their ship was stopped and directed to Scotland, preventing them from returning to German-occupied Poland. Their five-year-old daughter Franciszka Henryka and her grandmother were murdered in 1942 by the Germans in the Bełżec death camp (https://auschwitz.net/en/stolen-lives-in-auschwitz-the-haberfelds/). Alfons and Felicja returned to the USA. In 1952, along with other Holocaust survivors, they founded an organization in Los Angeles called Club 1939. They both died in Los Angeles, Alfons in 1970 and Felicja in 2010. After the end of hostilities in 1945, the house and factory buildings were taken over by the State Treasury. In the years 1945–1947 the factory was called "Jakub Haberfeld's Factory under state administration"; after 1947 it was called "Oświęcimskie Zakłady Przemysłu Terenowego" and later the "Non-alcoholic Beverage Factory and Beer Bottling Plant in Oświęcim". After 1989, the bottling plants were declared bankrupt, and the remaining factory property was plundered. In 1992, a bricked-up cellar was discovered containing several thousand bottles ready for production. By a decision of September 25, 1995, the factory complex and the Haberfeld family house were entered in the register of monuments of the Bielsko Province. By 2003, the factory buildings and the Haberfeld family home, expropriated and never renovated, had fallen into ruin, and in 2003 it was decided to demolish the tenement house and the factory. Production The drinks were made on the basis of natural juices. They were produced and stored in the cellars of the "Monopol" restaurant, which was located in the family house next to the factory premises. The drinks were poured into characteristic branded glass and porcelain bottles, made to order. For the orangeade and soda water bottles, porcelain stoppers were used. All products of the factory had original labels, which were produced, among other places, in Bielsko and in Opava. The factory produced several dozen types of vodkas and liqueurs in several hundred varieties. The specialties of the factory were "Magister", "Basztówka" and "Zgoda". During World War I, the factory produced vodka for the Austrian army as part of the soldier's equipment; this drink was called "Kaizerschutze" (Imperial Gunner).
The factory also had a warehouse and sales outlet in Kęty, run by Mr. Hoffmann, and a warehouse in Krakow. Haberfeld also had many salesmen who advertised his products; in Silesia, for example, this was Franciszek Kehl. According to the industry questionnaire submitted by the owners in 1934, the factory was called the "Vodka and liqueur factory and fruit juice press" and was a general partnership. The average wage of a manual worker was 750 zloty, and of an office worker, 2,000 zloty. The factory, apart from selling products locally, also exported them to Italy, Austria, Germany and Hungary. Haberfeld also exhibited his products at various foreign exhibitions, where he was awarded diplomas and medals. During the German occupation, the factory was taken over by the occupiers, and a German named Handelmann became its receiver (a so-called Treuhänder). The Germans then used the following labels: "Haberfeld unter Verwaltung Treuhändler", and production during this period continued on a smaller scale. All the factory property and the family house remained intact and survived the period of the Nazi occupation. Vodka Museum On June 30, 2019, the Vodka Museum was opened on the premises of the former Jakob Haberfeld Vodka and Liqueur Factory, commemorating the achievements of this Oświęcim family and their contribution to the development of the liquor industry both in the region and in the country. The museum shows the history of a family that not only became famous in the world as a significant brand of vodkas and liqueurs, but also produced figures of importance to the city. Few people know that Alfons Haberfeld was the only Oświęcim shareholder of the first Polish car factory, "Oświęcim-Praga", whose cars were used by such celebrities as Jan Kiepura and Wojciech Kossak. The Jakob Haberfeld brand was also revived with the introduction of six kosher vodkas and liqueurs, which are made in cooperation with the production plant of the Nissenbaum Family Foundation in Bielsko-Biała. The exhibition also tells the story of the fate of one of the two most influential Jewish families in Oświęcim, a fate dramatically interrupted by the outbreak of World War II and the murder of five-year-old Franciszka Henryka Haberfeld in the death camp in Bełżec. References External links Jakob Haberfeld Story Vodka Museum&Music Pub The Haberfeld mansion at www.oszpicin.pl Oświęcim Distilleries Companies based in Lesser Poland Voivodeship
Jakub Haberfeld
Chemistry
1,350
24,023,782
https://en.wikipedia.org/wiki/IMA%20%28company%29
IMA – Industria Macchine Automatiche S.p.A. is a multinational Italian company based in the Metropolitan City of Bologna, Italy. Established in 1961, IMA Group is a leader in the design and manufacture of automatic machines for the processing and packaging of pharmaceuticals, cosmetics, food, tobacco, tea and coffee. Alberto Vacchi is both president and CEO of the company. Products It designs and manufactures automatic machines for the processing and packaging of pharmaceuticals, cosmetics, tea and coffee. It specializes in tea bagging and coffee pod machines, solid dose manufacturing, sterile processing equipment, liquid filling, freeze-drying, labelling, blistering, counting, tube filling, end-of-line and cartoning machines. Manufacturing It has local production plants in Bologna and Florence in Italy, along with other manufacturing sites in Germany, France, Switzerland, Spain, the United Kingdom, the USA, India, Malaysia and China. Its annual turnover in 2016 was €1,310.55 million, when it employed more than 5,000 people (about 2,600 of them outside Italy) in 41 manufacturing sites in Italy, Germany, France, Switzerland, Spain, the UK, the US, India, Malaysia, China and Argentina, with a sales network covering more than 80 countries. The Group has since grown to around 6,000 employees, of which over 2,800 are overseas, and 45 production plants in Italy, Germany, France, Switzerland, Spain, the United Kingdom, the United States, India, Malaysia, China and Argentina. IMA has an extensive commercial network, which consists of 29 branches with sales and assistance services in Italy, France, Switzerland, the United Kingdom, Germany, Austria, Spain, Poland, Israel, Russia, the United States, India, China, Malaysia, Thailand and Brazil, representative offices in Central and Eastern European countries, and more than 50 agencies covering a total of about 80 countries. In 2019, it had a turnover of €1,595.5 million, of which about 90% was generated outside Italy. Board of directors The board of directors is composed of thirteen members, in office until the approval of the financial statements at 31 December 2020, with Alberto Vacchi in the role of chairman and executive CEO. Marco Vacchi is the honorary president. Main shareholders On 14 June 2016, the share capital amounted to €20,415,200.00, divided into 39,260,000 ordinary shares with a nominal value of €0.52 each. The main shareholders were:
SO.FI.M.A. Società Finanziaria Macchine Automatiche S.p.A. - 56.789% (22,295,194 shares)
Lopam Fin S.p.A. - 50%
Cofiva (Dichiarante) - 24.65%
Cofiva Holding spa - 99.9%
Gianluca Vacchi - 55%
Hydra S.p.A. - 2.003% (737,665 shares)
External links IMA Homepage Business Week Detailed History of IMA S.p.A. References Engineering companies of Italy Companies based in the Metropolitan City of Bologna Technology companies of Italy Italian brands Industrial machine manufacturers Italian companies established in 1961 IMA
IMA (company)
Engineering
671
22,288,224
https://en.wikipedia.org/wiki/Symbolic-numeric%20computation
In mathematics and computer science, symbolic-numeric computation is the use of software that combines symbolic and numeric methods to solve problems. References External links Professional organizations ACM SIGSAM: Special Interest Group in Symbolic and Algebraic Manipulation Computer algebra Numerical analysis Computational science
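A small illustration of the idea, sketched in Python with the SymPy and SciPy libraries (an illustrative pairing, not one mandated by the field): a function is manipulated symbolically to obtain its exact derivative, which is then handed to a numeric root-finder.

import sympy as sp
from scipy.optimize import newton

x = sp.symbols('x')
f = sp.exp(-x) - x  # symbolic definition of f(x) = e^(-x) - x

# Symbolic step: differentiate exactly, then compile both expressions
# into fast numeric functions.
f_num = sp.lambdify(x, f)
fprime_num = sp.lambdify(x, sp.diff(f, x))

# Numeric step: Newton's method, driven by the exact symbolic derivative.
root = newton(f_num, x0=0.5, fprime=fprime_num)
print(root)  # ~0.56714, the solution of e^(-x) = x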
Symbolic-numeric computation
Mathematics,Technology
54
7,224,143
https://en.wikipedia.org/wiki/Orbit%20Award
The Orbit Awards were given by the National Space Society and the Space Tourism Society to pioneers in the private space travel industry, and presented at the Annual International Space Development Conference. The actual award is a holographic crystal created by international artist Eileen Borgeson and holography pioneer Jeff Allen. The Orbit Awards were co-sponsored by EArt Gallery and Interior Systems, and were received by: Buzz Aldrin, Richard Branson, Paul Allen, Rick Searfoss, Robert Bigelow, The X PRIZE Foundation, Scaled Composites, Zero Gravity Corporation, Eric Anderson and Anousheh Ansari. See also List of space technology awards References External links from Art News, September 29, 2006 Space-related awards
Orbit Award
Technology
149
35,942,883
https://en.wikipedia.org/wiki/Biological%20dark%20matter
Biological dark matter is an informal term for unclassified or poorly understood genetic material. This genetic material may refer to genetic material produced by unclassified microorganisms. By extension, biological dark matter may also refer to the un-isolated microorganisms whose existence can only be inferred from the genetic material that they produce. Some of the genetic material may not fall under the three existing domains of life: Bacteria, Archaea and Eukaryota; thus, it has been suggested that a possible fourth domain of life may yet be discovered, although other explanations are also probable. Alternatively, the genetic material may refer to non-coding DNA (so-called "junk DNA") and non-coding RNA produced by known organisms. Genomic dark matter Much of the genomic dark matter is thought to originate from ancient transposable elements and from other low-complexity repetitive elements. Uncategorized genetic material is found in humans and many other species. Its phylogenetic novelty could indicate the cellular organisms or viruses from which it evolved. Unclassified microorganisms Up to 99% of all living microorganisms cannot be cultured, so few functional insights exist about the metabolic potential of these organisms. Sequences that are believed to be derived from unknown microbes are referred to as the microbial dark matter, the dark virome, or dark matter fungi. Such sequences are not rare. It has been estimated that in material from humans, between 40 and 90% of viral sequences are from dark matter. Human blood contains over three thousand different DNA sequences which cannot yet be identified. A mycological study from 2023 found that dark matter fungi seem to dominate the fungal kingdom. Algorithms have been developed that examine sequences for similarities to bacterial 16S RNA sequences, k-mer similarities to known viruses, specific features of codon usage, or for inferring the existence of proteins. These approaches have suggested, for example, the existence of a novel bacteriophage of the Microviridae family, and a novel Bacteroidales-like phage. Other studies have suggested the existence of 264 new viral genera, discovered in publicly available databases, and a study of human blood suggested that 42% of people have at least one previously unknown virus each, adding up to 19 different new genera. A comprehensive study of DNA sequences from multiple human samples inferred the existence of 4,930 species of microbes of which 77% were previously unreported. Health-related findings include a prophage that might be associated with cirrhosis of the liver, and seven novel sequences from children with type-1 diabetes that have characteristics of viruses. Although they might exist, no organisms that clearly cause human disease have been discovered in the dark matter. In February 2023, scientists reported the findings of unusual DNA strands from the microorganisms in the "dark microbiome" in the driest non-polar desert on Earth. See also References Genetics terms DNA
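The k-mer comparison mentioned above can be illustrated with a short sketch. This is not the code of any published dark-matter pipeline; it is a minimal, self-contained illustration of the general idea, with the sequences and k chosen arbitrarily.

```python
# Minimal k-mer comparison: count k-mers in two sequences and measure
# overlap with a Jaccard index, the kind of similarity signal used to
# relate unclassified reads to known genomes.
from collections import Counter

def kmer_counts(seq: str, k: int) -> Counter:
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def jaccard(a: Counter, b: Counter) -> float:
    """Jaccard similarity of the two k-mer sets (presence/absence)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

read = "ATGCGTACGTTAGC"        # hypothetical unclassified read
reference = "ATGCGTACGATAGC"   # hypothetical known reference
print(jaccard(kmer_counts(read, 4), kmer_counts(reference, 4)))
```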
Biological dark matter
Biology
607
70,939,631
https://en.wikipedia.org/wiki/Xenon%20isotope%20geochemistry
Xenon isotope geochemistry uses the abundance of xenon (Xe) isotopes and total xenon to investigate how Xe has been generated, transported, fractionated, and distributed in planetary systems. Xe has nine stable or very long-lived isotopes. Radiogenic 129Xe and fissiogenic 131,132,134,136Xe isotopes are of special interest in geochemical research. The radiogenic and fissiogenic properties can be used in deciphering the early chronology of Earth. Elemental Xe in the atmosphere is depleted and isotopically enriched in heavier isotopes relative to estimated solar abundances. The depletion and heavy isotopic enrichment can be explained by hydrodynamic escape to space that occurred in Earth's early atmosphere. Differences in the Xe isotope distribution between the deep mantle (from Ocean Island Basalts, or OIBs), shallower Mid-ocean Ridge Basalts (MORBs), and the atmosphere can be used to deduce Earth's history of formation and differentiation of the solid Earth into layers. Background Xe is the heaviest noble gas in the Earth's atmosphere. It has seven stable isotopes (126Xe, 128Xe, 129Xe, 130Xe, 131Xe, 132Xe, 134Xe) and two isotopes (124Xe, 136Xe) with long-lived half-lives. Xe also has synthetic radioisotopes with very short half-lives, usually less than one month. Xenon-129 can be used to examine the early history of the Earth. 129Xe was derived from the extinct nuclide of iodine, iodine-129 or 129I (with a half-life of 15.7 million years, or Myr), which can be used in iodine-xenon (I-Xe) dating. The production of 129Xe stopped within about 100 Myr after the start of the Solar System because 129I became extinct. In the modern atmosphere, about 6.8% of atmospheric 129Xe originated from the decay of 129I in the first ~100 Myr of the Solar System's history, i.e., during and immediately following Earth's accretion. Fissiogenic Xe isotopes were generated mainly from the extinct nuclide, plutonium-244 or 244Pu (half-life of 80 Myr), and also the extant nuclide, uranium-238 or 238U (half-life of 4468 Myr). Spontaneous fission of 238U has generated ~5% as much fissiogenic Xe as 244Pu. Pu and U fission produce the four fissiogenic isotopes, 136Xe, 134Xe, 132Xe, and 131Xe, in distinct proportions. A reservoir that remains an entirely closed system over Earth's history has a ratio of Pu- to U-derived fissiogenic Xe of ~27. Accordingly, the isotopic composition of the fissiogenic Xe for a closed-system reservoir would largely resemble that produced from pure 244Pu fission. Loss of Xe from a reservoir after 244Pu becomes extinct (~500 Myr) would lead to a greater contribution of 238U fission to the fissiogenic Xe. Notation Differences in the abundance of isotopes among natural samples are extremely small (almost always below 0.1%, or 1 per mille). Nevertheless, these very small differences can record meaningful geological processes. To compare these tiny but meaningful differences, isotope abundances in natural materials are often reported relative to isotope abundances in designated standards, with the delta (δ) notation. The absolute values of Xe isotopes are normalized to atmospheric 130Xe. Define δ(iXe) = [((iXe/130Xe)sample / (iXe/130Xe)air) − 1] × 1000 (in per mille), where i = 124, 126, 128, 129, 131, 132, 134, 136. Applications The age of Earth Iodine-129 decays with a half-life of 15.7 Myr into 129Xe, resulting in excess 129Xe in primitive meteorites relative to primordial Xe isotopic compositions. The property of 129I can be used in radiometric chronology. However, as detailed below, the age of Earth's formation cannot be deduced directly from I-Xe dating.
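A minimal sketch of the delta notation defined in the Notation section above, with placeholder isotope ratios rather than measured values:

```python
# Delta notation: a sample's iXe/130Xe ratio expressed relative to air,
# in parts per thousand (per mille), as defined in the Notation section.
def delta_xe(ratio_sample: float, ratio_air: float) -> float:
    """delta-iXe in per mille, relative to the atmospheric ratio."""
    return (ratio_sample / ratio_air - 1.0) * 1000.0

# Placeholder ratios purely for illustration:
print(delta_xe(ratio_sample=6.520, ratio_air=6.496))  # about +3.7 per mille
```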
The major problem is the Xe closure time, or the time when the early Earth system stopped gaining substantial new material from space. When the Earth became closed for the I-Xe system, Xe isotope evolution began to obey a simple radioactive decay law, as shown below, and became predictable. The principle of radiogenic chronology is that if at time t1 the quantity of a radioisotope is P1, while at some previous time t0 this quantity was P0, the interval between t1 and t0 is given by the law of radioactive decay as t1 − t0 = (1/λ) ln(P0/P1). Here λ is the decay constant of the radioisotope, which is the probability of decay per nucleus per unit time. The decay constant is related to the half-life t1/2 by t1/2 = ln(2)/λ. Calculations The I-Xe system was first applied in 1975 to estimate the age of the Earth. The initial isotope composition of iodine in the Earth is given by (129I/127I)Earth = (129I/127I)0 × e^(−λΔt*), where (129I/127I)Earth is the isotopic ratio of iodine at the time that Earth primarily formed, (129I/127I)0 is the isotopic ratio of iodine at the end of stellar nucleosynthesis, and Δt* is the time interval between the end of stellar nucleosynthesis and the formation of the Earth. The estimated iodine-127 concentration in the Bulk Silicate Earth (BSE) (= crust + mantle average) ranges from 7 to 10 parts per billion (ppb) by mass. If the BSE represents Earth's chemical composition, the total 127I in the BSE ranges from 2.26×10^17 to 3.23×10^17 moles. The meteorite Bjurböle is 4.56 billion years old with an initial 129I/127I ratio of 1.1×10^−4, so an equation can be derived as (129I/127I)Earth = 1.1×10^−4 × e^(−λΔt), where Δt is the interval between the formation of meteorite Bjurböle and the formation of the Earth. Given the half-life of 129I of 15.7 Myr, and assuming that all the initial 129I has decayed to 129Xe, the following equation can be derived: 129Xe(Earth) = 127I(Earth) × 1.1×10^−4 × e^(−λΔt). 129Xe in the modern atmosphere is 3.63×10^13 grams. The iodine content for the BSE lies between 10 and 12 ppb by mass. Consequently, Δt should be 108 Myr; i.e., the Xe-closure age is 108 Myr younger than the age of meteorite Bjurböle. The estimated Xe closure time was ~4.45 billion years ago, when the growing Earth started to retain Xe in its atmosphere, which is coincident with ages derived from other geochronology dating methods. Xe closure age problem There are some disputes about using I-Xe dating to estimate the Xe closure time. First, in the early solar system, planetesimals collided and grew into larger bodies that accreted to form the Earth. But there could be a 10^7 to 10^8 year time gap in Xe closure time between the Earth's inner and outer regions. Some research supports 4.45 Ga as the time when the last giant (Mars-sized) impactor hit Earth, while other work regards it as the time of core-mantle differentiation. The second problem is that the total inventory of 129Xe on Earth may be larger than that of the atmosphere, since the lower mantle may not have been entirely mixed, which would mean the calculation underestimates 129Xe. Last but not least, if Xe gas had been lost from the atmosphere during a long interval of early Earth's history, the chronology based on 129I-129Xe would need revising, since the 129Xe inventory could have been greatly altered. Loss of Earth's earliest atmosphere Compared with solar xenon, Earth's atmospheric Xe is enriched in heavy isotopes by 3 to 4% per atomic mass unit (amu). However, the total abundance of xenon gas is depleted by one order of magnitude relative to other noble gases. The elemental depletion combined with relative enrichment in heavy isotopes is called the "xenon paradox".
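The decay-interval arithmetic behind the ~108 Myr result above can be sketched as follows; the Bjurböle ratio is the value quoted in the text, while the Earth closure ratio is a hypothetical number chosen only so the script reproduces the published interval.

```python
# Decay-interval arithmetic for the I-Xe system:
#   dt = (1 / lam) * ln(R0 / R1), with lam = ln(2) / t_half
import math

T_HALF_129I = 15.7               # Myr, half-life of 129I (from the text)
LAM = math.log(2) / T_HALF_129I  # decay constant, per Myr

r_bjurbole = 1.1e-4   # initial 129I/127I of the Bjurbole meteorite (text)
r_earth = 9.3e-7      # hypothetical ratio at Earth's Xe closure, chosen
                      # only so the interval matches the ~108 Myr result

dt = math.log(r_bjurbole / r_earth) / LAM
print(f"Xe closure about {dt:.0f} Myr after Bjurbole")  # ~108 Myr
```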
A possible explanation is that some processes can specifically diminish xenon rather than other light noble gases (e.g. krypton) and preferentially remove lighter Xe isotopes. In the last two decades, two categories of models have been proposed to resolve the xenon paradox. The first assumes that the Earth accreted from porous planetesimals and that isotope fractionation happened due to gravitational separation. However, this model cannot reproduce the abundance and isotopic composition of light noble gases in the atmosphere. The second category supposes that a massive impact resulted in an aerodynamic drag on heavier gases. Both the aerodynamic drag and the downward gravitational effect lead to a mass-dependent loss of Xe gases. But subsequent research suggested that the Xe isotope mass fractionation was not a rapid, single event. Research published since 2018 on noble gases preserved in Archean (3.5-3.0 Ga old) samples may provide a solution to the Xe paradox. Isotopically mass fractionated Xe is found in tiny inclusions of ancient seawater in Archean barite and hydrothermal quartz. The distribution of Xe isotopes lies between the primordial solar and the modern atmospheric Xe isotope patterns. The isotopic fractionation gradually increases relative to the solar distribution as Earth evolves over its first 2 billion years. This two-billion-year history of evolving Xe fractionation coincides with early solar system conditions, including high solar extreme ultraviolet (EUV) radiation and large impacts, that could drive rates of hydrogen escape to space large enough to drag out xenon. However, models of neutral xenon atoms escaping cannot resolve the problem that other, lighter noble gas elements do not show the same signal of depletion or mass-dependent fractionation. For example, because Kr is lighter than Xe, Kr should also have escaped in a neutral wind. Yet the isotopic distribution of atmospheric Kr on Earth is significantly less fractionated than atmospheric Xe. A current explanation is that hydrodynamic escape can preferentially remove lighter atmospheric species and lighter isotopes of Xe in the form of charged ions instead of neutral atoms. Hydrogen is liberated from hydrogen-bearing gases (H2 or CH4) by photolysis in the early Earth atmosphere. Hydrogen is light and can be abundant at the top of the atmosphere and escape. In the polar regions where there are open magnetic field lines, hydrogen ions can drag ionized Xe out from the atmosphere to space even though neutral Xe cannot escape. The mechanism is summarized below. Xe can be directly photo-ionized by UV radiation (Xe + hν → Xe+ + e−), or Xe can be ionized by charge exchange with H+ and CO2+ (Xe + H+ → Xe+ + H; Xe + CO2+ → Xe+ + CO2), where the H+ and CO2+ ions can come from EUV ionization. Xe+ is chemically inert in H, H2, or CO2 atmospheres. As a result, Xe+ tends to persist. These ions interact strongly with each other through the Coulomb force and are finally dragged away by the strong ancient polar wind. Isotope mass fractionation accumulates as lighter isotopes of Xe+ preferentially escape from the Earth. A preliminary model suggests that Xe can escape in the Archean if the atmosphere contains >1% H2 or >0.5% methane. When O2 levels increased in the atmosphere, Xe+ could exchange its positive charge with O2 through Xe+ + O2 → Xe + O2+. Through this reaction, Xe escape stopped when the atmosphere became enriched in O2. As a result, Xe isotope fractionation may provide insights into the long history of hydrogen escape that ended with the Great Oxidation Event (GOE).
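The gradual accumulation of mass fractionation described above is commonly modeled as a Rayleigh distillation. The sketch below is a generic Rayleigh calculation with illustrative numbers (the fractionation factor and loss fractions are invented for the example), not the published escape model.

```python
# Generic Rayleigh distillation: as lighter isotopes preferentially
# escape, the residual reservoir grows isotopically heavier.
#   R = R0 * f**(alpha - 1)
# f is the fraction of Xe remaining; alpha < 1 means the light isotope
# escapes more easily, so the residue is enriched as f falls.
def rayleigh_delta(f: float, alpha: float) -> float:
    """Per-mille enrichment of the residue relative to the start."""
    return (f ** (alpha - 1.0) - 1.0) * 1000.0

# Invented values purely for illustration: alpha = 0.996
for f in (1.0, 0.5, 0.1):
    print(f"fraction left {f:.1f}: {rayleigh_delta(f, 0.996):+.1f} per mille")
```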
Understanding Xe isotopes is thus promising for reconstructing the history of the hydrogen or methane escape that irreversibly oxidized the Earth and drove biological evolution toward aerobic ecological systems. Other factors, such as the hydrogen (or methane) concentration becoming too low or EUV radiation from the aging Sun becoming too weak, could also have halted the hydrodynamic escape of Xe; these explanations are not mutually exclusive. Organic hazes on the Archean Earth could also scavenge isotopically heavy Xe. Ionized Xe can be chemically incorporated into organic materials, going through the terrestrial weathering cycle on the surface. The trapped Xe is mass fractionated by about 1% per amu toward heavier isotopes, but it may be released again and recover the original unfractionated composition, so this mechanism alone is not sufficient to resolve the Xe paradox. Comparison between Kr and Xe in the atmosphere Observed atmospheric Xe is depleted relative to chondritic meteorites by a factor of 4 to 20 when compared to Kr. In contrast, the stable isotopes of Kr are barely fractionated. The escape mechanism is unique to Xe because Kr+ ions are quickly neutralized (for example, by reaction with H2: Kr+ + H2 → KrH+ + H). Therefore, Kr is rapidly returned to neutral form and is not dragged away by the charged ion wind in the polar region; hence Kr is retained in the atmosphere. Relation with mass-independent fractionation of sulfur isotopes (MIF-S) The signal of mass-independent fractionation of sulfur isotopes, known as MIF-S, correlates with the end of Xe isotope fractionation. During the Great Oxidation Event (GOE), the ozone layer formed as O2 rose, accounting for the end of the MIF-S signature. The disappearance of the MIF-S signal has been regarded as marking a change in the redox state of Earth's surface reservoirs. However, potential memory effects of MIF-S due to oxidative weathering can lead to large uncertainty in the process and chronology of the GOE. Compared to the MIF-S signal, the hydrodynamic escape of Xe is not affected by ozone formation and may be even more sensitive to O2 availability, promising to provide more detail about the oxidation history of Earth. Xe isotopes as mantle tracers Xe isotopes are also promising tracers of mantle dynamics in Earth's evolution. The first explicit recognition of non-atmospheric Xe in terrestrial samples came from the analysis of CO2-well gas in New Mexico, which displayed an excess of 129I-derived (or primitive-source) 129Xe and a high content of 131-136Xe from the decay of 238U. At present, excesses of 129Xe and 131-136Xe have been widely observed in mid-ocean ridge basalts (MORBs) and ocean island basalts (OIBs). Because 136Xe receives a larger fissiogenic contribution than the other heavy Xe isotopes, 129Xe (from the decay of 129I) and 136Xe are usually normalized to 130Xe when discussing Xe isotope trends of different mantle sources. MORB 129Xe/130Xe and 136Xe/130Xe ratios lie on a trend from atmospheric ratios to higher values, and appear to be variably contaminated by air. OIB data lie lower than MORB data, implying different Xe sources for OIBs and MORBs. The deviations in the 129Xe/130Xe ratio between air and MORBs show that mantle degassing occurred before 129I was extinct; otherwise, 129Xe/130Xe in the air would be the same as in the mantle. The differences in the 129Xe/130Xe ratio between MORBs and OIBs may indicate that the mantle reservoirs are still not thoroughly mixed. The chemical differences between OIBs and MORBs remain to be resolved.
To obtain mantle Xe isotope ratios, it is necessary to correct for contamination by atmospheric Xe, which could have begun as early as 2.5 billion years ago. In principle, the non-radiogenic isotopic ratios (124Xe/130Xe, 126Xe/130Xe, and 128Xe/130Xe) could be used to accurately correct for atmospheric contamination if the slight differences between air and mantle could be precisely measured, but current techniques cannot yet reach such precision. Xe in other planets Mars On Mars, Xe isotopes in the present atmosphere are mass fractionated relative to their primordial composition, according to in situ measurements by the Curiosity rover at Gale Crater. Paleo-atmospheric Xe trapped in the Martian regolith breccia NWA 11220 is mass-dependently fractionated relative to solar Xe by ~16.2‰. The extent of fractionation is comparable for Mars and Earth, which may be compelling evidence that hydrodynamic escape also occurred in Martian history. The regolith breccia NWA 7084 and the >4 Ga orthopyroxene meteorite ALH84001 trap ancient Martian atmospheric gases with little if any Xe isotopic fractionation relative to modern Martian atmospheric Xe. Alternative models for Mars consider that the isotopic fractionation and escape of Martian atmospheric Xe occurred very early in the planet's history and ceased within a few hundred million years of planetary formation, rather than continuing throughout its evolutionary history. Venus Xe has not been detected in Venus's atmosphere; 132Xe has an upper limit of 10 parts per billion by volume. The absence of abundance data precludes evaluating whether the abundance of Xe is close to solar values or whether there is a Xe paradox on Venus. It also prevents checking whether the isotopic composition has been mass-dependently fractionated, as is the case for Earth and Mars. Jupiter Jupiter's atmosphere contains 2.5 ± 0.5 times the solar abundance of xenon, and similarly elevated argon and krypton (2.1 ± 0.5 and 2.7 ± 0.5 times solar values, respectively). These enrichments are thought to result from these elements having been delivered to Jupiter in very cold (T < 30 K) icy planetesimals. See also Geochronology Isotopes of Xenon References Xenon Isotopes Geochemistry
Xenon isotope geochemistry
Physics,Chemistry
3,778
20,684
https://en.wikipedia.org/wiki/Instructions%20per%20second
Instructions per second (IPS) is a measure of a computer's processor speed. For complex instruction set computers (CISCs), different instructions take different amounts of time, so the value measured depends on the instruction mix; even for comparing processors in the same family the IPS measurement can be problematic. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches and no cache contention, whereas realistic workloads typically lead to significantly lower IPS values. Memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, synthetic benchmarks such as Dhrystone are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse. The term is commonly used in association with a metric prefix (k, M, G, T, P, or E) to form kilo instructions per second (kIPS), mega instructions per second (MIPS), giga instructions per second (GIPS) and so on. Formerly TIPS was used occasionally for "thousand IPS". Computing IPS can be calculated using this equation: IPS = cores × clock rate (in cycles per second) × instructions per cycle (IPC). However, the instructions-per-cycle measurement depends on the instruction sequence, the data and external factors. Thousand instructions per second (TIPS/kIPS) Before standard benchmarks were available, average speed rating of computers was based on calculations for a mix of instructions, with the results given in kilo instructions per second (kIPS). The most famous was the Gibson Mix, produced by Jack Clark Gibson of IBM for scientific applications in 1959. Other ratings, such as the ADP mix, which does not include floating point operations, were produced for commercial applications. The thousand instructions per second (kIPS) unit is rarely used today, as most current microprocessors can execute at least a million instructions per second. The Gibson Mix Gibson divided computer instructions into 12 classes, based on the IBM 704 architecture, adding a 13th class to account for indexing time. Weights were primarily based on analysis of seven scientific programs run on the 704, with a small contribution from some IBM 650 programs. The overall score was then the weighted sum of the average execution speed for instructions in each class. Millions of instructions per second (MIPS) The speed of a given CPU depends on many factors, such as the type of instructions being executed, the execution order and the presence of branch instructions (problematic in CPU pipelines). CPU instruction rates are different from clock frequencies, usually reported in Hz, as each instruction may require several clock cycles to complete or the processor may be capable of executing multiple independent instructions simultaneously. MIPS can be useful when comparing performance between processors made with similar architecture (e.g. Microchip branded microcontrollers), but they are difficult to compare between differing CPU architectures. This led to the term "Meaningless Indicator of Processor Speed," or less commonly, "Meaningless Indices of Performance," being popular amongst technical people by the mid-1980s. For this reason, MIPS has become not a measure of instruction execution speed, but task performance speed compared to a reference. In the late 1970s, minicomputer performance was compared using VAX MIPS, where computers were measured on a task and their performance rated against the VAX-11/780 that was marketed as a 1 MIPS machine.
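A back-of-the-envelope version of the IPS equation above, with made-up inputs rather than measurements of any particular chip:

```python
# IPS = cores * clock rate (Hz) * average instructions per cycle (IPC).
# The IPC term is the slippery one: as noted above, it depends on the
# instruction sequence, the data, and external factors.
def instructions_per_second(cores: int, clock_hz: float, ipc: float) -> float:
    return cores * clock_hz * ipc

# Hypothetical 4-core CPU at 3.0 GHz averaging 2 instructions per cycle:
ips = instructions_per_second(cores=4, clock_hz=3.0e9, ipc=2.0)
print(f"{ips / 1e6:,.0f} MIPS")  # 24,000 MIPS, i.e. 24 GIPS
```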
(The measure was also known as the VAX Unit of Performance or VUP.) This was chosen because the 11/780 was roughly equivalent in performance to an IBM System/370 model 158–3, which was commonly accepted in the computing industry as running at 1 MIPS. Many minicomputer performance claims were based on the Fortran version of the Whetstone benchmark, giving Millions of Whetstone Instructions Per Second (MWIPS). The VAX 11/780 with FPA (1977) runs at 1.02 MWIPS. Effective MIPS speeds are highly dependent on the programming language used. The Whetstone Report has a table showing MWIPS speeds of PCs via early interpreters and compilers up to modern languages. The first PC compiler was for BASIC (1982), when a 4.8 MHz 8088/87 CPU obtained 0.01 MWIPS. Results on a 2.4 GHz Intel Core 2 Duo (1 CPU 2007) vary from 9.7 MWIPS using a BASIC interpreter, 59 MWIPS via a BASIC compiler, 347 MWIPS using 1987 Fortran, 1,534 MWIPS through HTML/Java to 2,403 MWIPS using a modern C/C++ compiler. For most early 8-bit and 16-bit microprocessors, performance was measured in thousand instructions per second (1000 kIPS = 1 MIPS). zMIPS refers to the MIPS measure used internally by IBM to rate its mainframe servers (zSeries, IBM System z9, and IBM System z10). Weighted million operations per second (WMOPS) is a similar measurement, used for audio codecs. Timeline of instructions per second CPU results Multi-CPU cluster results See also TOP500 FLOPS - floating-point operations per second SUPS Benchmark (computing) BogoMips (measurement of CPU speed made by the Linux kernel) Instructions per cycle Cycles per instruction Dhrystone (benchmark) - DMIPS integer benchmark Whetstone (benchmark) - floating-point benchmark Million service units (MSU) Computer performance by orders of magnitude Performance per watt Data-rate units References Computer performance Units of frequency
Instructions per second
Mathematics,Technology
1,142
2,014,128
https://en.wikipedia.org/wiki/Mu%20Cephei
Mu Cephei (Latinized from μ Cephei, abbreviated Mu Cep or μ Cep), also known as Herschel's Garnet Star, Erakis, or HD 206936, is a red supergiant or hypergiant star in the constellation Cepheus. It appears garnet red and is located at the edge of the IC 1396 nebula. It is a 4th magnitude star easily visible to the naked eye under good observing conditions. Since 1943, the spectrum of this star has served as a spectral standard by which other stars are classified. Mu Cephei is more than 100,000 times brighter than the Sun, with an absolute visual magnitude of −7.6. It is also one of the largest known stars, with a radius around or over 1,000 times that of the Sun, and were it placed in the Sun's position it would engulf the orbits of Mars and Jupiter. History The deep red color of Mu Cephei was noted by William Herschel, who described it as "a very fine deep garnet colour, such as the periodical star ο Ceti". It is thus commonly known as Herschel's "Garnet Star". Mu Cephei was called Garnet sidus by Giuseppe Piazzi in his catalogue. An alternative name, Erakis, used in Antonín Bečvář's star catalogue, is probably due to confusion with Mu Draconis, which was previously called in Arabic. In 1848, English astronomer John Russell Hind discovered that Mu Cephei was variable. This variability was quickly confirmed by German astronomer Friedrich Wilhelm Argelander. Almost continual records of the star's variability have been maintained since 1881. The angular diameter of μ Cephei has been measured interferometrically. One of the most recent measurements gives a diameter of at , modelled as a limb-darkened disk across. However, this later turned out to be the surrounding molecular layer and not the actual star, as the star has an angular diameter of 14.11 ± 0.6 mas. μ Cephei was used as one of the original "dagger stars", those with well-defined spectra that could be used for the classification of other stars, for MK spectral classifications. In 1943 it was the standard star for M2 Ia, updated in 1980 to be the standard star for the new type M2- Ia. Distance The distance to Mu Cephei is not very well known. The Hipparcos satellite was used to measure a parallax of , which corresponds to an estimated distance of . However, this value is close to the margin of error. A determination of the distance based upon a size comparison with Betelgeuse gives an estimate of . Calculation of the distance from the measured angular diameter, surface brightness, and calculated luminosity leads to . Averaging the distances of nearby luminous stars with similar reddening and reliable Gaia Data Release 2 parallaxes gives a distance of . Surroundings Mu Cephei is surrounded by a shell extending out to a distance at least equal to 0.33 times the star's radius with a temperature of . This outer shell appears to contain molecular gases such as CO, H2O, and SiO. Infrared observations suggest the presence of a wide ring of dust and water with an inner radius about twice that of the star itself, extending to about four times the radius of the star. The star is surrounded by a spherical shell of ejected material that extends outward to an angular distance of 6″ with an expansion velocity of . This indicates an age of about 2,000–3,000 years for the shell. Closer to the star, this material shows a pronounced asymmetry, which may be shaped as a torus. Variability Mu Cephei is a variable star and the prototype of the obsolete class of the Mu Cephei variables. It is now considered to be a semiregular variable of type SRc.
Its apparent brightness varies erratically between magnitude 3.4 and 5.1. Many different periods have been reported, but they are consistently near 860 days or 4,400 days. Properties A very luminous red supergiant, Mu Cephei is among the largest stars visible to the naked eye, and one of the largest known cool supergiants. It is a runaway star with a peculiar velocity of , and has been described as a hypergiant. The bolometric luminosity, summed over all wavelengths, is calculated from integrating the spectral energy distribution (SED) to be , making μ Cephei one of the most luminous red supergiants in the Milky Way. Its effective temperature of , determined from colour index relations, implies a radius of . Other recent publications give similar effective temperatures. Calculation of the luminosity from a visual and infrared colour relation gives and a corresponding radius of . An estimate based on its angular diameter and an assumed distance of gives it a radius of ; however, the angular diameter used later turned out to be the diameter of the molecular layer around the star. The radius was estimated in 2010 to be , based on the star's effective temperature of and the luminosity estimate. A 2019 measurement based on the distance gives the star a lower luminosity below , a correspondingly lower radius of , as well as a lower temperature of . These parameters are all consistent with those estimated for Betelgeuse. The initial mass of Mu Cephei has been estimated from its position relative to theoretical stellar evolutionary tracks to be between and . The star currently has a mass-loss rate of per year. Supernova Mu Cephei is nearing death. It has begun to fuse helium into carbon, whereas a main-sequence star fuses hydrogen into helium. When a supergiant star has converted elements in its core to iron, the core collapses to produce a supernova and the star is destroyed, leaving behind a vast gaseous cloud and a small, dense remnant. For a star as massive as Mu Cephei, the remnant is likely to be a black hole. The most massive red supergiants will evolve back to blue supergiants, luminous blue variables, or Wolf-Rayet stars before their cores collapse, and Mu Cephei appears to be massive enough for this to happen. A post-red supergiant will produce a type IIn or type IIb supernova, while a Wolf-Rayet star will produce a type Ib or Ic supernova. Components There are several faint stars within two arc-minutes of Mu Cephei, listed in multiple star catalogues. See also List of most massive stars VV Cephei VY Canis Majoris References External links M-type supergiants M-type hypergiants Semiregular variable stars Runaway stars Cepheus (constellation) Cephei, Mu Durchmusterung objects 206936 107259 8316 Herschel's Garnet Star Emission-line stars TIC objects Population I stars
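The radius-from-angular-diameter reasoning used in this article can be sketched numerically. The 14.11 mas angular diameter is the value quoted above; the 900 pc distance is a hypothetical round number, since the article stresses that the true distance is poorly constrained.

```python
# Linear size from angular size: diameter_AU = theta_arcsec * distance_pc
# (the small-angle relation, with the AU/parsec definition built in).
AU_PER_RSUN = 1.0 / 215.032       # one solar radius expressed in AU

def radius_in_rsun(theta_mas: float, distance_pc: float) -> float:
    theta_arcsec = theta_mas / 1000.0
    diameter_au = theta_arcsec * distance_pc
    return (diameter_au / 2.0) / AU_PER_RSUN

# 14.11 mas angular diameter (from the text) at a hypothetical 900 pc:
print(f"{radius_in_rsun(14.11, 900.0):,.0f} solar radii")  # ~1,365 R_sun
```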
Mu Cephei
Astronomy
1,421
36,142,464
https://en.wikipedia.org/wiki/Chlorosulfolipid
Chlorosulfolipids are a class of naturally occurring molecules characterized by their stereochemical complexity. These polychlorinated structures have been isolated from the freshwater alga Ochromonas danica and are proposed to serve a structural role within the membranes of this species. The high extent of chlorination in these natural products is suspected to be influenced by the concentration of chloride ion in the surrounding environment. In addition to being integral components of algal membranes, chlorosulfolipids are also known to inhibit protein kinases. Furthermore, some of these complex molecules, isolated from toxic mussels, are associated with diarrhetic shellfish poisoning. The lipid malhamensilipin A, isolated by the groups of Slate and Gerwick in 1994, displayed both antimicrobial activity and inhibition of the pp60 protein tyrosine kinase. Biosynthesis of danicalipin A Initially, docosanoic acid (behenic acid) (2) is constructed via the fatty acid synthesis pathway. Elovson demonstrated that the C-14 secondary hydroxyl group of molecule 3 is incorporated by oxidation of the fatty acid with molecular oxygen, as opposed to alkene hydration with water. The next step involves the enzyme-mediated transfer of the sulfate group from 3’-phosphoadenosine 5’-phosphosulfate (PAPS) to the diol to form molecule 4. Walsh has demonstrated that the halogenation of unactivated methyl groups can be catalyzed by a newly discovered class of α-ketoglutarate-dependent non-heme iron halogenases, suggesting a similar enzyme family could play a role in chlorosulfolipid chlorination. The stepwise chlorination occurs via an order-independent radical mechanism. References Lipids Organochlorides Anionic surfactants
Chlorosulfolipid
Chemistry
401
3,192,531
https://en.wikipedia.org/wiki/Pi%20Persei
π Persei, Latinized as Pi Persei, is a single star in the northern constellation of Perseus. It has the traditional name Gorgonea Secunda, the second of the three Gorgons in the mythology of the hero Perseus. This star has a white hue and is faintly visible to the naked eye with an apparent visual magnitude of +4.7. It is located at a distance of approximately 303 light-years from the Sun based on parallax, and is moving further away with a radial velocity of +14 km/s. This object is an A-type main-sequence star with a stellar classification of A2Vn, where the 'n' suffix indicates broad (nebulous) lines due to rapid rotation. It is spinning with a projected rotational velocity of 186 km/s, which is creating an equatorial bulge that is 6% wider than the polar radius. The star is 272 million years old with double the mass of the Sun. It has 4.8 times the Sun's radius and is radiating 170 times the luminosity of the Sun from its photosphere at an effective temperature of 9,290 K. References A-type main-sequence stars Perseus (constellation) Persei, Pi BD+39 681 Persei, 22 018411 013879 0879 Gorgonea Secunda
Pi Persei
Astronomy
280
6,307,858
https://en.wikipedia.org/wiki/Deviant%20sexual%20intercourse
Deviant sexual intercourse or deviate sexual intercourse is, in some U.S. states, a legal term for "any act of sexual gratification involving the sex organs of one person and the mouth or anus of another, anus to mouth or involving invasion of the anus or vagina of one person by a foreign object manipulated by another person". Model Penal Code In the United States, the term deviate sexual intercourse was popularized by its usage in the U.S. Model Penal Code. The MPC defines, U.S. states Typically, the act itself (whether consensual or not) used to be a crime, but the term is now used to describe forcible or otherwise involuntary acts that differ from the crime of rape (sometimes deviant sexual intercourse is included in the definition of rape), in the way that indecent assault might be used in other states and countries. Texas & Kentucky In the United States, the term has replaced sodomy in the criminal codes of some states, including Texas and Kentucky. Pennsylvania As an example, Section 3101 of the Pennsylvania Consolidated Statutes defines "deviate sexual intercourse" as "Sexual intercourse per os or per anus between human beings and any form of sexual intercourse with an animal. The term also includes penetration, however slight, of the genitals or anus of another person with a foreign object for any purpose other than good faith medical, hygienic or law enforcement procedures." On March 31, 1995, "Section 3124. Voluntary deviate sexual intercourse" was repealed; it was no longer considered a criminal offense to engage in such activity voluntarily. SCOTUS The consensual practice of anal or oral sex was legalised by the 2003 U.S. Supreme Court case of Lawrence v. Texas. References Sex crimes
Deviant sexual intercourse
Biology
375
2,879,746
https://en.wikipedia.org/wiki/Epsilon%20Arietis
Epsilon Arietis (ε Ari, ε Arietis) is the Bayer designation for a visual binary star system in the northern constellation of Aries. It has a combined apparent visual magnitude of 4.63 and can be seen with the naked eye, although the two components are too close together to be resolved without a telescope. With an annual parallax shift of 9.81 mas, the distance to this system can be estimated as approximately 330 light-years, give or take a 30 light-year margin of error. It is located behind the dark cloud MBM 12. The brighter member of this pair has an apparent magnitude of 5.2. At an angular separation of from the brighter component, along a position angle of , is the magnitude 5.5 companion. Both are A-type main-sequence stars with a stellar classification of A2 Vs. (The 's' suffix indicates that the absorption lines in the spectrum are distinctly narrow.) In the 2009 Catalogue of Ap, HgMn and Am stars, the two stars have a classification of A3 Ti, indicating they are Ap stars with an anomalous abundance of titanium. Within the measurement margin of error, their projected rotational velocities are deemed identical at 60 km/s. Name This star system, along with δ Ari, ζ Ari, π Ari, and ρ3 Ari, formed Al Bīrūnī's Al Buṭain (ألبطين), the dual of Al Baṭn, the Belly. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Buṭain was the title for five stars: δ Ari as Botein, π Ari as Al Buṭain I, ρ3 Ari as Al Buṭain II, ε Ari as Al Buṭain III and ζ Ari as Al Buṭain IV. In Chinese astronomy, Epsilon Arietis may be, or may be part of, Tso Kang (from Cantonese zogang, Mandarin pronunciation zuǒgēng). References External links HR 887 Image Epsilon Arietis 018519 Double stars 013914 Arietis, Epsilon Aries (constellation) A-type main-sequence stars Arietis, 48 Durchmusterung objects 0887
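The parallax-to-distance conversion used here is simple enough to sketch; the 9.81 mas parallax is the value quoted above.

```python
# Distance from annual parallax: d (parsec) = 1000 / p (milliarcsecond),
# then convert parsecs to light-years (1 pc = 3.2616 ly).
LY_PER_PC = 3.2616

def distance_ly(parallax_mas: float) -> float:
    return (1000.0 / parallax_mas) * LY_PER_PC

print(f"{distance_ly(9.81):.0f} light-years")  # ~332, matching ~330 +/- 30
```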
Epsilon Arietis
Astronomy
455