Dataset columns:
id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
27,733,786
https://en.wikipedia.org/wiki/Algorithmic%20complexity%20attack
An algorithmic complexity attack (ACA) is a form of attack in which an attacker sends a pattern of requests to a computer system that triggers the worst-case performance of the algorithms it uses. In turn, this may exhaust the resources the system uses. Examples of such attacks include ReDoS, zip bombs and exponential entity expansion attacks. References Related works Vahidi, Ardalan. “Crowdsourcing Phase and Timing of Pre-Timed Traffic Signals in the Presence of Queues: Algorithms and Back-End System Architecture.” IEEE Xplore, 1 Nov. 2019, https://ieeexplore.ieee.org/abstract/document/7323843. Kiner, Emil, and Satya Konduru. “How Google Cloud Blocked the Largest Layer 7 DDoS Attack yet, 46 Million Rps.” Google Cloud Blog, 18 Aug. 2022, cloud.google.com/blog/products/identity-security/how-google-cloud-blocked-largest-layer-7-ddos-attack-at-46-million-rps.
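One of the attack classes named above, ReDoS, is easy to reproduce: a regular expression with nested quantifiers backtracks exponentially on an input that almost matches, so a few extra characters translate into a large slowdown. A minimal sketch in Python (the pattern and input sizes are illustrative, not from the article):

```python
import re
import time

# Nested quantifiers make ^(a+)+$ ambiguous: on a near-miss input such as
# "aaa...ab" the backtracking engine tries every way to partition the a's,
# which costs roughly 2^n steps before the match finally fails.
pattern = re.compile(r"^(a+)+$")

for n in range(18, 25, 2):
    payload = "a" * n + "b"      # crafted worst-case request payload
    start = time.perf_counter()
    assert pattern.match(payload) is None
    print(f"n={n}: {time.perf_counter() - start:.3f}s")
```

Each step of the loop adds only two characters to the input but roughly quadruples the running time, which is exactly the worst-case behavior such an attack aims to trigger.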
Algorithmic complexity attack
Technology
226
63,016,742
https://en.wikipedia.org/wiki/PSAT-2
PSAT-2 is an experimental amateur radio satellite from the U.S. Naval Academy, developed in collaboration with the Technical University of Brno in the Czech Republic. AMSAT North America's OSCAR number administrator assigned number 104 to this satellite; in the amateur radio community it is therefore also called Navy-OSCAR 104, or NO-104 for short. Mission PSAT-2 was launched on June 25, 2019 on a Falcon Heavy from Kennedy Space Center, Florida, United States, as one of 24 satellites on mission STP-2 (Space Test Program 2). In August 2019, the VHF payload failed and control of the satellite was lost. However, after nearly two years of downtime, the payload mysteriously reactivated and control was regained. Frequencies The following frequencies for the satellite were coordinated by the International Amateur Radio Union: 145.825 MHz - Uplink and downlink APRS digipeater, 1,200 baud (once again functional as of 2021) 435.350 MHz - Downlink PSK31 and SSTV 29.4815 MHz - Uplink PSK31 See also OSCAR References External links PSAT2 - Amateur Radio Communications Transponders. APRS PSAT2 SSTV camera and transponder homepage, pictures and telemetry archive. Satellites orbiting Earth Amateur radio satellites Spacecraft launched in 2019
PSAT-2
Astronomy
271
5,828
https://en.wikipedia.org/wiki/Cryptozoology
Cryptozoology is a pseudoscience and subculture that searches for and studies unknown, legendary, or extinct animals whose present existence is disputed or unsubstantiated, particularly those popular in folklore, such as Bigfoot, the Loch Ness Monster, Yeti, the chupacabra, the Jersey Devil, or the Mokele-mbembe. Cryptozoologists refer to these entities as cryptids, a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by mainstream science: it is neither a branch of zoology nor of folklore studies. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson. Scholars have noted that the subculture rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars studying cryptozoologists and their influence (including cryptozoology's association with Young Earth creationism) noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims. Terminology, history, and approach As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published On the Track of Unknown Animals in 1955, a landmark work among cryptozoologists that was followed by numerous other similar works. In addition, Sanderson published a series of books that contributed to the developing hallmarks of cryptozoology, including Abominable Snowmen: Legend Come to Life (1961). Heuvelmans himself traced cryptozoology to the work of Anthonie Cornelis Oudemans, who theorized that a large unidentified species of seal was responsible for sea serpent reports. Cryptozoology is 'the study of hidden animals' (from Ancient Greek κρυπτός, kryptós, "hidden, secret"; ζῷον, zōion, "animal"; and λόγος, logos, "knowledge, study"). The term dates from 1959 or before; Heuvelmans attributes the coinage of the term cryptozoology to Sanderson. Following cryptozoology, the term cryptid was coined in 1983 by cryptozoologist J. E. Wall in the summer issue of the International Society of Cryptozoology newsletter. According to Wall, "[It has been] suggested that new terms be coined to replace sensational and often misleading terms like 'monster'. My suggestion is 'cryptid', meaning a living thing having the quality of being hidden or unknown ... describing those creatures which are (or may be) subjects of cryptozoological investigation." The Oxford English Dictionary defines the noun cryptid as "an animal whose existence or survival to the present day is disputed or unsubstantiated; any animal of interest to a cryptozoologist". While used by most cryptozoologists, the term cryptid is not used by academic zoologists. In a textbook aimed at undergraduates, academics Caleb W. Lack and Jacques Rousseau note that the subculture's focus on what it deems to be "cryptids" is a pseudoscientific extension of older belief in monsters and other similar entities from the folkloric record, yet with a "new, more scientific-sounding name: cryptids". While biologists regularly identify new species, cryptozoologists often focus on creatures from the folkloric record. Most famously, these include the Loch Ness Monster, Champ, Bigfoot, the chupacabra, as well as other "imposing beasts that could be labeled as monsters".
In their search for these entities, cryptozoologists may employ devices such as motion-sensitive cameras, night-vision equipment, and audio-recording equipment. While there have been attempts to codify cryptozoological approaches, unlike biology, zoology, botany, and other academic disciplines, "there are no accepted, uniform, or successful methods for pursuing cryptids". Some scholars have identified precursors to modern cryptozoology in certain medieval approaches to the folkloric record, and the psychology behind the cryptozoology approach has been the subject of academic study. Few cryptozoologists have a formal science education, and fewer still have a science background directly relevant to cryptozoology. Adherents often misrepresent the academic backgrounds of cryptozoologists. According to writer Daniel Loxton and paleontologist Donald Prothero, "[c]ryptozoologists have often promoted 'Professor Roy Mackal, PhD.' as one of their leading figures and one of the few with a legitimate doctorate in biology. What is rarely mentioned, however, is that he had no training that would qualify him to undertake competent research on exotic animals. This raises the specter of 'credential mongering', by which an individual or organization flaunts a person's graduate degree as proof of expertise, even though his or her training is not specifically relevant to the field under consideration." Besides Heuvelmans, Sanderson, and Mackal, other notable cryptozoologists with academic backgrounds include Grover Krantz, Karl Shuker, and Richard Greenwell. Historically, notable cryptozoologists have often identified instances featuring "irrefutable evidence" (such as Sanderson and Krantz), only for the evidence to be revealed as the product of a hoax. This may occur during a closer examination by experts or upon confession of the hoaxer. Expeditions Cryptozoologists have often led unsuccessful expeditions to find evidence of cryptids. Bigfoot researcher René Dahinden led searches into caves to find evidence of sasquatch, as early sasquatch legends claimed they lived in rocky areas. Despite the failure of these searches, he spent years trying to find proof of bigfoot. Lensgrave Adam Christoffer Knuth led an expedition to Lake Tele in the Congo to find the Mokele-mbembe in 2018. While the expedition was a failure, it discovered a new species of green algae. Young Earth creationism A subset of cryptozoology promotes the pseudoscience of Young Earth creationism, rejecting conventional science in favor of a literal Biblical interpretation and promoting concepts such as "living dinosaurs". Science writer Sharon A. Hill observes that the Young Earth creationist segment of cryptozoology is "well-funded and able to conduct expeditions with a goal of finding a living dinosaur that they think would invalidate evolution". Anthropologist Jeb J. Card says that "[c]reationists have embraced cryptozoology and some cryptozoological expeditions are funded by and conducted by creationists hoping to disprove evolution." In a 2013 interview, paleontologist Donald Prothero notes an uptick in creationist cryptozoologists. He observes that "[p]eople who actively search for Loch Ness monsters or Mokele Mbembe do it entirely as creationist ministers. They think that if they found a dinosaur in the Congo it would overturn all of evolution. It wouldn't. It would just be a late-occurring dinosaur, but that's their mistaken notion of evolution."
Citing a 2013 exhibit at the Creation Museum in Petersburg, Kentucky, an institution broadly dedicated to Young Earth creationism, which claimed that dragons were once biological creatures who walked the earth alongside humanity, religious studies academic Justin Mullis notes that "[c]ryptozoology has a long and curious history with Young Earth Creationism, with this new exhibit being just one of the most recent examples". Academic Paul Thomas analyzes the influence of and connections to cryptozoology in his 2020 study of the Creation Museum and the creationist theme park Ark Encounter. Thomas comments that, "while the Creation Museum and the Ark Encounter are flirting with pseudoarchaeology, coquettishly whispering pseudoarchaeological rhetoric, they are each fully in bed with cryptozoology" and observes that "[y]oung-earth creationists and cryptozoologists make natural bed fellows. As with pseudoarchaeology, both young-earth creationists and cryptozoologists bristle at the rejection of mainstream secular science and lament a seeming conspiracy to prevent serious consideration of their claims." Lack of critical media coverage Media outlets have often uncritically disseminated information from cryptozoologist sources, including newspapers that repeat false claims made by cryptozoologists or television shows that feature cryptozoologists as monster hunters (such as the popular and purportedly nonfiction American television show MonsterQuest, which aired from 2007 to 2010). Media coverage of purported "cryptids" often fails to provide more likely explanations, further propagating claims made by cryptozoologists. Reception and pseudoscience There is a broad consensus among academics that cryptozoology is a pseudoscience. The subculture is regularly criticized for reliance on anecdotal information and because, in the course of investigating animals that most scientists believe are unlikely to have existed, cryptozoologists do not follow the scientific method. No academic course of study nor university degree program grants the status of cryptozoologist, and the subculture is primarily the domain of individuals without training in the natural sciences. Anthropologist Jeb J. Card summarizes cryptozoology in a survey of pseudoscience and pseudoarchaeology. Card notes that "cryptozoologists often show their disdain and even hatred for professional scientists, including those who enthusiastically participated in cryptozoology", which he traces back to Heuvelmans's early "rage against critics of cryptozoology". He finds parallels between cryptozoology and other pseudosciences, such as ghost hunting and ufology, and compares the approach of cryptozoologists to colonial big-game hunters, and to aspects of European imperialism. According to Card, "[m]ost cryptids are framed as the subject of indigenous legends typically collected in the heyday of comparative folklore, though such legends may be heavily modified or worse. Cryptozoology's complicated mix of sympathy, interest, and appropriation of indigenous culture (or non-indigenous construction of it) is also found in New Age circles and dubious "Indian burial grounds" and other legends [...] invoked in hauntings such as the "Amityville" hoax [...]". In a 2011 column for The American Biology Teacher, then National Association of Biology Teachers president Dan Ward uses cryptozoology as an example of "technological pseudoscience" that may confuse students about the scientific method. Ward says that "Cryptozoology [...]
is not valid science or even science at all. It is monster hunting." Historian of science Brian Regal includes an entry for cryptozoology in his Pseudoscience: A Critical Encyclopedia (2009). Regal says that "as an intellectual endeavor, cryptozoology has been studied as much as cryptozoologists have sought hidden animals". In a 1992 issue of Folklore, folklorist Véronique Campion-Vincent says that "four currents can be distinguished in the study of mysterious animal appearances": "Forteans" ("compiler[s] of anomalies" such as via publications like the Fortean Times), "occultists" (which she describes as related to "Forteans"), "folklorists", and "cryptozoologists". Regarding cryptozoologists, Campion-Vincent says that "this movement seems to deserve the appellation of parascience, like parapsychology: the same corpus is reviewed; many scientists participate, but for those who have an official status of university professor or researcher, the participation is a private hobby". In her Encyclopedia of American Folklore, academic Linda Watts says that "folklore concerning unreal animals or beings, sometimes called monsters, is a popular field of inquiry" and describes cryptozoology as an example of "American narrative traditions" that "feature many monsters". In his analysis of cryptozoology, folklorist Peter Dendle says that "cryptozoology devotees consciously position themselves in defiance of mainstream science". In a paper published in 2013, Dendle refers to cryptozoologists as "contemporary monster hunters" who "keep alive a sense of wonder in a world that has been very thoroughly charted, mapped, and tracked, and that is largely available for close scrutiny on Google Earth and satellite imaging" and notes that "on the whole the devotion of substantial resources for this pursuit betrays a lack of awareness of the basis for scholarly consensus (largely ignoring, for instance, evidence of evolutionary biology and the fossil record)." According to historian Mike Dash, few scientists doubt there are thousands of unknown animals, particularly invertebrates, awaiting discovery; however, cryptozoologists are largely uninterested in researching and cataloging newly discovered species of ants or beetles, instead focusing their efforts towards "more elusive" creatures that have often defied decades of work aimed at confirming their existence. Paleontologist George Gaylord Simpson (1984) lists cryptozoology among examples of human gullibility, along with creationism. Paleontologist Donald Prothero (2007) cites cryptozoology as an example of pseudoscience and categorizes it, along with Holocaust denial and UFO abduction claims, as aspects of American culture that are "clearly baloney". In Scientifical Americans: The Culture of Amateur Paranormal Researchers (2017), Hill surveys the field and discusses aspects of the subculture, noting internal attempts at creating more scientific approaches and the involvement of Young Earth creationists and a prevalence of hoaxes. She concludes that many cryptozoologists are "passionate and sincere in their belief that mystery animals exist. As such, they give deference to every report of a sighting, often without critical questioning. As with the ghost seekers, cryptozoologists are convinced that they will be the ones to solve the mystery and make history. With the lure of mystery and money undermining diligent and ethical research, the field of cryptozoology has serious credibility problems."
Organizations There have been several organizations, of varying types, dedicated or related to cryptozoology. These include: International Fortean Organization – a network of professional Fortean researchers and writers based in the United States International Society of Cryptozoology – an American organisation that existed from 1982 to 1998 Kosmopoisk – a Russian organisation whose interests include cryptozoology and ufology The Centre for Fortean Zoology – an English organization centered around hunting for unknown animals Museums and exhibitions The zoological and cryptozoological collection and archive of Bernard Heuvelmans is held at the Musée Cantonal de Zoologie in Lausanne and consists of around "1,000 books, 25,000 files, 25,000 photographs, correspondence, and artifacts". In 2006, the Bates College Museum of Art held the "Cryptozoology: Out of Time Place Scale" exhibition, which compared cryptozoological creatures with recently extinct animals like the thylacine and extant taxa like the coelacanth, once thought long extinct (living fossils). The following year, the American Museum of Natural History put on a mixed exhibition of imaginary and extinct animals, including the elephant bird Aepyornis maximus and the great ape Gigantopithecus blacki, under the name "Mythic Creatures: Dragons, Unicorns and Mermaids". In 2003, cryptozoologist Loren Coleman opened the International Cryptozoology Museum in Portland, Maine. The museum houses more than 3,000 cryptozoology-related artifacts. See also Ethnozoology Fearsome critters, fabulous beasts that were said to inhabit the timberlands of North America Folk belief List of cryptids, a list of cryptids notable within cryptozoology List of cryptozoologists, a list of notable cryptozoologists Scientific skepticism References Sources Bartholomew, Robert E. 2012. The Untold Story of Champ: A Social History of America's Loch Ness Monster. State University of New York Press. Campion-Vincent, Véronique. 1992. "Appearances of Beasts and Mystery-cats in France". Folklore 103.2: 160–183. Card, Jeb J. 2016. "Steampunk Inquiry: A Comparative Vivisection of Discovery Pseudoscience" in Card, Jeb J. and Anderson, David S. Lost City, Found Pyramid: Understanding Alternative Archaeologies and Pseudoscientific Practices, pp. 24–25. University of Alabama Press. Church, Jill M. 2009. "Cryptozoology" in H. James Birx (ed.). Encyclopedia of Time: Science, Philosophy, Theology & Culture, Volume 1, pp. 251–252. SAGE Publications. Dash, Mike. 2000. Borderlands: The Ultimate Exploration of the Unknown. Overlook Press. Dendle, Peter. 2006. "Cryptozoology in the Medieval and Modern Worlds". Folklore, Vol. 117, No. 2 (Aug. 2006), pp. 190–206. Taylor & Francis. Dendle, Peter. 2013. "Monsters and the Twenty-First Century" in The Ashgate Research Companion to Monsters and the Monstrous. Ashgate Publishing. Hill, Sharon A. 2017. Scientifical Americans: The Culture of Amateur Paranormal Researchers. McFarland. Lack, Caleb W. and Jacques Rousseau. 2016. Critical Thinking, Science, and Pseudoscience: Why We Can't Trust Our Brains. Springer. Lee, Jeffrey A. 2000. The Scientific Endeavor: A Primer on Scientific Principles and Practice. Benjamin Cummings. Loxton, Daniel and Donald Prothero. 2013. Abominable Science: Origins of the Yeti, Nessie, and other Famous Cryptids. Columbia University Press. Mullis, Justin. 2019. "Cryptofiction! Science Fiction and the Rise of Cryptozoology" in Caterine, Darryl & John W. Morehead (ed.).
The Paranormal and Popular Culture: A Postmodern Religious Landscape, pp. 240–252. Routledge. Mullis, Justin. 2021. "Thomas Jefferson: The First Cryptozoologist?" in Joseph P. Laycock & Natasha L. Mikles (eds). Religion, Culture, and the Monstrous: Of Gods and Monsters, pp. 185–197. Lexington Books. Paxton, C.G.M. 2011. "Putting the 'ology' into cryptozoology". Biofortean Notes, Vol. 7, pp. 7–20, 310. Prothero, Donald R. 2007. Evolution: What the Fossils Say and Why It Matters. Columbia University Press. Radford, Benjamin. 2014. "Bigfoot at 50: Evaluating a Half-Century of Bigfoot Evidence" in Farha, Bryan (ed.). Pseudoscience and Deception: The Smoke and Mirrors of Paranormal Claims. University Press of America. Regal, Brian. 2009. Pseudoscience: A Critical Encyclopedia. ABC-CLIO. Regal, Brian. 2011a. "Cryptozoology" in McCormick, Charlie T. and Kim Kennedy (ed.). Folklore: An Encyclopedia of Beliefs, Customs, Tales, Music, and Art, pp. 326–329. 2nd edition. ABC-CLIO. Regal, Brian. 2011b. Sasquatch: Crackpots, Eggheads, and Cryptozoology. Springer. Roesch, Ben S. & John L. Moore. 2002. "Cryptozoology" in Michael Shermer (ed.). The Skeptic Encyclopedia of Pseudoscience: Volume One, pp. 71–78. ABC-CLIO. Shea, Rachel Hartigan. 2013. "The Science Behind Bigfoot and Other Monsters". National Geographic, September 9, 2013. Online. Shermer, Michael. 2003. "Show Me the Body" in Scientific American, issue 288 (5), p. 27. Online. Simpson, George Gaylord. 1984. "Mammals and Cryptozoology". Proceedings of the American Philosophical Society, Vol. 128, No. 1 (Mar. 30, 1984), pp. 1–19. American Philosophical Society. Thomas, Paul. 2020. Storytelling the Bible at the Creation Museum, Ark Encounter, and Museum of the Bible. Bloomsbury Publishing. Uscinski, Joseph. 2020. Conspiracy Theories: A Primer. Rowman & Littlefield Publishers. Wall, J. E. 1983. The ISC Newsletter, vol. 2, issue 10, p. 10. International Society of Cryptozoology. Ward, Daniel. 2011. "From the President". The American Biology Teacher 73.8: 440. Watts, Linda S. 2007. Encyclopedia of American Folklore. Facts on File. External links Forteana Pseudoscience Subcultures Young Earth creationism Zoology
Cryptozoology
Biology
4,403
945,780
https://en.wikipedia.org/wiki/Bellcrank
A bellcrank is a type of crank that changes motion through an angle. The angle can range from 0 to 360 degrees, but 90-degree and 180-degree bellcranks are most common. The name comes from its first use, changing the vertical pull on a rope to a horizontal pull on the striker of a bell to sound it. Design A typical 90-degree bellcrank consists of an L-shaped crank pivoted where the two arms of the L meet. Moving rods or cables are attached to the outer ends of the L. When one is pulled, the L rotates around the pivot point, pulling on the other rod. A typical 180-degree bellcrank consists of a straight bar that pivots at or near its center. When one rod is pulled or pushed, the bar rotates around the pivot point, pulling or pushing on the other rod. Changing the length of the bellcrank's arms changes the mechanical advantage of the system. Many applications do not change the direction of motion but instead amplify a force "in line", which a bellcrank can do in a limited space. There is a tradeoff between range of motion, linearity of motion, and size: the greater the angle traversed by the crank, the more the motion ratio changes, and the more non-linear the motion becomes. Applications Aircraft Bellcranks are often used in aircraft flight control systems to connect the pilot's controls to the control surfaces. For example, on light aircraft, the rudder often has a bellcrank (also called a control horn) whose pivot point is the rudder hinge. A cable connects one of the pilot's rudder pedals to one side of the bellcrank. When the pilot pushes the rudder pedal, the cable pulls the bellcrank, causing the rudder to rotate. The opposite rudder pedal is connected to the other end of the bellcrank to rotate the rudder in the opposite direction. Architectural Bellcrank mechanisms were installed at the top of entryway stairs in multi-unit Victorian and Edwardian homes (to 1930), particularly in the San Francisco Bay Area, to allow residents to open and close the doors remotely so they would not need to walk down the stairs to welcome guests. Automotive Bellcranks are also seen in automotive applications, such as in the linkage connecting the throttle pedal to the carburetor or connecting the brake pedal to the master cylinder. In vehicle suspensions, bellcranks are used in pullrod and pushrod suspensions in cars or in the Christie suspension in tanks. More vertical suspension designs such as MacPherson struts may not be feasible in some vehicle designs due to space, aerodynamic, or other design constraints; bellcranks translate the vertical motion of the wheel into horizontal motion, allowing the suspension to be mounted transversely or longitudinally within the vehicle. Bicycles Bellcranks are used in some internally geared hub assemblies to select gears. The motion from a Bowden cable is translated by a bellcrank to a push rod, which selects which portion of the epicyclic gears is driven by the bicycle's rear sprocket. References External links daerospace.com. Mechanical engineering Linkages (mechanical)
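The lever relations behind the mechanical-advantage and linearity remarks above can be made concrete. A minimal sketch (arm lengths and sweep angles are illustrative values, not from the article), treating the bellcrank as an ideal frictionless lever:

```python
import math

def bellcrank_ratios(input_arm_mm: float, output_arm_mm: float):
    """Ideal lever relations for a bellcrank pivoted at the corner.

    Equal torque on both arms gives:
      force ratio  = input_arm / output_arm  (output force per unit input force)
      motion ratio = output_arm / input_arm  (output travel per unit input travel)
    """
    return input_arm_mm / output_arm_mm, output_arm_mm / input_arm_mm

# A 60 mm input arm driving a 30 mm output arm doubles force and halves travel.
force_ratio, motion_ratio = bellcrank_ratios(60.0, 30.0)
print(f"force x{force_ratio:g}, travel x{motion_ratio:g}")

# Non-linearity with sweep angle: the projected (useful) travel of an arm of
# length r rotated by theta is r*sin(theta), which drifts away from the
# small-angle approximation r*theta as the swept angle grows.
r = 60.0
for deg in (5, 15, 30, 45):
    theta = math.radians(deg)
    print(f"{deg:>2} deg: projected {r * math.sin(theta):6.2f} mm"
          f" vs linear {r * theta:6.2f} mm")
```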
Bellcrank
Physics,Engineering
679
77,768,091
https://en.wikipedia.org/wiki/Masaaki%20Morisita
Masaaki Morisita (27 January 1913 – 25 February 1997) was a Japanese ecologist and professor emeritus of animal ecology at Kyoto University. He is known as the father of population ecology in Japan. Morisita's overlap index and Morisita's index of dispersion are named after him. In addition to his work on statistical ecology, he also studied the natural history of Japanese ants. With several other myrmecologists, he produced a complete catalogue of ants in Japan. Three species of ants are named after him (Pyramica morisitai, Proceratium morisitai, and Lasius morisitai). Biography Morisita was born in Osaka and spent his high school years in Kōchi. He obtained a bachelor's degree in agriculture at Kyoto University in 1932, and later a doctorate in science in 1950. He studied the water strider Gerris lacustris as part of his graduate studies. After graduating, he worked as a professor in the department of biology at Kyushu University. He later became a professor in the zoology department of Kyoto University. In 1976, he retired and was named professor emeritus. He published the book Studies on Methods of Estimating Population Density, Biomass, and Productivity in Terrestrial Animals in 1977. He was awarded the Zoological Society of Japan Award in 1964, and the Order of the Rising Sun, 3rd Class in 1986. After his death, the Morisita Memorial & Research Foundation was named in his honor. Name Like many academics from Kyoto University, Morisita had a preference for the spelling of his name in foreign-language papers, writing it as "Morisita" using Kunrei-shiki romanization, rather than the English-based Hepburn romanization "Morishita". References External links Morisita Masaaki Research Memorial Museum archives (in Japanese) 1913 births 1997 deaths Kyoto University alumni Ecologists Japanese zoologists Myrmecologists Scientists from Osaka
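Morisita's index of dispersion, mentioned above, has a compact closed form: I_delta = n * sum(x_i * (x_i - 1)) / (N * (N - 1)), where n is the number of sampling quadrats, x_i the count in quadrat i, and N the total count; values near 1 indicate random dispersion and values above 1 indicate clumping. A minimal sketch (the quadrat counts are illustrative):

```python
def morisita_dispersion(counts: list[int]) -> float:
    """Morisita's index of dispersion over quadrat counts.

    I_delta = n * sum(x * (x - 1)) / (N * (N - 1)), with n quadrats and
    N total individuals; ~1 = random, >1 = clumped, <1 = evenly spaced.
    """
    n = len(counts)
    total = sum(counts)
    if total < 2:
        raise ValueError("need at least two individuals in total")
    return n * sum(x * (x - 1) for x in counts) / (total * (total - 1))

# The same 12 individuals, spread evenly versus packed into two quadrats:
print(morisita_dispersion([3, 3, 3, 3]))  # ~0.73: more even than random
print(morisita_dispersion([6, 6, 0, 0]))  # ~1.82: strongly clumped
```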
Masaaki Morisita
Environmental_science
390
617,003
https://en.wikipedia.org/wiki/Autoradiograph
An autoradiograph is an image on an X-ray film or nuclear emulsion produced by the pattern of decay emissions (e.g., beta particles or gamma rays) from a distribution of a radioactive substance. Alternatively, the autoradiograph is also available as a digital image (digital autoradiography), due to the recent development of scintillation gas detectors or rare-earth phosphorimaging systems. The film or emulsion is apposed to the labeled tissue section to obtain the autoradiograph (also called an autoradiogram). The auto- prefix indicates that the radioactive substance is within the sample, as distinguished from the case of historadiography or microradiography, in which the sample is marked using an external source. Some autoradiographs can be examined microscopically for localization of silver grains (such as on the interiors or exteriors of cells or organelles), in which case the process is termed micro-autoradiography. For example, micro-autoradiography was used to examine whether atrazine was being metabolized by the hornwort plant or by epiphytic microorganisms in the biofilm layer surrounding the plant. Applications In biology, this technique may be used to determine the tissue (or cell) localization of a radioactive substance, either introduced into a metabolic pathway, bound to a receptor or enzyme, or hybridized to a nucleic acid. Applications for autoradiography are broad, ranging from biomedical to environmental sciences to industry. Receptor autoradiography The use of radiolabeled ligands to determine the tissue distributions of receptors is termed either in vivo or in vitro receptor autoradiography, depending on whether the ligand is administered into the circulation (with subsequent tissue removal and sectioning) or applied to the tissue sections, respectively. Once the receptor density is known, in vitro autoradiography can also be used to determine the anatomical distribution and affinity of a radiolabeled drug towards the receptor. For in vitro autoradiography, the radioligand is applied directly to frozen tissue sections without administration to the subject. It therefore cannot fully capture the distribution, metabolism, and degradation of the ligand in the living body. But because targets in the cryosections are widely exposed and in direct contact with the radioligand, in vitro autoradiography is still a quick and easy method to screen drug candidates and PET and SPECT ligands. The ligands are generally labeled with 3H (tritium), 18F (fluorine), 11C (carbon) or 125I (radioiodine). Compared to in vitro methods, ex vivo autoradiography is performed after administration of the radioligand to the body, which reduces artifacts and more closely reflects the internal environment. The distribution of RNA transcripts in tissue sections by the use of radiolabeled, complementary oligonucleotides or ribonucleic acids ("riboprobes") is called in situ hybridization histochemistry. Radioactive precursors of DNA and RNA, [3H]-thymidine and [3H]-uridine respectively, may be introduced to living cells to determine the timing of several phases of the cell cycle. RNA or DNA viral sequences can also be located in this fashion. These probes are usually labeled with 32P, 33P, or 35S. In the realm of behavioral endocrinology, autoradiography can be used to determine hormonal uptake and indicate receptor location; an animal can be injected with a radiolabeled hormone, or the study can be conducted in vitro.
Rate of DNA replication The rate of DNA replication in a mouse cell growing in vitro was measured by autoradiography as 33 nucleotides per second. The rate of phage T4 DNA elongation in phage-infected E. coli was also measured by autoradiography, as 749 nucleotides per second during the period of exponential DNA increase. Detection of protein phosphorylation Phosphorylation is the posttranslational addition of a phosphate group to specific amino acids of a protein, and such modification can lead to a drastic change in the stability or the function of the protein in the cell. Protein phosphorylation can be detected on an autoradiograph after incubating the protein in vitro with the appropriate kinase and γ-32P-ATP. The radiolabeled phosphate of the latter is incorporated into the protein, which is isolated via SDS-PAGE and visualized on an autoradiograph of the gel. (See figure 3 of a recent study showing that CREB-binding protein is phosphorylated by HIPK2.) Detection of sugar movement in plant tissue In plant physiology, autoradiography can be used to determine sugar accumulation in leaf tissue. Sugar accumulation, as it relates to autoradiography, can describe the phloem-loading strategy used in a plant. For example, if sugars accumulate in the minor veins of a leaf, it is expected that the leaves have few plasmodesmatal connections, which is indicative of apoplastic movement, or an active phloem-loading strategy. Sugars, such as sucrose, fructose, or mannitol, are radiolabeled with [14C], and then absorbed into leaf tissue by simple diffusion. The leaf tissue is then exposed to autoradiographic film (or emulsion) to produce an image. Images will show distinct vein patterns if sugar accumulation is concentrated in leaf veins (apoplastic movement), or a static-like pattern if sugar accumulation is uniform throughout the leaf (symplastic movement). Other techniques This autoradiographic approach contrasts with techniques such as PET and SPECT, where the exact 3-dimensional localization of the radiation source is provided by careful use of coincidence counting, gamma counters, and other devices. Krypton-85 is used to inspect aircraft components for small defects. Krypton-85 is allowed to penetrate small cracks, and then its presence is detected by autoradiography. The method is called "krypton gas penetrant imaging". The gas penetrates smaller openings than the liquids used in dye penetrant inspection and fluorescent penetrant inspection. Historical events The task of radioactive decontamination following the Baker nuclear test at Bikini Atoll during Operation Crossroads in 1946 was far more difficult than the U.S. Navy had prepared for. Though the task's futility became apparent and the danger to cleanup crews mounted, Colonel Stafford Warren, in charge of radiation safety, had difficulty persuading Vice Admiral William H. P. Blandy to abandon the cleanup and with it the surviving target ships. On August 10, Warren showed Blandy an autoradiograph made by leaving a surgeonfish from the lagoon on a photographic plate overnight. The film was exposed by alpha radiation produced from the fish's scales, evidence that plutonium, mimicking calcium, had been distributed throughout the fish. Blandy promptly ordered that all further decontamination work be discontinued. Warren wrote home, "A self X ray of a fish ... did the trick." References General references Original publication by sole inventor Askins, Barbara S. (1 November 1976).
"Photographic image intensification by autoradiography". Applied Optics. 15 (11): 2860–2865. Bibcode:1976ApOpt..15.2860A. doi:10.1364/ao.15.002860. Inline citations Further reading "Patent US4101780 Treating silver with a radioactive sulfur compound such as thiourea or derivatives". Google Patents. Retrieved 26 June 2014. Radiobiology Radiography
Autoradiograph
Chemistry,Biology
1,594
592,687
https://en.wikipedia.org/wiki/Network%20security
Network security comprises the security controls, policies, processes, and practices adopted to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies, and individuals. Networks can be private, such as those within a company, or open to public access. Network security is practiced in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing the operations being done. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password. Network security concept Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan). Once authenticated, a firewall enforces access policies such as what services are allowed to be accessed by the network users. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevention system (IPS) helps detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor the network, and traffic may be logged (for example, with Wireshark) for audit purposes and later high-level analysis. Newer systems combining unsupervised machine learning with full network traffic analysis can detect active network attackers, whether malicious insiders or targeted external attackers that have compromised a user machine or account. Communication between two hosts using a network may be encrypted to maintain security and privacy. Honeypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Honeypots are placed at a point in the network where they appear vulnerable and undefended, but they are actually isolated and monitored. Techniques used by the attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot. A honeypot can also direct an attacker's attention away from legitimate servers.
A honeypot encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a honeypot, a honeynet is a network set up with intentional vulnerabilities. Its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. A honeynet typically contains one or more honeypots. Previous research on network security was mostly about using tools to secure transactions and information flow, and how well users knew about and used these tools. However, more recently, the discussion has expanded to consider information security in the broader context of the digital economy and society. This indicates that it is not just about individual users and tools; it is also about the larger culture of information security in our digital world. Security management Security management for networks is different for all kinds of situations. A home or small office may only require basic security, while large businesses may require high-maintenance and advanced software and hardware to prevent malicious attacks from hacking and spamming. In order to minimize susceptibility to malicious attacks from external threats to the network, corporations often employ tools which carry out network security verifications. Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes. Types of attack Networks are subject to attacks from malicious sources. Attacks fall into two categories: "passive", when a network intruder intercepts data traveling through the network, and "active", in which an intruder initiates commands to disrupt the network's normal operation or to conduct reconnaissance and lateral movement to find and gain access to assets available via the network. Types of attack include passive network interception and active attacks such as network viruses (including router viruses) and data modification. See also References Further reading Case Study: Network Clarity, SC Magazine 2014 Cisco. (2011). What is network security?. Retrieved from cisco.com Security of the Internet (The Froehlich/Kent Encyclopedia of Telecommunications vol. 15. Marcel Dekker, New York, 1997, pp. 231–255.) Introduction to Network Security, Matt Curtin, 1997. Security Monitoring with Cisco Security MARS, Gary Halleen/Greg Kellogg, Cisco Press, Jul. 6, 2007. Self-Defending Networks: The Next Generation of Network Security, Duane DeCapite, Cisco Press, Sep. 8, 2006. Security Threat Mitigation and Response: Understanding CS-MARS, Dale Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006. Securing Your Business with Cisco ASA and PIX Firewalls, Greg Abelar, Cisco Press, May 27, 2005. Deploying Zone-Based Firewalls, Ivan Pepelnjak, Cisco Press, Oct. 5, 2006. Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman, Radia Perlman, and Mike Speciner, Prentice-Hall, 2002. Network Infrastructure Security, Angus Wong and Alan Yeung, Springer, 2009. Cybersecurity engineering
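The "something the user has" factor described earlier is most often a one-time-code generator such as a hardware token or authenticator app. A minimal sketch of the standard time-based one-time password (TOTP) algorithm from RFC 6238 that such devices typically implement; the shared secret shown is an illustrative placeholder:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)  # time-step number
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and user device share the secret; a matching 6-digit code proves
# possession of the device, the second factor on top of the password.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret for demonstration
```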
Network security
Technology,Engineering
1,278
36,847,391
https://en.wikipedia.org/wiki/57%20Cygni
57 Cygni is a close binary star system in the constellation Cygnus, located about 530 light years from Earth. It is visible to the naked eye as a blue-white hued star with a baseline apparent visual magnitude of 4.80. The pair have a magnitude difference of 0.34. This system is moving closer to the Earth with a heliocentric radial velocity of −21 km/s. This is a double-lined spectroscopic binary with an orbital period of 2.85 days and an eccentricity of 0.15. The components show a steady change in their longitude of periastron, indicating apsidal motion. The system does not form an eclipsing binary; the orbital inclination is around 48°. Both components are B-type main-sequence stars with a stellar classification of B5 V. References B-type main-sequence stars Spectroscopic binaries Cygnus (constellation) Durchmusterung objects Cygni, 57 199081 103089 8001 Suspected variables
57 Cygni
Astronomy
213
13,056,002
https://en.wikipedia.org/wiki/Saab%20Direct%20Ignition
Saab Direct Ignition is a capacitor discharge ignition developed by Saab Automobile, then known as Saab-Scania, and Mecel AB during the 1980s. It was first shown in 1985 and put into series production in the Saab 9000 in 1988. One of the first uses of the system was in a Formula Three racing engine (based on the B202) developed with the help of engine builder John Nicholson and first shown in the spring of 1985. The system has been revised several times over the years. The ignition electronics together with the ignition coils form a single transformer-oil-filled cassette (or two cassettes in the case of a V6 engine) which is placed directly on the spark plugs, without the need for a distributor. It was later integrated with the Saab Trionic engine management systems as one of the first ion-sensing ignition systems on a production car. The system puts a low voltage over the spark plugs when they are not being fired to measure ionization in the cylinders. The ionic current measurement replaces the ordinary knock sensor and misfire measurement functions. Direct Ignition Cassette The spark plugs are directly coupled to the "DIC" (or "IDM"), which houses the ignition coils and the electronics that measure cylinder ionization for use by the Trionic engine management system. See also Direct & Distributorless Ignition Saab H engine Saab 900NG (2nd Generation 900, 1994-1998) Saab 9-3 (1st Generation, 1999-2002) References Ignition systems Engine technology Automotive technology tradenames Saab engines
Saab Direct Ignition
Technology
320
426,738
https://en.wikipedia.org/wiki/Radar%20speed%20gun
A radar speed gun, also known as a radar gun, speed gun, or speed trap gun, is a device used to measure the speed of moving objects. It is commonly used by police to check the speed of moving vehicles while conducting traffic enforcement, and in professional sports to measure speeds such as those of baseball pitches, tennis serves, and cricket bowls. A radar speed gun is a Doppler radar unit that may be handheld, vehicle-mounted, or static. It measures the speed of the objects at which it is pointed by detecting a change in frequency of the returned radar signal caused by the Doppler effect, whereby the frequency of the returned signal is increased in proportion to the object's speed of approach if the object is approaching, and lowered if the object is receding. Such devices are frequently used for speed limit enforcement, although more modern LIDAR speed gun instruments, which use pulsed laser light instead of radar, began to replace radar guns during the first decade of the twenty-first century, because of limitations associated with small radar systems. History The radar speed gun was invented by John L. Barker Sr. and Ben Midlock, who developed radar for the military while working for the Automatic Signal Company (later Automatic Signal Division of LFE Corporation) in Norwalk, Connecticut during World War II. Originally, Automatic Signal was approached by Grumman to solve the specific problem of terrestrial landing gear damage on the Consolidated PBY Catalina amphibious aircraft. Barker and Midlock cobbled a Doppler radar unit from coffee cans soldered shut to make microwave resonators. The unit was installed at the end of the runway at Grumman's Bethpage, New York facility, and aimed directly upward to measure the sink rate of landing PBYs. After the war, Barker and Midlock tested radar on the Merritt Parkway. In 1947, the system was tested by the Connecticut State Police in Glastonbury, Connecticut, initially for traffic surveys and issuing warnings to drivers for excessive speed. Starting in February 1949, the state police began to issue speeding tickets based on the speed recorded by the radar device. In 1948, radar was also used in Garden City, New York. Mechanism Doppler effect Radar speed guns use Doppler radar to perform speed measurements. Radar speed guns, like other types of radar, consist of a radio transmitter and receiver. They send out a radio signal in a narrow beam, then receive the same signal back after it bounces off the target object. Due to a phenomenon called the Doppler effect, if the object is moving toward or away from the gun, the frequency of the reflected radio waves when they come back is different from the transmitted waves. When the object is approaching the radar, the frequency of the return waves is higher than the transmitted waves; when the object is moving away, the frequency is lower. From that difference, the radar speed gun can calculate the speed of the object from which the waves have been bounced. This speed is given by the following equation:

$v = \frac{c \, \Delta f}{2 f}$

where $c$ is the speed of light, $f$ is the emitted frequency of the radio waves, and $\Delta f$ is the difference in frequency between the radio waves that are emitted and those received back by the gun. This equation holds precisely only when object speeds are low compared to that of light; in everyday situations this is the case, and the velocity of an object is directly proportional to the difference in frequency.
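A quick numeric check of the relation above (the carrier and beat frequencies are illustrative values; 24.15 GHz lies within the K band discussed below):

```python
C = 299_792_458.0  # speed of light, m/s

def target_speed(f_emitted_hz: float, f_shift_hz: float) -> float:
    """Target speed from the Doppler shift of a reflected radar signal.

    The wave is shifted twice (outbound and on reflection), hence the factor
    of 2:  delta_f = 2 * f * v / c   =>   v = c * delta_f / (2 * f).
    """
    return C * f_shift_hz / (2.0 * f_emitted_hz)

# A K-band gun at 24.15 GHz measuring a 4,830 Hz beat signal:
v = target_speed(24.15e9, 4830.0)
print(f"{v:.1f} m/s = {v * 3.6:.1f} km/h = {v / 0.44704:.1f} mph")  # ~30 m/s
```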
Stationary radar After the returning waves are received, a signal with a frequency equal to this difference is created by mixing the received radio signal with a little of the transmitted signal. Just as two different musical notes played together create a beat note at the difference in frequency between them, so when these two radio signals are mixed they create a "beat" signal (called a heterodyne). An electrical circuit then measures this frequency using a digital counter to count the number of cycles in a fixed time period, and displays the number on a digital display as the object's speed. Since this type of speed gun measures the difference in speed between a target and the gun itself, the gun must be stationary in order to give a correct reading. If a measurement is made from a moving car, it will give the difference in speed between the two vehicles, not the speed of the target relative to the road, so a different system has been designed to work from moving vehicles. Moving radar In so-called "moving radar", the radar antenna receives reflected signals from both the target vehicle and stationary background objects such as the road surface, nearby road signs, guard rails, and streetlight poles. Instead of comparing the frequency of the signal reflected from the target with the transmitted signal, it compares the target signal with this background signal. The frequency difference between these two signals gives the true speed of the target vehicle. Design considerations Modern radar speed guns normally operate at X, K, Ka, and (in Europe) Ku bands. Radar guns that operate using the X band (8 to 12 GHz) frequency range are becoming less common because they produce a strong and easily detectable beam. Also, most automatic doors utilize radio waves in the X band range and can possibly affect the readings of police radar. As a result, K band (18 to 27 GHz) and Ka band (27 to 40 GHz) are most commonly used by police agencies. Some motorists install radar detectors which can alert them to the presence of a speed trap ahead, and the microwave signals from radar may also change the quality of reception of AM and FM radio signals when tuned to a weak station. For these reasons, hand-held radar typically includes an on-off trigger, and the radar is only turned on when the operator is about to make a measurement. Radar detectors are illegal in some areas. Limitations Traffic radar comes in many models. Hand-held units are mostly battery powered, and for the most part are used as stationary speed enforcement tools. Stationary radar can be mounted in police vehicles and may have one or two antennae. Moving radar is employed, as the name implies, when a police vehicle is in motion. It can be very sophisticated: able to track vehicles approaching and receding, both in front of and behind the patrol vehicle, and able to track multiple targets at once. It can also track the fastest vehicle in the selected radar beam, front or rear. However, there are a number of limitations to the use of radar speed guns. For example, user training and certification are required so that a radar operator can use the equipment effectively. Trainees are required to consistently estimate vehicle speed visually to within +/-2 mph of the actual target speed; for example, if the target's actual speed is 30 mph, then the operator must be able to consistently estimate it as falling between 28 and 32 mph.
Stationary traffic enforcement radar must occupy a location above or to the side of the road, so the user must understand trigonometry to accurately estimate vehicle speed as the direction changes while a single vehicle moves within the field of view. The actual vehicle speed and the radar measurement are thus rarely the same, due to what is known as the cosine effect. For all practical purposes this difference is inconsequential, generally less than 1 mph, as police are trained to position the radar to minimize the inaccuracy; when present, the error is always in favor of the driver, reporting a lower than actual speed. Additionally, the placement of the radar is important, to avoid large reflective surfaces near the radar. Such reflective surfaces can create a multi-path scenario in which the radar beam is reflected off an unintended reflective target, finds another target, and returns, causing a reading that can be confused with the traffic being monitored. However, MythBusters did an episode on trying to get the gun to produce incorrect readings by changing the surface of the passing object and found no significant effect. Radar speed guns do not differentiate between targets in traffic, and proper operator training is essential for accurate speed enforcement. This inability to differentiate among targets in the radar's field of view is the primary reason for the operator being required to consistently and accurately estimate target speeds visually to within +/-2 mph: for example, if there are seven targets in the radar's field of view, the operator visually estimates six of them at approximately 40 mph and one at approximately 55 mph, and the radar unit displays a reading of 56 mph, then it is clear which target's speed the unit is measuring. In moving radar operation, another potential limitation occurs when the radar's patrol speed locks onto other moving targets rather than the actual ground speed. This can occur if the position of the radar is too close to a larger reflective target such as a tractor trailer. To help alleviate this, secondary speed inputs from the vehicle's CAN bus or VSS signal, or a GPS-measured speed, can reduce errors by providing a secondary speed against which to compare the measured speed. Size The primary limitation of hand-held and mobile radar devices is size. An antenna diameter of less than several feet limits directionality, which can only partly be compensated for by increasing the frequency of the wave. Size limitations can cause hand-held and mobile radar devices to produce measurements from multiple objects within the field of view of the user. The antenna on some of the most common hand-held devices is only in diameter. The beam of energy produced by an antenna of this size using X-band frequencies occupies a cone that extends about 22 degrees surrounding the line of sight, 44 degrees in total width. This beam is called the main lobe. There is also a side lobe extending from 22 to 66 degrees away from the line of sight, and other lobes as well, but side lobes are about 20 times (13 dB) less sensitive than the main lobe, although they will detect moving objects close by. The primary field of view is about 130 degrees wide. K-band reduces this field of view to about 65 degrees by increasing the frequency of the wave.
Ka-band reduces this further to about 40 degrees. Side lobe detections can be eliminated using side lobe blanking, which narrows the field of view, but the additional antennas and complex circuitry impose size and price constraints that limit this to applications for the military, air traffic control, and weather agencies. Mobile weather radar is mounted on semi-trailer trucks in order to narrow the beam. Distance A second limitation for hand-held devices is that they have to use continuous-wave radar to be light enough to be mobile. Speed measurements are only reliable when the distance at which a specific measurement has been recorded is known. Distance measurements require pulsed operation or cameras when more than one moving object is within the field of view. Continuous-wave radar may be aimed directly at a vehicle 100 yards away but produce a speed measurement from a second vehicle 1 mile away when pointed down a straight roadway. This once again falls back on the training and certification requirement for consistent and accurate visual estimation, so that operators can be certain which object's speed the device has measured, since distance information is unavailable with continuous-wave radar. Some sophisticated devices may produce different speed measurements from multiple objects within the field of view. This is used to allow the speed gun to be used from a moving vehicle, where a moving and a stationary object must be targeted simultaneously, and some of the most sophisticated units are capable of displaying up to four separate target speeds while operating in moving mode, once again emphasizing the importance of the operator's ability to consistently and accurately estimate speed visually. Environment The environment and locality in which a measurement is taken can also play a role. Using a hand-held radar to scan traffic on an empty road while standing in the shade of a large tree, for example, might risk detecting the motion of the leaves and branches if the wind is blowing hard (side lobe detection). There may be an unnoticed airplane overhead, particularly if there is an airport nearby, which again emphasizes the importance of proper operator training. Associated cameras Conventional radar gun limitations can be corrected with a camera aimed along the line of sight. Cameras are associated with automated ticketing machines (known in the UK as speed cameras) where the radar is used to trigger a camera. The radar speed threshold is set at or above the maximum legal vehicle speed. The radar triggers the camera to take several pictures when a nearby object exceeds this speed. Two pictures are required to determine vehicle speed using roadway survey markings. This can be reliable for traffic in city environments when multiple moving objects are within the field of view. In this case, however, it is the camera and its timing information that determine the speed of an individual vehicle; the radar gun simply alerts the camera to start recording. Newer instruments Laser devices, such as a LIDAR speed gun, are capable of producing reliable range and speed measurements in typical urban and suburban traffic environments without the site survey limitation and cameras. This is reliable in city traffic because LIDAR has directionality similar to a typical firearm: the beam is shaped more like a pencil and produces a measurement only from the object at which it is aimed.
See also LIDAR detector Radar detector References American inventions Traffic enforcement systems Measuring instruments Radar Traffic law
Radar speed gun
Technology,Engineering
2,681
70,449,944
https://en.wikipedia.org/wiki/Legal%20gender
Legal gender, or legal sex, is a sex or gender that is recognized under the law. Biological sex, sex reassignment and gender identity are used to determine legal gender. The details vary by jurisdiction. Legal gender identity is fundamental to many legal rights and obligations, including access to healthcare, work, and family relationships, as well as issues of personal identification and documentation. The complexities involved in determining legal gender, despite the seeming simplicity of the underlying principles, highlight the dynamic interaction between biological characteristics, self-identified gender identity, societal norms, and changing legal standards. Because of this, the study of legal gender is a complex field that is influenced by cultural, historical, and legal factors. As such, a thorough investigation is necessary to fully understand the subject's implications and breadth within a range of legal systems and societies. History In European societies, Roman law, post-classical canon law, and later common law referred to a person's sex as male, female or hermaphrodite, with legal rights as male or female depending on the characteristics that appeared most dominant. Under Roman law, a hermaphrodite had to be classed as either male or female. The 12th-century Decretum Gratiani states that "Whether an hermaphrodite may witness a testament, depends on which sex prevails". The foundation of common law, the 17th-century Institutes of the Lawes of England, described how a hermaphrodite could inherit "either as male or female, according to that kind of sexe which doth prevaile." Legal cases where legal sex was placed in doubt have been described over the centuries. In 1930, Lili Elbe received sexual reassignment surgery and an ovary transplant and changed her legal gender to female. In 1931, Dora Richter underwent removal of the penis and vaginoplasty. A few weeks later, Lili Elbe had her final surgery, including a uterus transplant and vaginoplasty; immune rejection of the transplanted uterus caused her death. In May 1933, the Institute for Sexual Research was attacked by Nazis, losing any surviving records about Richter. Toni Ebel and her partner, both also German recipients of sexual reassignment surgery, were forced to separate in 1942 after harassment from their neighbors. After World War II, transgender issues received public attention again. Legislation in the 1950s and 60s primarily focused on criminalizing homosexuality and enforcing heteronormative gender roles, leading to disproportionate police harassment and arrests of gender non-conforming individuals. Christine Jorgensen was unable to marry a man because her birth certificate listed her as male. Some transgender people changed their birth certificates, but the validity of these documents was challenged. In the United Kingdom, Sir Ewan Forbes' case recognized the process of legal gender change; however, legal gender change was not recognized in Corbett v Corbett. The 1969 Stonewall Uprising marked a pivotal moment in the gay rights movement, sparking protests and marches globally and underscoring ongoing discrimination and violence against LGBT individuals. Today, many jurisdictions allow transgender individuals to change their legal gender, but some jurisdictions require sterilization, childlessness or an unmarried status for legal gender change. In some cases, gender-affirming surgery is a requirement for legal recognition. See also References 
Gender Transgender law
Legal gender
Biology
1,524
64,332,082
https://en.wikipedia.org/wiki/2-Methoxyestriol
2-Methoxyestriol (2-MeO-E3) is an endogenous estrogen metabolite. It is specifically a metabolite of estriol and 2-hydroxyestriol. It has negligible affinity for the estrogen receptors and no estrogenic activity. However, 2-methoxyestriol does have some non-estrogen receptor-mediated cholesterol-lowering effects. See also 2-Methoxyestradiol 2-Methoxyestrone 4-Methoxyestradiol 4-Methoxyestrone References Estranes Ethers Hypolipidemic agents Human metabolites Hydroxyarenes Sterols
2-Methoxyestriol
Chemistry
151
1,256,948
https://en.wikipedia.org/wiki/Schilling%20test
The Schilling test was a medical investigation used for patients with vitamin B12 (cobalamin) deficiency. The purpose of the test was to determine how well a patient could absorb B12 from the intestinal tract. The test is now considered obsolete, is rarely performed, and is no longer available at many medical centers. It is named for Robert F. Schilling. Process The Schilling test has multiple stages. As noted below, it can be done at any time after vitamin B12 supplementation and body-store replacement, and some clinicians recommend that in severe deficiency cases at least several weeks of vitamin repletion be done before the test (more than one B12 shot, plus oral folic acid), to ensure that impaired absorption of B12 (with or without intrinsic factor) is not occurring because of damage to the intestinal mucosa from the B12 and folate deficiencies themselves. Stage 1: oral radiolabeled vitamin B12 plus intramuscular unlabeled vitamin B12 (without intrinsic factor) In the first part of the test, the patient is given radiolabeled vitamin B12 to drink or eat; the most commonly used radiolabels are 57Co and 58Co. An intramuscular injection of unlabeled vitamin B12 is given an hour later. This is not enough to replete or saturate body stores of B12; the purpose of the single injection is to temporarily saturate B12 receptors in the liver with enough normal vitamin B12 to prevent radioactive vitamin B12 from binding in body tissues (especially the liver), so that any absorbed from the G.I. tract passes into the urine. The patient's urine is then collected over the next 24 hours to assess absorption. Normally, the ingested radiolabeled vitamin B12 is absorbed into the body, and since the liver receptors for transcobalamin/vitamin B12 are already saturated by the injection, much of the ingested vitamin B12 is excreted in the urine. A normal result shows at least 10% of the radiolabeled vitamin B12 in the urine over the first 24 hours. In patients with pernicious anemia or with deficiency due to impaired absorption, less than 10% of the radiolabeled vitamin B12 is detected; the label instead remains in the intestine and is passed in the feces. Stage 2: vitamin B12 and intrinsic factor If an abnormality is found, i.e. the B12 in the urine is present only at low levels, the test is repeated, this time with additional oral intrinsic factor. If this second urine collection is normal, the patient lacks intrinsic factor production; this is by definition pernicious anemia. A low result on the second test implies abnormal intestinal absorption (malabsorption), which could be caused by coeliac disease, biliary disease, Whipple's disease, small bowel bacterial overgrowth syndrome, fish tapeworm infestation (Diphyllobothrium latum), or liver disease. Malabsorption of B12 can also be caused by intestinal dysfunction arising from the low vitamin level itself (see below), confusing the result if repletion has not been carried out for some days beforehand. The decision logic of these first two stages is sketched below. 
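A hedged sketch of the decision logic of stages 1 and 2 (the 10% cutoff follows the text above; the function name and the returned labels are illustrative, not clinical terminology):

def interpret_schilling(stage1_pct_excreted, stage2_pct_excreted=None):
    # Stage 1: at least 10% urinary excretion of labeled B12 is normal.
    if stage1_pct_excreted >= 10:
        return "normal absorption"
    # Stage 1 abnormal: repeat with oral intrinsic factor (stage 2).
    if stage2_pct_excreted is None:
        return "abnormal; repeat with intrinsic factor"
    if stage2_pct_excreted >= 10:
        return "corrected by intrinsic factor: pernicious anemia"
    return "not corrected: intestinal malabsorption (consider stages 3 and 4)"

print(interpret_schilling(4, 12))  # -> corrected by intrinsic factor: pernicious anemia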
Stage 3: vitamin B12 and antibiotics This stage is useful for identifying patients with bacterial overgrowth syndrome. The physician prescribes a two-week course of antibiotics to eliminate any possible bacterial overgrowth, then repeats the test to check whether radiolabeled vitamin B12 now appears in the urine. Stage 4: vitamin B12 and pancreatic enzymes This stage, in which pancreatic enzymes are administered, can be useful in identifying patients with pancreatic insufficiency. The physician gives three days of pancreatic enzymes, then repeats the test to check whether radiolabeled vitamin B12 is detected in the urine. Combined stage 1 and stage 2 In some versions of the Schilling test, B12 can be given both with and without intrinsic factor at the same time, using the different cobalt radioisotopes 57Co and 58Co, which have different radiation signatures, in order to differentiate the two forms of B12. This is performed with the 'Dicopac' kitset and allows a single radioactive urine collection. Complications Note that the B12 shot which begins the Schilling test goes a considerable way toward treating B12 deficiency, so the test is also a partial treatment. The classic Schilling test can be performed at any time, even after full B12 repletion and correction of the anemia, and it will still show whether the cause of the deficiency was intrinsic-factor related. In fact, some clinicians have suggested that folate and B12 replacement for several weeks be performed routinely before a Schilling test, since folate and B12 deficiencies are both known to interfere with intestinal cell function and can thus cause malabsorption of B12 on their own, even when intrinsic factor is being made. This state would tend to produce a false-positive test for both simple B12 malabsorption and intrinsic factor-related B12 malabsorption. Several weeks of vitamin replacement are necessary before epithelial damage to the G.I. tract from B12 deficiency is corrected. Many labs have stopped performing the Schilling test because the cobalt radioisotopes and labeled-B12 test substances are no longer produced. Also, injection replacement of B12 has become relatively inexpensive and can be self-administered by patients, as can megadose oral B12. Since these are the same treatments that would be administered for most causes of B12 malabsorption even if the exact cause were identified, the diagnostic test may be omitted without harm to the patient (so long as follow-up treatment and occasional serum B12 testing are not allowed to lapse). Use of other radiopharmaceuticals may interfere with interpretation of the test. Diagnoses References External links Chemical pathology Vitamin B12 Diagnostic gastroenterology Obsolete medical procedures Nuclear medicine procedures
Schilling test
Chemistry,Biology
1,314
48,030,331
https://en.wikipedia.org/wiki/NGC%206221
NGC 6221 (also known as PGC 59175) is a barred spiral galaxy located in the constellation Ara. In de Vaucouleurs' galaxy morphological classification scheme, it is classified as SB(s)bc; it was discovered by British astronomer John Herschel on 3 May 1835. NGC 6221 is located about 69 million light-years from Earth. Galaxy group information NGC 6221 is part of the galaxy group NGC 6221/15, which includes the spiral galaxy NGC 6215 and three dwarf galaxies. Interactions between NGC 6221 and NGC 6215 form a bridge of neutral hydrogen gas spanning a projected distance of 100 kpc; Dwarf 3 of the three dwarf galaxies may have formed from the bridging gas. Supernovae Two supernovae have been observed in NGC 6221: SN 1990W (type Ib/Ic, mag. 15) was discovered by Robert Evans on 16 August 1990. SN 2024pxg (type II, mag. 15.1) was discovered by the Distance Less Than 40 Mpc Survey (DLT40) on 23 July 2024. See also List of NGC objects (6001–7000) New General Catalogue References External links Barred spiral galaxies Ara (constellation) 6221 59175
NGC 6221
Astronomy
255
16,502,684
https://en.wikipedia.org/wiki/George%20M.%20Low%20Award
The George M. Low Award is an annual award given by NASA to its subcontractors in recognition of quality and performance. NASA characterizes it as a "premier award". NASA's chief of safety and mission assurance, Terrence Wilcutt, called it "our recognition for their management's leadership and employee commitment to the highest standards in performance." The award was named after George M. Low, a NASA leader and former administrator who spearheaded efforts to improve quality and mitigate risk after the disastrous Apollo 1 fire. He provided management and direction for the Mercury, Gemini, Apollo, and advanced crewed missions programs. Recipients 2012 - URS Federal Services; ATA Engineering, Inc. 2011 - Sierra Lobo, Inc.; Teledyne Brown Engineering 2010 - Analytical Mechanics Associates, Inc.; Neptec Design Group; Jacobs Technology, Inc.; ATK Aerospace Systems 2009 - United Space Alliance; Applied Geo Technologies 2008 - ARES Corporation; Oceaneering International 2007 - ASRC Aerospace Corporation; Pratt & Whitney Rocketdyne; Sierra Lobo; Lockheed Martin 2006 - Teledyne Brown Engineering; Barrios Technologies 2005 - SGT Inc; ATK Thiokol; QSS Group, Inc.; BTAS, Inc 2004 - BTAS, Inc. (Business Technologies and Solutions), ERC, Inc; Northrop Grumman; SGS; Alliance Spacesystems, Inc.; Titan Corporation 2003 - Marotta Controls; Lockheed Martin; Boeing 2002 - Analytical Services & Materials; Jacobs Sverdrup; ManTech; RS Information Systems; Williams International 2001 - Native American Services; Raytheon; Swales Aerospace 2000 - Advanced Technology; Boeing; Computer Sciences Corporation; Jackson and Tull 1999 - Barrios Technology; Kay and Associates; Raytheon; Thiokol Corporation 1997-1998 - Advanced Technology, AlliedSignal, BST Systems, DynCorp, ILC Dover 1996-1997 - Dynamic Engineering, Inc.; Hummer Associates; Boeing North American; Scientific and Commercial Systems Corporation; Hamilton Standard Space Systems International; Unisys Corporation 1995-1996 - Hamilton Standard Space Systems International 1994-1995 - Unisys Corporation 1992 - IBM; Honeywell 1991 - Thiokol Corporation; Grumman Corporation 1990 - Rockwell International Corporation; Marotta Scientific Controls 1989 - Lockheed Corporation 1988 - Rockwell International Corporation 1987 - IBM; Martin Marietta Corporation See also List of engineering awards List of awards named after people References External links George M. Low Award Aerospace engineering awards American science and technology awards Awards and decorations of NASA
George M. Low Award
Engineering
525
14,564,960
https://en.wikipedia.org/wiki/Active%20metabolite
An active metabolite, or pharmacologically active metabolite is a biologically active metabolite of a xenobiotic substance, such as a drug or environmental chemical. Active metabolites may produce therapeutic effects, as well as harmful effects. Metabolites of drugs An active metabolite results when a drug is metabolized by the body into a modified form which produces effects in the body. Usually these effects are similar to those of the parent drug but weaker, although they can still be significant (see e.g. 11-hydroxy-THC, morphine-6-glucuronide). Certain drugs such as codeine and tramadol have metabolites (morphine and O-desmethyltramadol respectively) that are stronger than the parent drug and in these cases the metabolite may be responsible for much of the therapeutic action of the parent drug. Sometimes, however, metabolites may produce toxic effects and patients must be monitored carefully to ensure they do not build up in the body. This is an issue with some well-known drugs, such as pethidine (meperidine) and dextropropoxyphene. Prodrugs Sometimes drugs are formulated in an inactive form that is designed to break down inside the body to form the active drug. These are called prodrugs. The reasons for this type of formulation may be because the drug is more stable during manufacture and storage as the prodrug form, or because the prodrug is better absorbed by the body or has superior pharmacokinetics (e.g., lisdexamphetamine). References Further reading Pharmacokinetics Metabolism
Active metabolite
Chemistry,Biology
356
63,449,981
https://en.wikipedia.org/wiki/Davenport%E2%80%93Schinzel%20Sequences%20and%20Their%20Geometric%20Applications
Davenport–Schinzel Sequences and Their Geometric Applications is a book in discrete geometry. It was written by Micha Sharir and Pankaj K. Agarwal, and published by Cambridge University Press in 1995, with a paperback reprint in 2010. Topics Davenport–Schinzel sequences are named after Harold Davenport and Andrzej Schinzel, who applied them to certain problems in the theory of differential equations. They are finite sequences of symbols from a given alphabet, constrained by forbidding pairs of symbols from appearing in alternation more than a given number of times (regardless of what other symbols might separate them). In a Davenport–Schinzel sequence of order s, the longest allowed alternations have length s + 1. For instance, a Davenport–Schinzel sequence of order three could have two symbols a and b that appear either in the order a...b...a...b or b...a...b...a, but longer alternations like a...b...a...b...a would be forbidden (a small checker for this condition is sketched below). The length of such a sequence, for a given choice of s, can be only slightly longer than its number of distinct symbols. This phenomenon has been used to prove corresponding near-linear bounds on various problems in discrete geometry, for instance showing that the unbounded face of an arrangement of line segments can have complexity that is only slightly superlinear. The book is about this family of results, both on bounding the lengths of Davenport–Schinzel sequences and on their applications to discrete geometry. The first three chapters of the book provide bounds on the lengths of Davenport–Schinzel sequences whose superlinearity is described in terms of the inverse Ackermann function α(n). For instance, the length of a Davenport–Schinzel sequence of order three, with n symbols, is at most O(nα(n)), as the second chapter shows; the third concerns higher orders. The fourth chapter applies this theory to line segments, and includes a proof that the bounds proven using these tools are tight: there exist systems of line segments whose arrangement complexity matches the bounds on Davenport–Schinzel sequence length. The remaining chapters concern more advanced applications of these methods. Three chapters concern arrangements of curves in the plane, algorithms for arrangements, and higher-dimensional arrangements, following which the final chapter (comprising a large fraction of the book) concerns applications of these combinatorial bounds to problems including Voronoi diagrams and nearest neighbor search, the construction of transversal lines through systems of objects, visibility problems, and robot motion planning. The topic remains an active area of research and the book poses many open questions. Audience and reception Although primarily aimed at researchers, this book (and especially its earlier chapters) could also be used as the textbook for a graduate course in its material. Reviewer Peter Hajnal calls it "very important to any specialist in computational geometry" and "highly recommended to anybody who is interested in this new topic at the border of combinatorics, geometry, and algorithm theory". References Combinatorics on words Discrete geometry Mathematics books 1995 non-fiction books
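A hedged sketch of the defining condition (the function name is illustrative; it follows the usual convention that adjacent symbols must be distinct): a sequence is Davenport–Schinzel of order s exactly when no two symbols alternate more than s + 1 times.

from itertools import combinations, groupby

def is_davenport_schinzel(seq, s):
    # Adjacent symbols must be distinct (the usual convention).
    if any(x == y for x, y in zip(seq, seq[1:])):
        return False
    for a, b in combinations(set(seq), 2):
        # Restricting seq to {a, b} and collapsing runs leaves the longest
        # alternation between a and b; order s allows length at most s + 1.
        runs = [key for key, _ in groupby(x for x in seq if x in (a, b))]
        if len(runs) > s + 1:
            return False
    return True

print(is_davenport_schinzel("abab", 3))   # True: an alternation of length 4 is allowed
print(is_davenport_schinzel("ababa", 3))  # False: length 5 is forbidden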
Davenport–Schinzel Sequences and Their Geometric Applications
Mathematics
596
14,947,222
https://en.wikipedia.org/wiki/International%20Conference%20on%20Language%20Resources%20and%20Evaluation
The International Conference on Language Resources and Evaluation is an international conference organised by the ELRA Language Resources Association every other year (in even years) with the support of institutions and organisations involved in natural language processing. The series of LREC conferences was launched in Granada in 1998. History of conferences A survey of the LREC conferences over the period 1998-2013 was presented as a closing session of the 2014 conference in Reykjavik. It shows that the number of papers and author signatures has increased over time, as has the average number of authors per paper. The percentage of new authors is between 68% and 78%. The distribution between male (65%) and female (35%) authors is stable over time. The most frequent technical term is "annotation", followed by "part-of-speech". The LRE Map The LRE Map was introduced at LREC 2010 and is now a regular feature of the LREC submission process for both the conference papers and the workshop papers. At the submission stage, the authors are asked to provide some basic information about all the resources (in a broad sense, i.e. including tools, standards and evaluation packages), either used or created, that are described in their papers. All these descriptors are then gathered in a global matrix called the LRE Map. This feature has been extended to several other conferences. References External links Conference website European Language Resources Association web site Natural language processing Computer science conferences
International Conference on Language Resources and Evaluation
Technology
298
71,325,287
https://en.wikipedia.org/wiki/4-Isopropenylphenol
4-Isopropenylphenol is an organic compound with the formula C9H10O. The molecule consists of an isopropenyl group (CH2=C(CH3)-) affixed to the 4 position of phenol. The compound is an intermediate in the production of bisphenol A (BPA), 2.7 Mkg/y of which were produced as of 2007. It is also generated by the recycling of o,p-BPA, a byproduct of the production of the p,p-isomer of BPA. Synthesis and reactions The high-temperature hydrolysis of BPA gives the title compound together with phenol: (CH3)2C(C6H4OH)2 → CH2=C(CH3)C6H4OH + C6H5OH. The compound can also be produced by catalytic dehydrogenation of 4-isopropylphenol. 4-Isopropenylphenol undergoes O-protonation by sulfuric acid, giving a carbocation that undergoes a variety of dimerization reactions. References 4-Hydroxyphenyl compounds Commodity chemicals Isopropenyl compounds
4-Isopropenylphenol
Chemistry
215
57,721,868
https://en.wikipedia.org/wiki/Mass-flux%20fraction
The mass-flux fraction (or Hirschfelder-Curtiss variable or Kármán-Penner variable) is the ratio of the mass flux of a particular chemical species to the total mass flux of a gaseous mixture. It includes both the convective mass flux and the diffusional mass flux. It was introduced by Joseph O. Hirschfelder and Charles F. Curtiss in 1948 and later by Theodore von Kármán and Sol Penner in 1954. The mass-flux fraction of a species i is defined as εi = ρi(v + Vi)/(ρv) = Yi(1 + Vi/v), where Yi = ρi/ρ is the mass fraction, v is the mass-average velocity of the gaseous mixture, Vi is the average velocity with which species i diffuses relative to v, ρi is the density of species i, and ρ is the gas density. It satisfies the identity Σi εi = 1, similar to the mass fraction, but the mass-flux fraction can take both positive and negative values. This variable is used in steady, one-dimensional combustion problems in place of the mass fraction. For one-dimensional (say, the x direction) steady flows, the conservation equation for the mass-flux fraction reduces to ρv dεi/dx = wi, where wi is the mass production rate of species i. A short numerical sketch of the definition is given below. References Chemical properties Dimensionless numbers of chemistry Combustion
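A minimal numerical sketch of the definition above, using made-up values for a two-species mixture (every number here is an assumption for illustration):

v = 2.0            # mass-average velocity, m/s (assumed)
Y = [0.3, 0.7]     # mass fractions of a two-species mixture (assumed)
V = [0.5, None]    # diffusion velocity of species 0, m/s (assumed)

# The diffusion velocities satisfy sum(Y_i * V_i) = 0, which fixes V[1]:
V[1] = -Y[0] * V[0] / Y[1]

eps = [Y[i] * (1.0 + V[i] / v) for i in range(2)]  # epsilon_i = Y_i (1 + V_i / v)
print(eps, sum(eps))  # [0.375, 0.625] 1.0 -> the identity sum = 1 holds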
Mass-flux fraction
Chemistry
229
8,625,600
https://en.wikipedia.org/wiki/Phytane
Phytane is the isoprenoid alkane formed when phytol, a chemical substituent of chlorophyll, loses its hydroxyl group. When phytol loses one carbon atom, it yields pristane. Sources of phytane and pristane other than phytol have also been proposed. Pristane and phytane are common constituents in petroleum and have been used as proxies for depositional redox conditions, as well as for correlating oil and its source rock (i.e. elucidating where oil formed). In environmental studies, pristane and phytane are target compounds for investigating oil spills. Chemistry Phytane is a non-polar organic compound that is a clear and colorless liquid at room temperature. It is a head-to-tail linked regular isoprenoid with chemical formula C20H42. Phytane has many structural isomers. Among them, crocetane is a tail-to-tail linked isoprenoid and often co-elutes with phytane during gas chromatography (GC) due to its structural similarity. Phytane also has many stereoisomers because of its three stereocenters, C-6, C-10 and C-14, whereas pristane has two, C-6 and C-10. Direct measurement of these isomers has not been reported using gas chromatography. The substituent of phytane is phytanyl. Phytanyl groups are frequently found in archaeal membrane lipids of methanogenic and halophilic archaea (e.g., in archaeol). Phytene is the singly unsaturated version of phytane. Phytene is also found as the functional group phytyl in many organic molecules of biological importance such as chlorophyll, tocopherol (vitamin E), and phylloquinone (vitamin K1). Phytene's corresponding alcohol is phytol. Geranylgeranene is the fully unsaturated form of phytane, and its corresponding substituent is geranylgeranyl. Sources The major source of phytane and pristane is thought to be chlorophyll. Chlorophyll is one of the most important photosynthetic pigments in plants, algae, and cyanobacteria, and is the most abundant tetrapyrrole in the biosphere. Hydrolysis of chlorophyll a, b, d, and f during diagenesis in marine sediments, or during invertebrate feeding, releases phytol, which is then converted to phytane or pristane. Another possible source of phytane and pristane is archaeal ether lipids. Laboratory studies show that thermal maturation of methanogenic archaea generates pristane and phytane from diphytanyl glyceryl ethers (archaeols). In addition, pristane can be derived from tocopherols and methyltrimethyltridecylchromans (MTTCs). Preservation In suitable environments, biomolecules like chlorophyll can be transformed and preserved in recognizable forms as biomarkers. Conversion during diagenesis often causes the chemical loss of functional groups like double bonds and hydroxyl groups. Studies suggest that pristane and phytane are formed via diagenesis of phytol under different redox conditions. Pristane can be formed in oxic (oxidizing) conditions by phytol oxidation to phytenic acid, which may then undergo decarboxylation to pristene before finally being reduced to pristane. In contrast, phytane likely arises from reduction and dehydration of phytol (via dihydrophytol or phytene) under relatively anoxic conditions. However, various biotic and abiotic processes may control the diagenesis of chlorophyll and phytol, and the exact reactions are more complicated and not strictly correlated with redox conditions. In thermally immature sediments, pristane and phytane have a configuration dominated by 6R,10S stereochemistry (equivalent to 6S, 10R), which is inherited from C-7 and C-11 in phytol. 
During thermal maturation, isomerization at C-6 and C-10 leads to a mixture of 6R, 10S, 6S, 10S, and 6R, 10R. Geochemical parameters Pristane/Phytane ratio Pristane/phytane (Pr/Ph) is the ratio of abundances of pristane and phytane. It is a proxy for redox conditions in the depositional environment. The Pr/Ph index is based on the assumption that pristane is formed from phytol by an oxidative pathway, while phytane is generated through various reductive pathways. In non-biodegraded crude oil, Pr/Ph less than 0.8 indicates saline to hypersaline conditions associated with evaporite and carbonate deposition, whereas organic-lean terrigenous, fluvial, and deltaic sediments under oxic to suboxic conditions usually generate crude oil with Pr/Ph above 3. Pr/Ph is commonly applied because pristane and phytane are measured easily using gas chromatography. However, the index should be used with caution, as pristane and phytane may not result from degradation of the same precursor (see *Source*). Also, pristane, but not phytane, can be produced in reducing environments by clay-catalysed degradation of phytol and subsequent reduction. Additionally, during catagenesis, Pr/Ph tends to increase; this variation may be due to preferential release of sulfur-bound phytols from source rocks during early maturation. Pristane/nC17 and phytane/nC18 ratios Pristane/n-heptadecane (Pr/nC17) and phytane/n-octadecane (Ph/nC18) are sometimes used to correlate oil and its source rock (i.e. to elucidate where oil formed). Oils from rocks deposited under open-ocean conditions show Pr/nC17 < 0.5, while those from inland peat swamps have ratios greater than 1. The ratios should be used with caution for several reasons. Both Pr/nC17 and Ph/nC18 decrease with the thermal maturity of petroleum because isoprenoids are less thermally stable than linear alkanes. In contrast, biodegradation increases these ratios because aerobic bacteria generally attack linear alkanes before the isoprenoids. Therefore, biodegraded oil is similar to low-maturity non-degraded oil in the sense of exhibiting low abundance of n-alkanes relative to pristane and phytane. Biodegradation scale Pristane and phytane are more resistant to biodegradation than n-alkanes, but less so than steranes and hopanes. The substantial depletion and complete elimination of pristane and phytane correspond to a Biomarker Biodegradation Scale of 3 and 4, respectively. Compound specific isotope analyses Carbon isotopes The carbon isotopic composition of pristane and phytane generally reflects the kinetic isotope fractionation that occurs during photosynthesis. For example, δ13C(PDB) of phytane in marine sediments and oils has been used to reconstruct ancient atmospheric CO2 levels, which affect the carbon isotopic fractionation associated with photosynthesis, over the past 500 million years. In this study, the partial pressure of CO2 reached more than 1,000 ppm at its maxima, compared with about 410 ppm today. Carbon isotope compositions of pristane and phytane in crude oil can also help to constrain their source. Pristane and phytane from a common precursor should have δ13C values differing by no more than 0.3‰. Hydrogen isotopes The hydrogen isotope composition of phytol in marine phytoplankton and algae starts out highly depleted, with δD (VSMOW) ranging from -360 to -280‰. Thermal maturation preferentially releases light isotopes, causing pristane and phytane to become progressively heavier with maturation. 
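A hedged illustration of how the Pr/Ph thresholds quoted above are applied (the cutoffs follow the text; the function and the labels are illustrative, and real interpretation requires supporting data, as the case study below stresses):

def interpret_pr_ph(pristane, phytane):
    ratio = pristane / phytane
    # Threshold readings for non-biodegraded crude oil, per the text above.
    if ratio < 0.8:
        return ratio, "saline to hypersaline (evaporite/carbonate) deposition"
    if ratio > 3.0:
        return ratio, "oxic to suboxic terrigenous (fluvial/deltaic) organic input"
    return ratio, "intermediate: corroborate with other redox indicators"

print(interpret_pr_ph(0.9, 1.0))  # Pr/Ph = 0.9, as in the Baghewala-1 oil discussed below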
Case study: limitation of Pr/Ph as a redox indicator Inferences from Pr/Ph about the redox potential of source sediments should always be supported by other geochemical and geological data, such as sulfur content or the C35 homohopane index (i.e. the abundance of C35 homohopane relative to that of C31-C35 homohopanes). For example, the Baghewala-1 oil from India has low Pr/Ph (0.9), high sulfur (1.2 wt.%) and a high C35 homohopane index, which are consistent with anoxia during deposition of the source rock. However, drawing conclusions about the oxic state of depositional environments from the Pr/Ph ratio alone can be misleading, because salinity often controls Pr/Ph in hypersaline environments. In another example, the decrease in Pr/Ph during deposition of the Permian Kupferschiefer sequence in Germany coincides with an increase in trimethylated 2-methyl-2-(4,8,12-trimethyltridecyl)chromans, aromatic compounds believed to be markers of salinity. Therefore, this decrease in Pr/Ph should indicate an increase in salinity, rather than an increase in anoxia. See also Phytol Pristane Biomarker Crocetane Archaeol Tocopherols Sterane Hopane References Alkanes Diterpenes
Phytane
Chemistry
2,098
26,883,388
https://en.wikipedia.org/wiki/Antimicrobial%20copper-alloy%20touch%20surfaces
Antimicrobial copper-alloy touch surfaces can prevent frequently touched surfaces from serving as reservoirs for the spread of pathogenic microbes. This is especially true in healthcare facilities, where harmful viruses, bacteria, and fungi colonize and persist on doorknobs, push plates, handrails, tray tables, tap (faucet) handles, IV poles, HVAC systems, and other equipment. These microbes can sometimes survive on surfaces for more than 30 days. Coppertouch Australia commissioned the Doherty Institute at the University of Melbourne, Australia, to test its antimicrobial copper adhesive film; laboratory tests showed a 96% kill rate of influenza A virus on the film compared with untreated surfaces. The surfaces of copper and its alloys, such as brass and bronze, are antimicrobial. They have an inherent ability to kill a wide range of harmful microbes relatively rapidly, often within two hours or less, and with a high degree of efficiency. These antimicrobial properties have been demonstrated by an extensive body of research. The research also suggests that if touch surfaces are made with copper alloys, the reduced transmission of disease-causing organisms can reduce patient infections in hospital intensive care units (ICU) by as much as 58%. Several companies have developed methods for utilizing the antimicrobial functionality of copper on existing high-touch surfaces. LuminOre and Aereus Technologies both utilize cold-spray antimicrobial copper coating technology to apply antimicrobial coatings to surfaces. Evidence As of 2019 a number of studies have found that copper surfaces may help prevent infection in the healthcare environment. Microorganisms are known to survive on inanimate surfaces for extended periods of time. Hand and surface disinfection practices are a primary measure against the spread of infection. Since approximately 80% of infectious diseases are known to be transmitted by touch, and pathogens found in healthcare facilities can survive on inanimate surfaces for days or months, the microbial burden of frequently touched surfaces is believed to play a significant role in infection causality. EPA registrations On February 29, 2008, the United States Environmental Protection Agency (EPA) approved the registrations of five different groups of copper alloys as "antimicrobial materials" with public health benefits. The EPA registrations now cover 479 different compositions of copper alloys within six groups (an up-to-date list of all approved alloys is available). All of the alloys have minimum nominal copper concentrations of 60%. The results of the EPA-supervised antimicrobial studies demonstrating copper's strong antimicrobial efficacies across a wide range of alloys have been published. Microbes tested and killed in EPA laboratory tests The bacteria destroyed by copper alloys in the EPA-supervised antimicrobial efficiency tests include: Escherichia coli O157:H7, a foodborne pathogen associated with large-scale food recalls. Methicillin-resistant Staphylococcus aureus (MRSA), one of the most virulent strains of antibiotic-resistant bacteria and a common culprit of hospital- and community-acquired infections. Staphylococcus aureus, the most common of all bacterial staphylococcus (i.e., Staph) infections that cause life-threatening disease, including pneumonia and meningitis. Enterobacter aerogenes, a pathogenic bacterium commonly found in hospitals that causes opportunistic skin infections and impacts other body tissues. 
Pseudomonas aeruginosa, a bacterium in immunocompromised individuals that infects the pulmonary and urinary tracts, blood and skin. Vancomycin-resistant Enterococcus (VRE), a pathogenic bacterium that is the second leading cause of hospital-acquired infections. EPA test protocols for copper alloy surfaces The registrations are based on studies supervised by EPA which found that copper alloys kill more than 99.9% of disease-causing bacteria within just two hours when cleaned regularly (i.e., the metals are free of dirt or grime that may impede the bacteria's contact with the copper surface). To attain the EPA registrations, the copper alloy groups had to demonstrate strong antimicrobial efficacy according to all of the following rigorous tests: Efficacy as a sanitizer: this test protocol measures surviving bacteria on alloy surfaces after two hours. Residual self-sanitizing activity: this test protocol measures surviving bacteria on alloy surfaces before and after six wet and dry wear cycles over 24 hours in a standard wear apparatus. Continuous reduction of bacterial contamination: this test protocol measures the number of bacteria that survive on a surface after it has been re-inoculated eight times over a 24-hour period without intermediate cleaning or wiping. EPA registered antimicrobial copper alloys The alloy groups tested and approved were C11000, C51000, C70600, C26000, C75200, and C28000; each of the six groups of alloys received its own EPA registration number. Claims granted by EPA in antimicrobial copper alloy registrations Among the claims now legally permitted when marketing EPA-registered antimicrobial copper alloys in the U.S., the registrations state that "antimicrobial copper alloys may be used in hospitals, other healthcare facilities, and various public, commercial and residential buildings." Product stewardship requirements of EPA As a condition of registration established by EPA, the Copper Development Association (CDA) in the U.S. is responsible for the product stewardship of antimicrobial copper alloy products. CDA must ensure that manufacturers promote these products in an appropriate manner. Manufacturers must only promote the proper use and care of these products and must specifically emphasize that the use of these products is a supplement and not a substitute to routine hygienic practices. EPA mandated that all advertising and marketing materials for antimicrobial copper products contain the following statement: Antimicrobial copper alloys are intended to provide supplemental antimicrobial action in between routine cleaning of environmental or touch surfaces in healthcare settings, as well as in public buildings and the home. Users must also understand that in order for antimicrobial copper alloys to remain effective, they cannot be coated in any way. CDA is currently implementing an outreach program through written communications, a product stewardship website, and through a Working Group which meets periodically to expand educational efforts. More than 100 different potential product applications were cited in the registrations for their potential public health benefits. EPA warranty statement The registrations also prescribe the exact wording of an EPA warranty statement for these products. Note: with the exception of the product name and the percentage of active ingredient, the EPA-approved Master Labels for the six groups of registered alloys are identical. 
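As a small arithmetic aside (a hedged sketch: the percentages follow the figures quoted above, and the function name is illustrative), disinfection efficacy is often restated as a log10 reduction:

import math

def log_reduction(percent_killed):
    # 99.9% killed leaves 0.1% survivors: a 3-log10 reduction.
    surviving_fraction = 1 - percent_killed / 100.0
    return -math.log10(surviving_fraction)

print(log_reduction(99.9))  # 3.0 -> the "more than 99.9%" EPA criterion
print(log_reduction(96.0))  # about 1.4, for the influenza film test mentioned earlier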
Antimicrobial copper products Many antimicrobial copper alloy products have been approved for registration in healthcare facilities, public and commercial buildings, residences, mass transit facilities, laboratories, and play area equipment in the US. A complete list of registered products is available from EPA. See also Antimicrobial properties of copper Copper alloys in aquaculture References Copper in health Antimicrobials Medical hygiene Disinfectants
Antimicrobial copper-alloy touch surfaces
Chemistry,Biology
1,468
2,498,855
https://en.wikipedia.org/wiki/Supermatrix
In mathematics and theoretical physics, a supermatrix is a Z2-graded analog of an ordinary matrix. Specifically, a supermatrix is a 2×2 block matrix with entries in a superalgebra (or superring). The most important examples are those with entries in a commutative superalgebra (such as a Grassmann algebra) or an ordinary field (thought of as a purely even commutative superalgebra). Supermatrices arise in the study of super linear algebra, where they appear as the coordinate representations of linear transformations between finite-dimensional super vector spaces or free supermodules. They have important applications in the field of supersymmetry. Definitions and notation Let R be a fixed superalgebra (assumed to be unital and associative). Often one requires R be supercommutative as well (for essentially the same reasons as in the ungraded case). Let p, q, r, and s be nonnegative integers. A supermatrix of dimension (r|s)×(p|q) is a matrix X = (X00 X01; X10 X11) with entries in R, partitioned into a 2×2 block structure with r+s total rows and p+q total columns, so that the submatrix X00 has dimensions r×p and X11 has dimensions s×q. An ordinary (ungraded) matrix can be thought of as a supermatrix for which q and s are both zero. A square supermatrix is one for which (r|s) = (p|q). This means that not only is the unpartitioned matrix X square, but the diagonal blocks X00 and X11 are as well. An even supermatrix is one for which the diagonal blocks (X00 and X11) consist solely of even elements of R (i.e. homogeneous elements of parity 0) and the off-diagonal blocks (X01 and X10) consist solely of odd elements of R. An odd supermatrix is one for which the reverse holds: the diagonal blocks are odd and the off-diagonal blocks are even. If the scalars R are purely even there are no nonzero odd elements, so the even supermatrices are the block diagonal ones and the odd supermatrices are the off-diagonal ones. A supermatrix is homogeneous if it is either even or odd. The parity, |X|, of a nonzero homogeneous supermatrix X is 0 or 1 according to whether it is even or odd. Every supermatrix can be written uniquely as the sum of an even supermatrix and an odd one. Algebraic structure Supermatrices of compatible dimensions can be added or multiplied just as for ordinary matrices. These operations are exactly the same as the ordinary ones with the restriction that they are defined only when the blocks have compatible dimensions. One can also multiply supermatrices by elements of R (on the left or right); however, this operation differs from the ungraded case due to the presence of odd elements in R. Let Mr|s×p|q(R) denote the set of all supermatrices over R with dimension (r|s)×(p|q). This set forms a supermodule over R under supermatrix addition and scalar multiplication. In particular, if R is a superalgebra over a field K then Mr|s×p|q(R) forms a super vector space over K. Let Mp|q(R) denote the set of all square supermatrices over R with dimension (p|q)×(p|q). This set forms a superring under supermatrix addition and multiplication. Furthermore, if R is a commutative superalgebra, then supermatrix multiplication is a bilinear operation, so that Mp|q(R) forms a superalgebra over R. Addition Two supermatrices of dimension (r|s)×(p|q) can be added just as in the ungraded case to obtain a supermatrix of the same dimension. The addition can be performed blockwise since the blocks have compatible sizes. It is easy to see that the sum of two even supermatrices is even and the sum of two odd supermatrices is odd. 
Multiplication One can multiply a supermatrix with dimensions (r|s)×(p|q) by a supermatrix with dimensions (p|q)×(k|l) as in the ungraded case to obtain a matrix of dimension (r|s)×(k|l). The multiplication can be performed at the block level in the obvious manner: the blocks of the product supermatrix Z = XY are given by Z00 = X00Y00 + X01Y10, Z01 = X00Y01 + X01Y11, Z10 = X10Y00 + X11Y10, and Z11 = X10Y01 + X11Y11. If X and Y are homogeneous with parities |X| and |Y| then XY is homogeneous with parity |X| + |Y|. That is, the product of two even or two odd supermatrices is even, while the product of an even and an odd supermatrix is odd. Scalar multiplication Scalar multiplication for supermatrices is different from the ungraded case due to the presence of odd elements in R. Let X be a supermatrix. Left scalar multiplication by α ∈ R is defined blockwise by (α·X)00 = αX00, (α·X)01 = αX01, (α·X)10 = ᾱX10, and (α·X)11 = ᾱX11, where the internal scalar multiplications are the ordinary ungraded ones and ᾱ denotes the grade involution in R, given on homogeneous elements by ᾱ = (−1)^|α| α. Right scalar multiplication by α is defined analogously: (X·α)00 = X00α, (X·α)01 = X01ᾱ, (X·α)10 = X10α, and (X·α)11 = X11ᾱ. If α is even then ᾱ = α and both of these operations are the same as the ungraded versions. If α and X are homogeneous then α⋅X and X⋅α are both homogeneous with parity |α| + |X|. Furthermore, if R is supercommutative then one has α·X = (−1)^(|α||X|) X·α. As linear transformations Ordinary matrices can be thought of as the coordinate representations of linear maps between vector spaces (or free modules). Likewise, supermatrices can be thought of as the coordinate representations of linear maps between super vector spaces (or free supermodules). There is an important difference in the graded case, however. A homomorphism from one super vector space to another is, by definition, one that preserves the grading (i.e. maps even elements to even elements and odd elements to odd elements). The coordinate representation of such a transformation is always an even supermatrix. Odd supermatrices correspond to linear transformations that reverse the grading. General supermatrices represent an arbitrary ungraded linear transformation. Such transformations are still important in the graded case, although less so than the graded (even) transformations. A supermodule M over a superalgebra R is free if it has a free homogeneous basis. If such a basis consists of p even elements and q odd elements, then M is said to have rank p|q. If R is supercommutative, the rank is independent of the choice of basis, just as in the ungraded case. Let Rp|q be the space of column supervectors, that is, supermatrices of dimension (p|q)×(1|0). This is naturally a right R-supermodule, called the right coordinate space. A supermatrix T of dimension (r|s)×(p|q) can then be thought of as a right R-linear map from Rp|q to Rr|s, where the action of T on Rp|q is just supermatrix multiplication (this action is not generally left R-linear, which is why we think of Rp|q as a right supermodule). Let M be a free right R-supermodule of rank p|q and let N be a free right R-supermodule of rank r|s. Let (ei) be a free basis for M and let (fk) be a free basis for N. Such a choice of bases is equivalent to a choice of isomorphisms from M to Rp|q and from N to Rr|s. Any (ungraded) linear map T from M to N can be written as a (r|s)×(p|q) supermatrix relative to the chosen bases. The components of the associated supermatrix are determined by the formula T(ei) = Σk fk Tki. The block decomposition of a supermatrix T corresponds to the decomposition of M and N into even and odd submodules, M = M0 ⊕ M1 and N = N0 ⊕ N1. Operations Many operations on ordinary matrices can be generalized to supermatrices, although the generalizations are not always obvious or straightforward. 
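Over an ordinary field, where all entries are even and the sign conventions are invisible, the block multiplication formula above is just ordinary block matrix algebra; a minimal NumPy check (the partition sizes are arbitrary assumptions):

import numpy as np

r, s, p, q, k, l = 1, 2, 2, 1, 2, 2   # assumed dimensions: (r|s)x(p|q) times (p|q)x(k|l)
X = np.random.rand(r + s, p + q)
Y = np.random.rand(p + q, k + l)
Z = X @ Y

# Slice out the blocks and verify Z00 = X00 Y00 + X01 Y10 from the text:
X00, X01 = X[:r, :p], X[:r, p:]
Y00, Y10 = Y[:p, :k], Y[p:, :k]
assert np.allclose(Z[:r, :k], X00 @ Y00 + X01 @ Y10)

With genuinely odd entries the same block formulas hold, but the scalars anticommute, so the order of the factors matters.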
Supertranspose The supertranspose of a supermatrix is the Z2-graded analog of the transpose. Let X = (X00 X01; X10 X11) be a homogeneous (r|s)×(p|q) supermatrix of parity |X|. The supertranspose of X is the (p|q)×(r|s) supermatrix X^st = (X00^t (−1)^|X| X10^t; −(−1)^|X| X01^t X11^t), where A^t denotes the ordinary transpose of a block A. This can be extended to arbitrary supermatrices by linearity. Unlike the ordinary transpose, the supertranspose is not generally an involution, but rather has order 4: applying the supertranspose twice to a supermatrix X gives (X^st)^st = (X00 −X01; −X10 X11). If R is supercommutative, the supertranspose satisfies the identity (XY)^st = (−1)^(|X||Y|) Y^st X^st. Parity transpose The parity transpose of a supermatrix is a new operation without an ungraded analog. Let X = (X00 X01; X10 X11) be a (r|s)×(p|q) supermatrix. The parity transpose of X is the (s|r)×(q|p) supermatrix X^π = (X11 X10; X01 X00); that is, the (i,j) block of the transposed matrix is the (1−i,1−j) block of the original matrix. The parity transpose operation obeys identities relating it to supermatrix addition and multiplication, as well as to the supertranspose operation (denoted st). Supertrace The supertrace of a square supermatrix is the Z2-graded analog of the trace. It is defined on homogeneous supermatrices by the formula str(X) = tr(X00) − (−1)^|X| tr(X11), where tr denotes the ordinary trace. If R is supercommutative, the supertrace satisfies the identity str(XY) = (−1)^(|X||Y|) str(YX) for homogeneous supermatrices X and Y. Berezinian The Berezinian (or superdeterminant) of a square supermatrix is the Z2-graded analog of the determinant. The Berezinian is only well-defined on even, invertible supermatrices over a commutative superalgebra R. In this case it is given by the formula Ber(X) = det(X00 − X01 X11^(−1) X10) det(X11)^(−1), where det denotes the ordinary determinant (of square matrices with entries in the commutative algebra R0). The Berezinian satisfies properties similar to those of the ordinary determinant. In particular, it is multiplicative and invariant under the supertranspose. It is related to the supertrace by the formula Ber(e^X) = e^(str X). References Matrices Super linear algebra
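Restricting again to numeric (purely even) entries, the supertrace and Berezinian formulas above reduce to plain matrix algebra. The sketch below checks the identity Ber(e^X) = e^(str X) on a block upper-triangular example, where it reduces to det(e^A)/det(e^D) = e^(tr A − tr D); the sizes and the triangular restriction are assumptions chosen so that a purely even numerical check is valid.

import numpy as np
from scipy.linalg import expm

def supertrace(X, p):
    # str(X) = tr(X00) - tr(X11) for an even supermatrix.
    return np.trace(X[:p, :p]) - np.trace(X[p:, p:])

def berezinian(X, p):
    A, B, C, D = X[:p, :p], X[:p, p:], X[p:, :p], X[p:, p:]
    # Ber(X) = det(A - B D^-1 C) / det(D); requires D invertible.
    return np.linalg.det(A - B @ np.linalg.inv(D) @ C) / np.linalg.det(D)

X = np.random.rand(4, 4)
X[2:, :2] = 0.0                   # zero lower-left block, per the note above
lhs = berezinian(expm(X), 2)
rhs = np.exp(supertrace(X, 2))
print(np.isclose(lhs, rhs))       # True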
Supermatrix
Physics,Mathematics
2,261
3,510,274
https://en.wikipedia.org/wiki/Centrosymmetry
In crystallography, a centrosymmetric point group contains an inversion center as one of its symmetry elements. In such a point group, for every point (x, y, z) in the unit cell there is an indistinguishable point (-x, -y, -z). Such point groups are also said to have inversion symmetry. Point reflection is a similar term used in geometry. Crystals with an inversion center cannot display certain properties, such as the piezoelectric effect and the frequency doubling effect (second-harmonic generation). In addition, in such crystals, one-photon absorption (OPA) and two-photon absorption (TPA) processes are mutually exclusive, i.e., they do not occur simultaneously, and provide complementary information. The following space groups have inversion symmetry: the triclinic space group 2, the monoclinic 10-15, the orthorhombic 47-74, the tetragonal 83-88 and 123-142, the trigonal 147, 148 and 162-167, the hexagonal 175, 176 and 191-194, and the cubic 200-206 and 221-230 (a simple membership check over these ranges is sketched below). Point groups lacking an inversion center (non-centrosymmetric) can be polar, chiral, both, or neither. A polar point group is one whose symmetry operations leave more than one common point unmoved. A polar point group has no unique origin because each of those unmoved points can be chosen as one. One or more unique polar axes could be made through two such collinear unmoved points. Polar crystallographic point groups include 1, 2, 3, 4, 6, m, mm2, 3m, 4mm, and 6mm. A chiral (often also called enantiomorphic) point group is one containing only proper (often called "pure") rotation symmetry. No inversion, reflection, roto-inversion or roto-reflection (i.e., improper rotation) symmetry exists in such a point group. Chiral crystallographic point groups include 1, 2, 3, 4, 6, 222, 422, 622, 32, 23, and 432. Chiral molecules such as proteins crystallize in chiral point groups. The remaining non-centrosymmetric crystallographic point groups 4̄, 4̄2m, 6̄, 6̄m2 and 4̄3m are neither polar nor chiral. See also Centrosymmetric matrix Rule of mutual exclusion References Symmetry
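The centrosymmetric space-group ranges listed above can be codified directly; a minimal sketch (the function name is illustrative, and the ranges simply restate the list in the text):

CENTROSYMMETRIC_RANGES = [
    (2, 2),                  # triclinic
    (10, 15),                # monoclinic
    (47, 74),                # orthorhombic
    (83, 88), (123, 142),    # tetragonal
    (147, 148), (162, 167),  # trigonal
    (175, 176), (191, 194),  # hexagonal
    (200, 206), (221, 230),  # cubic
]

def is_centrosymmetric(space_group_number):
    return any(lo <= space_group_number <= hi
               for lo, hi in CENTROSYMMETRIC_RANGES)

print(is_centrosymmetric(14))   # True: a common centrosymmetric monoclinic group
print(is_centrosymmetric(19))   # False: a chiral orthorhombic group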
Centrosymmetry
Physics,Mathematics
530
31,443,389
https://en.wikipedia.org/wiki/Zoghman%20Mebkhout
Zoghman Mebkhout (born 1949) (زغمان مبخوت) is a French-Algerian mathematician. He is known for his work in algebraic analysis, geometry and representation theory, more precisely on the theory of D-modules. Career Mebkhout is currently a research director at the French National Centre for Scientific Research (CNRS), and in 2002 he received the Servant Medal from the CNRS, a prize given every two years with an amount of €10,000. Notable works In September 1979 Mebkhout presented the Riemann–Hilbert correspondence, which is a generalization of Hilbert's twenty-first problem to higher dimensions. The original setting was for Riemann surfaces, where it was about the existence of regular differential equations with prescribed monodromy groups. In higher dimensions, Riemann surfaces are replaced by complex manifolds of dimension > 1. Certain systems of partial differential equations (linear and having very special properties for their solutions) and possible monodromies of their solutions correspond. An independent proof of this result was presented by Masaki Kashiwara in April 1980. Mebkhout is now largely known as a specialist in D-module theory. Recognition Mebkhout is one of the first modern international-caliber North African mathematicians. A symposium in Spain was held on his sixtieth birthday. He was invited to the Institute for Advanced Study and gave a talk at the Institut Fourier. In his quasi-autobiographical text Récoltes et semailles, Alexander Grothendieck wrote extensively about what he for a time thought of as gross mistreatment of Mebkhout, in particular in the context of attribution of credit for the formulation and proof of the Riemann–Hilbert correspondence. However, in May 1986, after being contacted by a number of mathematicians involved in the matter, Grothendieck retracted his former viewpoints (which had been based on the direct testimony of Mebkhout) in a number of additions to the manuscript, which for some reason were not included in the eventually published version of the book. References Differential equations Representation theory 1949 births Algerian mathematicians Living people People from Naâma Province 21st-century Algerian people
Zoghman Mebkhout
Mathematics
444
57,738,246
https://en.wikipedia.org/wiki/Janez%20Lawson
Janez Yvonne Lawson Bordeaux (February 22, 1930 – November 24, 1990) was an American chemical engineer who became one of NASA's computers. She was the first African-American hired into a technical position at Jet Propulsion Laboratory. She programmed the IBM 701. Early life and education Lawson was born on February 22, 1930, in Santa Monica, California. Her parents were Hilliard Lawson and Bernice Lawson. She attended Belmont High School and graduated in 1948. Lawson completed a bachelor's degree in chemical engineering at the University of California, Los Angeles in 1952. She was a straight-A student and President of the Delta Sigma Theta sorority. Career Despite her qualifications, Lawson could not get work as a chemical engineer because of her race and gender. She saw an advertisement for a job as a computer in Pasadena. There was discussion about whether or not she should get the job, but Macie Roberts stood up for her. Lawson got the job, and in 1953 was one of the first Jet Propulsion Laboratory employees to be sent to a training course at IBM. Lawson was the first African-American hired into a technical position at Jet Propulsion Laboratory. She was promoted to mathematician in 1954. She became skilled at programming during the course, using a keypunch and learning speedcoding. Lawson lived in Los Angeles and would commute for over an hour to the Jet Propulsion Laboratory every day. Lawson joined the Ramo-Wooldridge Corporation in the late 1950s. References 1930 births 1990 deaths 20th-century African-American women 20th-century African-American scientists 20th-century American engineers 20th-century American mathematicians 20th-century American women mathematicians African-American engineers African-American women engineers 20th-century American women engineers American chemical engineers American computer programmers Delta Sigma Theta members Human computers Jet Propulsion Laboratory faculty Mathematicians from California NASA people Engineers from Los Angeles University of California, Los Angeles alumni 20th-century American chemists
Janez Lawson
Technology
384
77,371,907
https://en.wikipedia.org/wiki/Caloboletus%20conifericola
Caloboletus conifericola, commonly known as the dark bitter bolete, is a species of mushroom in the family Boletaceae. It is found in the Pacific Northwest. Taxonomy Caloboletus conifericola was first described by Alfredo Vizzini in 2014. Description The cap of Caloboletus conifericola is grayish-brown to olive gray and about 3-10 inches (7-25 cm) across. The stipe is about 2-6 inches (5-15 cm) long and about 1-2 inches wide at the top. It starts out wider at the base, but more or less evens out as the mushroom grows older. The pore surface is yellow, and the mushroom oxidizes blue when bruised. Similar species Caloboletus conifericola can be confused with Caloboletus calopus and Caloboletus frustosus. Caloboletus calopus has a more reticulated stipe than C. conifericola, and C. frustosus has a more cracked cap. Habitat and ecology Caloboletus conifericola is found in moss and leaf litter under conifer trees, especially grand fir and western hemlock. It fruits during early fall, soon after the rains come. See also List of North American Boletes References Fungus species Inedible fungi Fungi of North America Fungi described in the 21st century
Caloboletus conifericola
Biology
298
57,676,290
https://en.wikipedia.org/wiki/NGC%203336
NGC 3336 is a barred spiral galaxy located about 190 million light-years away in the constellation Hydra. It was discovered by astronomer John Herschel on March 24, 1835. NGC 3336 is a member of the Hydra Cluster. One supernova has been observed in NGC 3336: SN 1984S (type unknown, mag. 16.8) was discovered by Paul Wild on 23 December 1984. See also NGC 3307 List of NGC objects (3001–4000) References External links Hydra Cluster Hydra (constellation) Barred spiral galaxies Astronomical objects discovered in 1835 Discoveries by John Herschel
NGC 3336
Astronomy
127
8,564,378
https://en.wikipedia.org/wiki/Genesi
Genesi is an international group of technology and consulting companies in the United States, Mexico and Germany. It is most widely known for designing and manufacturing ARM architecture and Power ISA-based computing devices. The Genesi Group consists of Genesi USA Inc., Genesi Americas LLC, Genesi Europe UG, Red Efika, bPlan GmbH and the affiliated non-profit organization Power2People. Genesi is an official Linaro partner and its software development team has been instrumental in moving Linux on the ARM architecture towards a wider adoption of the hard-float application binary interface, which is incompatible with most existing applications but provides enormous performance gains for many use cases. Products The main products of Genesi are ARM-based computers that were designed to be inexpensive, quiet and highly energy efficient, and a custom Open Firmware compliant firmware. All products can run a multitude of operating systems. Current products Aura - A comprehensive abstraction layer for embedded and desktop devices, with UEFI and IEEE1275. Desktop systems with AGP or PCI/PCI Express may take advantage of an embedded x86/BIOS emulator providing boot functionality for standard graphics cards. EFIKA MX53 EFIKA MX6 Discontinued products EFIKA MX Smarttop - A highly energy efficient and compact computing device (complete system) powered by a Freescale ARM iMX515 CPU. EFIKA MX Smartbook - A 10" smartbook (complete system) powered by the Freescale ARM iMX515 CPU. High Density Blade - PowerPC based high density blade server. Home Media Center - PowerPC based digital video recorder. EFIKA 5200B - A small Open Firmware-based motherboard powered by a Freescale MPC5200B SoC processor with 128 MB RAM, a 44-pin ATA connector for a 2.5" hard drive, sound in/out, USB, Ethernet, serial port, and a PCI slot. Open Client - thin clients available with Freescale's Power Architecture or ARM SoCs. Pegasos - An Open Firmware-based MicroATX motherboard powered by a PowerPC G3/G4 microprocessor, featuring PCI slots, AGP, Ethernet, USB, DDR and FireWire. Open Desktop Workstation – A Pegasos II based computer featuring a Freescale PowerPC 7447 processor. Complete specifications for the hardware are available through Genesi's PowerDeveloper.org website. Community support Genesi designed and maintains PowerDeveloper, an online platform for Genesi products and ARM products from other manufacturers. Via the PowerDeveloper Projects programs, hundreds of systems have been provided to the PowerDeveloper community so far, thereby supporting open source development in many countries. Linux distributions that directly benefited from the programs include but are not limited to Crux, Debian, Raspbian, Fedora, Gentoo, openSuSE and Ubuntu. Genesi once funded the development of the MorphOS operating system but shifted its focus towards Linux in 2004. However, Genesi remains the main supporter of the operating system and continues to actively support its user and developer communities via the MorphZone social platform, which features discussion forums, a digital library, a software repository and a bounty system. External links Genesi USA Inc. Power2People PowerDeveloper MorphZone Genesi Group Genesi Americas Red Efika bplan Notes ARM architecture Computer companies of the United States Computer hardware companies Amiga companies
Genesi
Technology
723
3,214
https://en.wikipedia.org/wiki/Amplifier%20figures%20of%20merit
In electronics, the figures of merit of an amplifier are numerical measures that characterize its properties and performance. Figures of merit can be given as a list of specifications that include properties such as gain, bandwidth, noise and linearity, among others listed in this article. Figures of merit are important for determining the suitability of a particular amplifier for an intended use. Gain The gain of an amplifier is the ratio of output to input power or amplitude, and is usually measured in decibels. When measured in decibels it is logarithmically related to the power ratio: G(dB) = 10 log10(Pout/Pin). RF amplifiers are often specified in terms of the maximum power gain obtainable, while the voltage gain of audio amplifiers and instrumentation amplifiers will more often be specified. For example, an audio amplifier with a gain given as 20 dB will have a voltage gain of ten (voltage ratios use 20 log10, so 20 dB corresponds to a ratio of 10). The use of a voltage gain figure is appropriate when the amplifier's input impedance is much higher than the source impedance, and the load impedance is higher than the amplifier's output impedance. If two equivalent amplifiers are being compared, the amplifier with the higher gain would be more sensitive, as it would take less input signal to produce a given amount of output power. Bandwidth The bandwidth of an amplifier is the range of frequencies for which the amplifier gives "satisfactory performance". The definition of "satisfactory performance" may differ between applications. However, a common and well-accepted metric is the half-power points (i.e. the frequencies where the power falls to half its peak value) on the output vs. frequency curve. Bandwidth can therefore be defined as the difference between the lower and upper half-power points; this is also known as the −3 dB bandwidth. Bandwidths (otherwise called "frequency responses") for other response tolerances are sometimes quoted, for example "plus or minus 1 dB" (roughly the sound level difference people usually can detect). The gain of a good quality full-range audio amplifier will be essentially flat from 20 Hz to about 20 kHz (the range of normal human hearing). In ultra-high-fidelity amplifier design, the amplifier's frequency response should extend considerably beyond this (one or more octaves either side) and might have half-power points below 10 Hz and well above the audible range. Professional touring amplifiers often have input and/or output filtering to sharply limit frequency response beyond the audible range; too much of the amplifier's potential output power would otherwise be wasted on infrasonic and ultrasonic frequencies, and the danger of AM radio interference would increase. Modern switching amplifiers need steep low-pass filtering at the output to get rid of high-frequency switching noise and harmonics. The range of frequency over which the gain is equal to or greater than 70.7% of its maximum gain (the half-power condition expressed as a voltage ratio, since 1/√2 ≈ 0.707) is termed the bandwidth. Efficiency Efficiency is a measure of how much of the power source is usefully applied to the amplifier's output. Class A amplifiers are very inefficient, in the range of 10-20%, with a maximum efficiency of 25% for direct coupling of the output. Inductive coupling of the output can raise their efficiency to a maximum of 50%. Drain efficiency is the ratio of output RF power to input DC power when the primary input DC power has been fed to the drain of a field-effect transistor.
Based on this definition, the drain efficiency cannot exceed 25% for a class A amplifier that is supplied drain bias current through resistors (because the RF signal has its zero level at about 50% of the input DC). Manufacturers specify much higher drain efficiencies, and designers are able to obtain higher efficiencies by providing current to the drain of the transistor through an inductor or a transformer winding. In this case the RF zero level is near the DC rail, and the signal will swing both above and below the rail during operation. While the voltage level is above the DC rail, current is supplied by the inductor. Class B amplifiers have a very high efficiency but are impractical for audio work because of high levels of distortion (see crossover distortion). In practical design, the result of a tradeoff is the class AB design. Modern class AB amplifiers commonly have peak efficiencies between 30 and 55% in audio systems and 50-70% in radio frequency systems, with a theoretical maximum of 78.5%. Commercially available class D switching amplifiers have reported efficiencies as high as 90%. Amplifiers of classes C-F are usually known to be very high-efficiency amplifiers. RCA manufactured an AM broadcast transmitter employing a single class-C low-mu triode with an RF efficiency in the 90% range. More efficient amplifiers run cooler, and often do not need any cooling fans even in multi-kilowatt designs, because the energy lost in converting power is dissipated as heat: a more efficient amplifier loses less energy and therefore produces less heat. In RF linear power amplifiers, such as cellular base stations and broadcast transmitters, special design techniques can be used to improve efficiency. Doherty designs, which use a second output stage as a "peak" amplifier, can lift efficiency from the typical 15% up to 30-35% in a narrow bandwidth. Envelope tracking designs are able to achieve efficiencies of up to 60% by modulating the supply voltage to the amplifier in line with the envelope of the signal. Linearity An ideal amplifier would be a totally linear device, but real amplifiers are only linear within limits. When the signal drive to the amplifier is increased, the output also increases until a point is reached where some part of the amplifier becomes saturated and cannot produce any more output; this is called clipping, and results in distortion. In most amplifiers a reduction in gain takes place before hard clipping occurs; the result is a compression effect, which (if the amplifier is an audio amplifier) sounds much less unpleasant to the ear. For these amplifiers, the 1 dB compression point is defined as the input power (or output power) where the gain is 1 dB less than the small-signal gain. Sometimes this nonlinearity is deliberately designed in to reduce the audible unpleasantness of hard clipping under overload. Ill effects of nonlinearity can be reduced with negative feedback. Linearization is an active field, and there are many techniques, such as feedforward, predistortion and postdistortion, that are used to avoid the undesired effects of the nonlinearities. Noise This is a measure of how much noise is introduced in the amplification process. Noise is an undesirable but inevitable product of the electronic devices and components; also, much noise results from intentional economies of manufacture and design time. The metric for noise performance of a circuit is the noise figure or noise factor.
Noise figure is a comparison between the output signal-to-noise ratio and the thermal noise of the input signal. Output dynamic range Output dynamic range is the range, usually given in dB, between the smallest and largest useful output levels. The lowest useful level is limited by output noise, while the largest is limited most often by distortion. The ratio of these two is quoted as the amplifier dynamic range. More precisely, if S = maximal allowed signal power and N = noise power, the dynamic range DR is DR = (S + N)/N. In many switched-mode amplifiers, dynamic range is limited by the minimum output step size. Slew rate Slew rate is the maximum rate of change of the output, usually quoted in volts per second (or microsecond). Many amplifiers are ultimately slew rate limited (typically by the impedance of a drive current having to overcome capacitive effects at some point in the circuit), which sometimes limits the full power bandwidth to frequencies well below the amplifier's small-signal frequency response. Rise time The rise time, tr, of an amplifier is the time taken for the output to change from 10% to 90% of its final level when driven by a step input. For a Gaussian response system (or a simple RC roll-off), the rise time is approximated by tr * BW = 0.35, where tr is the rise time in seconds and BW is the bandwidth in Hz. Settling time and ringing The time taken for the output to settle to within a certain percentage of the final value (for instance 0.1%) is called the settling time, and is usually specified for oscilloscope vertical amplifiers and high-accuracy measurement systems. Ringing refers to an output variation that cycles above and below an amplifier's final value and leads to a delay in reaching a stable output. Ringing is the result of overshoot caused by an underdamped circuit. Overshoot In response to a step input, the overshoot is the amount the output exceeds its final, steady-state value. Stability Stability is an issue in all amplifiers with feedback, whether that feedback is added intentionally or results unintentionally. It is especially an issue when applied over multiple amplifying stages. Stability is a major concern in RF and microwave amplifiers. The degree of an amplifier's stability can be quantified by a so-called stability factor. There are several different stability factors, such as the Stern stability factor and the Linvill stability factor, which specify a condition that must be met for the absolute stability of an amplifier in terms of its two-port parameters. See also Audio system measurements Low-noise amplifier References External links Efficiency of Microwave Devices RF Power Amplifier Testing Electronic amplifiers
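The figures of merit above lend themselves to a brief worked illustration. The following Python sketch is not part of the original article; the function names are illustrative, and it simply evaluates the gain, rise-time and dynamic-range relations quoted in this section:

import math

def power_gain_db(p_out, p_in):
    # Power gain in decibels: G(dB) = 10 log10(Pout/Pin)
    return 10 * math.log10(p_out / p_in)

def voltage_gain_db(v_out, v_in):
    # Voltage gain in decibels: G(dB) = 20 log10(Vout/Vin)
    return 20 * math.log10(v_out / v_in)

def rise_time(bw_hz):
    # Gaussian/RC approximation from above: tr * BW = 0.35
    return 0.35 / bw_hz

def dynamic_range_db(s, n):
    # DR = (S + N)/N, expressed here in decibels
    return 10 * math.log10((s + n) / n)

print(voltage_gain_db(10, 1))  # 20.0 dB: a voltage gain of ten
print(rise_time(1e6))          # 3.5e-07 s rise time for a 1 MHz bandwidth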
Amplifier figures of merit
Technology
1,957
62,217,733
https://en.wikipedia.org/wiki/Paul%20J.%20Tesar
Paul J. Tesar is an American developmental biologist. He is the Dr. Donald and Ruth Weber Goodman Professor of Innovative Therapeutics at Case Western Reserve University School of Medicine. His research is focused on regenerative medicine. Early life and education Tesar was born in Cleveland, Ohio. He graduated with a BSc in biology from Case Western Reserve University in 2003. As part of the National Institutes of Health Oxford-Cambridge Scholars Program, he earned a PhD in 2007. Career While a graduate student, Tesar published a paper describing epiblast-derived stem cells, a new type of pluripotent stem cell, research for which he received both the Beddington Medal of the British Society for Developmental Biology and the Harold M. Weintraub Award of the Fred Hutchinson Cancer Research Center. In 2010 he returned to Case Western Reserve University School of Medicine to teach. In 2014 he was appointed to the Dr. Donald and Ruth Weber Goodman chair in innovative therapeutics. Research Tesar developed methods to generate and grow oligodendrocytes and oligodendrocyte progenitor cells (OPCs) from pluripotent stem cells and skin cells. He also made human brain organoids containing human myelin, called oligocortical spheroids. Tesar identified drugs that stimulate myelin regeneration and reverse paralysis in mice with multiple sclerosis. He also identified CRISPR and antisense oligonucleotide therapeutics that restored myelination and extended the lifespan of mice with Pelizaeus–Merzbacher disease. Awards Beddington Medal from the British Society for Developmental Biology Harold M. Weintraub Award Outstanding Young Investigator Award, International Society for Stem Cell Research Senior Member of the National Academy of Inventors References American medical researchers Stem cell researchers Living people Case Western Reserve University faculty 1981 births Case Western Reserve University School of Medicine alumni Alumni of the University of Oxford
Paul J. Tesar
Biology
389
981,655
https://en.wikipedia.org/wiki/Integer%20square%20root
In number theory, the integer square root (isqrt) of a non-negative integer n is the non-negative integer which is the greatest integer less than or equal to the square root of n; that is, isqrt(n) = ⌊√n⌋. For example, isqrt(27) = ⌊√27⌋ = ⌊5.19615...⌋ = 5. Introductory remark Let y and k be non-negative integers. Algorithms that compute (the decimal representation of) √y run forever on each input y which is not a perfect square. Algorithms that compute isqrt(y) do not run forever. They are nevertheless capable of computing √y up to any desired accuracy. Choose any k and compute isqrt(y · 100^k). For example (setting y = 2): isqrt(2 · 100^0) = 1, isqrt(2 · 100^1) = 14, isqrt(2 · 100^2) = 141, isqrt(2 · 100^3) = 1414, isqrt(2 · 100^4) = 14142. Compare the results with √2 = 1.41421356... It appears that the multiplication of the input by 100^k gives an accuracy of k decimal digits. To compute the (entire) decimal representation of √y, one can execute isqrt an infinite number of times, increasing y by a factor 100 at each pass. Assume that in the next program (sqrtForever) the procedure isqrt(y) is already defined and, for the sake of the argument, that all variables can hold integers of unlimited magnitude. Then sqrtForever(y) will print the entire decimal representation of √y.

// Print sqrt(y), without halting
void sqrtForever(unsigned int y)
{
    unsigned int result = isqrt(y);
    printf("%d.", result); // print result, followed by a decimal point
    while (true) // repeat forever ...
    {
        y = y * 100; // theoretical example: overflow is ignored
        result = isqrt(y);
        printf("%d", result % 10); // print last digit of result
    }
}

The conclusion is that algorithms which compute isqrt are computationally equivalent to algorithms which compute sqrt. Basic algorithms The integer square root of a non-negative integer y can be defined as isqrt(y) = ⌊√y⌋, i.e. the greatest non-negative integer L such that L² ≤ y. For example, isqrt(27) = 5, because 5² = 25 ≤ 27 and 6² = 36 > 27. Algorithm using linear search The simplest approach checks L = 0, 1, 2, ... in turn, stopping at the smallest L with (L + 1)² > y. Linear search using addition In such a program one can replace the multiplication (L + 1) · (L + 1) by addition, using the equivalence (L + 1)² = L² + 2L + 1. The following C program is a straightforward implementation.

// Integer square root
// (linear search, ascending) using addition
unsigned int isqrt(unsigned int y)
{
    unsigned int L = 0;
    unsigned int a = 1; // a = (L + 1) ^ 2
    unsigned int d = 3; // d = 2L + 3
    while (a <= y)
    {
        a = a + d; // (L + 2) ^ 2
        d = d + 2;
        L = L + 1;
    }
    return L;
}

Algorithm using binary search Linear search sequentially checks every value until it hits the smallest L where (L + 1)² > y. A speed-up is achieved by using binary search instead. The following C program is an implementation.

// Integer square root (using binary search)
unsigned int isqrt(unsigned int y)
{
    unsigned int L = 0;
    unsigned int M;
    unsigned int R = y + 1;
    while (L != R - 1)
    {
        M = (L + R) / 2; // invariant: L^2 <= y < R^2
        if (M * M <= y)
            L = M;
        else
            R = M;
    }
    return L;
}

Numerical example For example, if one computes isqrt(2000000) using binary search, one obtains a sequence of nested intervals converging to the result 1414. This computation takes 21 iteration steps, whereas linear search (ascending, starting from 0) needs 1415 steps. Algorithm using Newton's method One way of calculating √y and isqrt(y) is to use Heron's method, which is a special case of Newton's method, to find a solution for the equation x² − y = 0, giving the iterative formula x_(k+1) = (x_k + y/x_k)/2 with x_0 > 0. The sequence x_k converges quadratically to √y as k → ∞. Stopping criterion One can prove that c = 1 is the largest possible number for which the stopping criterion |x_(k+1) − x_k| < c ensures ⌊x_(k+1)⌋ = ⌊√y⌋ in the algorithm above. In implementations which use number formats that cannot represent all rational numbers exactly (for example, floating point), a stopping constant less than 1 should be used to protect against round-off errors. Domain of computation Although √y is irrational for many y, the sequence x_k contains only rational terms when x_0 is rational. Thus, with this method it is unnecessary to exit the field of rational numbers in order to calculate isqrt(y), a fact which has some theoretical advantages.
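As a sketch of the preceding remark, the iteration can be carried out entirely in exact rational arithmetic, for example with Python's fractions.Fraction. The start value x0 = y and the helper name isqrt_heron are choices made for this illustration, not part of the article:

from fractions import Fraction

def isqrt_heron(y: int) -> int:
    # Heron's method over the rationals: every iterate stays in Q,
    # so the stopping constant c = 1 is safe (no round-off occurs).
    if y < 2:
        return y
    x = Fraction(y)  # any rational start value >= sqrt(y) works
    while True:
        x_next = (x + y / x) / 2
        if x - x_next < 1:  # iterates decrease monotonically toward sqrt(y)
            return int(x_next)  # floor, since x_next >= sqrt(y) > 0
        x = x_next

print(isqrt_heron(2000000))  # 1414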
Using only integer division For computing isqrt(n) for very large integers n, one can use the quotient of Euclidean division for both of the division operations. This has the advantage of only using integers for each intermediate value, thus making the use of floating point representations of large numbers unnecessary. It is equivalent to using the iterative formula x_(k+1) = ⌊(x_k + ⌊n/x_k⌋)/2⌋. By using the fact that ⌊(x_k + ⌊n/x_k⌋)/2⌋ = ⌊(x_k + n/x_k)/2⌋, one can show that this will reach ⌊√n⌋ within a finite number of iterations. In the original version, one has x_k ≥ √n for k ≥ 1, and x_k > x_(k+1) whenever x_k > √n. So in the integer version, one has ⌊x_k⌋ ≥ ⌊√n⌋ and x_k > x_(k+1) until the final solution x_s = ⌊√n⌋ is reached. For the final solution, one has x_(s+1) ≥ x_s, so the stopping criterion is x_(k+1) ≥ x_k. However, ⌊√n⌋ is not necessarily a fixed point of the above iterative formula. Indeed, it can be shown that ⌊√n⌋ is a fixed point if and only if n + 1 is not a perfect square. If n + 1 is a perfect square, the sequence ends up in a period-two cycle between ⌊√n⌋ and ⌊√n⌋ + 1 instead of converging. Example implementation in C

// Square root of integer
unsigned int int_sqrt(unsigned int s)
{
    // Zero yields zero
    // One yields one
    if (s <= 1)
        return s;

    // Initial estimate (must be too high)
    unsigned int x0 = s / 2;

    // Update
    unsigned int x1 = (x0 + s / x0) / 2;

    while (x1 < x0) // Bound check
    {
        x0 = x1;
        x1 = (x0 + s / x0) / 2;
    }
    return x0;
}

Numerical example For example, if one computes the integer square root of 2000000 using the algorithm above (so x_0 = 1000000), one obtains a strictly decreasing sequence that reaches 1414; in total 13 iteration steps are needed. Although Heron's method converges quadratically close to the solution, less than one bit of precision per iteration is gained at the beginning. This means that the choice of the initial estimate is critical for the performance of the algorithm. When a fast computation for the integer part of the binary logarithm or for the bit length is available (like e.g. std::bit_width in C++20), one should better start at x_0 = 2^⌈b/2⌉, where b is the bit length of s, which is the least power of two bigger than √s. In the example of the integer square root of 2000000, the bit length is b = 21, so x_0 = 2^11 = 2048, and the resulting sequence is 2048, 1512, 1417, 1414. In this case only four iteration steps are needed. Digit-by-digit algorithm The traditional pen-and-paper algorithm for computing the square root is based on working from higher digit places to lower, picking as each new digit the largest value that still yields a square not exceeding the input. If stopping after the one's place, the result computed will be the integer square root. Using bitwise operations If working in base 2, the choice of digit is simplified to that between 0 (the "small candidate") and 1 (the "large candidate"), and digit manipulations can be expressed in terms of binary shift operations. With * being multiplication, << being left shift, and >> being logical right shift, a recursive algorithm to find the integer square root of any natural number is:

def integer_sqrt(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Recursive call:
    small_cand = integer_sqrt(n >> 2) << 1
    large_cand = small_cand + 1
    if large_cand * large_cand > n:
        return small_cand
    else:
        return large_cand

# equivalently:
def integer_sqrt_iter(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Find the shift amount. See also find first set,
    # shift = ceil(log2(n) * 0.5) * 2 = ceil(ffs(n) * 0.5) * 2
    shift = 2
    while (n >> shift) != 0:
        shift += 2

    # Unroll the bit-setting loop.
    result = 0
    while shift >= 0:
        result = result << 1
        large_cand = result + 1  # Same as result ^ 1 (xor), because the last bit is always 0.
        if large_cand * large_cand <= n >> shift:
            result = large_cand
        shift -= 2

    return result

Traditional pen-and-paper presentations of the digit-by-digit algorithm include various optimizations not present in the code above, in particular the trick of pre-subtracting the square of the previous digits, which makes a general multiplication step unnecessary; see Methods of computing square roots for an example. Karatsuba square root algorithm The Karatsuba square root algorithm is a combination of two functions: a public function, which returns the integer square root of the input, and a recursive private function, which does the majority of the work. The public function normalizes the actual input, passes the normalized input to the private function, denormalizes the result of the private function, and returns that. The private function takes a normalized input, divides the input bits in half, passes the most-significant half of the input recursively to the private function, and performs some integer operations on the output of that recursive call and the least-significant half of the input to get the normalized output, which it returns. For big integers of "50 to 1,000,000 digits", Burnikel-Ziegler Karatsuba division and Karatsuba multiplication are recommended by the algorithm's creator. An example algorithm for 64-bit unsigned integers is below. The algorithm: Normalizes the input inside u64_isqrt. Calls u64_normalized_isqrt_rem, which requires a normalized input. Calls u32_normalized_isqrt_rem with the most-significant half of the normalized input's bits, which will already be normalized as the most-significant bits remain the same. Continues on recursively until there's an algorithm that's faster when the number of bits is small enough. u64_normalized_isqrt_rem then takes the returned integer square root and remainder to produce the correct results for the given normalized u64. u64_isqrt then denormalizes the result.

/// Performs a Karatsuba square root on a `u64`.
pub fn u64_isqrt(mut n: u64) -> u64 {
    if n <= u32::MAX as u64 {
        // If `n` fits in a `u32`, let the `u32` function handle it.
        return u32_isqrt(n as u32) as u64;
    } else {
        // The normalization shift satisfies the Karatsuba square root
        // algorithm precondition "a₃ ≥ b/4" where a₃ is the most
        // significant quarter of `n`'s bits and b is the number of
        // values that can be represented by that quarter of the bits.
        //
        // b/4 would then be all 0s except the second most significant
        // bit (010...0) in binary. Since a₃ must be at least b/4, a₃'s
        // most significant bit or its neighbor must be a 1. Since a₃'s
        // most significant bits are `n`'s most significant bits, the
        // same applies to `n`.
        //
        // The reason to shift by an even number of bits is because an
        // even number of bits produces the square root shifted to the
        // left by half of the normalization shift:
        //
        // sqrt(n << (2 * p))
        // sqrt(2.pow(2 * p) * n)
        // sqrt(2.pow(2 * p)) * sqrt(n)
        // 2.pow(p) * sqrt(n)
        // sqrt(n) << p
        //
        // Shifting by an odd number of bits leaves an ugly sqrt(2)
        // multiplied in.
        const EVEN_MAKING_BITMASK: u32 = !1;
        let normalization_shift = n.leading_zeros() & EVEN_MAKING_BITMASK;
        n <<= normalization_shift;

        let (s, _) = u64_normalized_isqrt_rem(n);

        let denormalization_shift = normalization_shift / 2;
        return s >> denormalization_shift;
    }
}

/// Performs a Karatsuba square root on a normalized `u64`, returning the square
/// root and remainder.
fn u64_normalized_isqrt_rem(n: u64) -> (u64, u64) {
    const HALF_BITS: u32 = u64::BITS >> 1;
    const QUARTER_BITS: u32 = u64::BITS >> 2;
    const LOWER_HALF_1_BITS: u64 = (1 << HALF_BITS) - 1;

    debug_assert!(
        n.leading_zeros() <= 1,
        "Input is not normalized: {n} has {} leading zero bits, instead of 0 or 1.",
        n.leading_zeros()
    );

    let hi = (n >> HALF_BITS) as u32;
    let lo = n & LOWER_HALF_1_BITS;

    let (s_prime, r_prime) = u32_normalized_isqrt_rem(hi);

    let numerator = ((r_prime as u64) << QUARTER_BITS) | (lo >> QUARTER_BITS);
    let denominator = (s_prime as u64) << 1;

    let q = numerator / denominator;
    let u = numerator % denominator;

    let mut s = (s_prime << QUARTER_BITS) as u64 + q;
    let mut r = (u << QUARTER_BITS) | (lo & ((1 << QUARTER_BITS) - 1));

    let q_squared = q * q;
    if r < q_squared {
        r += 2 * s - 1;
        s -= 1;
    }
    r -= q_squared;

    return (s, r);
}

In programming languages Some programming languages dedicate an explicit operation to the integer square root calculation in addition to the general case, or can be extended by libraries to this end. See also Methods of computing square roots Notes References External links Number theoretic algorithms Number theory Root-finding algorithms
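For example, Python's standard library has provided such a dedicated operation, math.isqrt, since Python 3.8; a quick check against the worked examples in this article:

import math

for n in (0, 1, 2, 27, 2000000):
    print(n, math.isqrt(n))
# 27 -> 5 and 2000000 -> 1414, matching the numerical examples above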
Integer square root
Mathematics
3,002
37,445,416
https://en.wikipedia.org/wiki/AN/MSQ-35%20Bomb%20Scoring%20Central
The Reeves AN/MSQ-35 Bomb Scoring Central was a United States Air Force dual radar system with computerized plotting board. It was used by the 1st Combat Evaluation Group to evaluate the accuracy of Strategic Air Command bomber crews. Description The central had a 20 rpm Acquisition Radar System with a variable "fan-shaped beam" in elevation and an Interrogator Set AN/TPX-27 for identification friend or foe. The central's trailer van for operations had the separate AN/MSQ-54 Bomb Scoring Set with an automatic tracking radar group (OA-450/FSA-4 Receiver-Transmitter Control Group), a computer group with analog vacuum tube circuitry and, on the roof, the antenna group. A communications group provided a link for receiving the aircraft's signal at simulated bomb release, and additional vehicles included a V-280 maintenance van and a V-287 flatbed trailer for transporting the radar/IFF antennas. Operation In addition to providing a conventional plot of the aircraft's bomb run, the computer group automated the previously manual "bomb plot", in which technicians used time-of-flight charts for air drag and drew lengths of line from the release point to the vacuum impact point and then to an estimated impact point corrected for drag and crosswind. By automatically using "ballistic data" such as time-of-fall to determine the vacuum impact point, the AN/MSQ-35 computed bomb "score data [that was] printed out on tape" from a paper roll. Development and training The X-band Western Electric M-33 Fire Control System "was utilized as a basic building block" for the X-band AN/MSQ-35, and "techniques established during [the] AN/USQ-9 development program were utilized in the AN/MSQ-35 production program." In 1962 the Reeves Instrument Corporation production facility hosted the first class for AN/MSQ-35 operators, and the central's field testing took place in early 1963 at White Sands Missile Range. Deployment The AN/MSQ-35 military school was initially at the Aberdeen Proving Ground (where the M-33 school had been located) and the central eventually became the instructional radar used at the 40-week Keesler Air Force Base technical training school for Automatic Tracking Radar Specialists (AUTOTRACK). The Final Engineering Report for the AN/MSQ-35 was published in December 1965, after it was used as the basis for developing the 1965 Reeves AN/MSQ-77 Bomb Directing Central with integrating computer for Combat Skyspot ground-directed bombing in the then-ongoing Vietnam War. See also List of military electronics of the United States References Aviation ground support equipment Computer systems of the United States Air Force Radars of the United States Air Force 1962 in military history 1965 in military history Computer-related introductions in 1962 Analog computers Ballistics Ground radars Military equipment introduced in the 1960s Military electronics of the United States Aerial warfare ground equipment
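To make the scoring idea concrete, the following Python sketch (illustrative only; the actual central used analog circuitry and empirical ballistic data) computes a vacuum impact point of the kind described above, i.e. where a bomb would land with air drag neglected:

import math

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_impact_range(altitude_m, ground_speed_ms):
    # Time to fall from release altitude in a vacuum: t = sqrt(2h/g);
    # the bomb keeps the aircraft's ground speed, so range = v * t.
    time_of_fall = math.sqrt(2 * altitude_m / G)
    return ground_speed_ms * time_of_fall

# Release at 9000 m altitude and 250 m/s ground speed:
print(vacuum_impact_range(9000, 250))  # roughly 10.7 km ahead of release

Drag and crosswind corrections, supplied by time-of-flight charts in the manual procedure, would then displace this point to the estimated impact point.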
AN/MSQ-35 Bomb Scoring Central
Physics
598
31,737,675
https://en.wikipedia.org/wiki/Omnitruncated%205-simplex%20honeycomb
In five-dimensional Euclidean geometry, the omnitruncated 5-simplex honeycomb or omnitruncated hexateric honeycomb is a space-filling tessellation (or honeycomb). It is composed entirely of omnitruncated 5-simplex facets. The facets of all omnitruncated simplectic honeycombs are called permutahedra and can be positioned in (n+1)-space with integral coordinates, as the permutations of the whole numbers (0, 1, ..., n). A5* lattice The A5* lattice (also called A5^6) is the union of six A5 lattices, and is the dual vertex arrangement to the omnitruncated 5-simplex honeycomb; therefore the Voronoi cell of this lattice is an omnitruncated 5-simplex. Related polytopes and honeycombs Projection by folding The omnitruncated 5-simplex honeycomb can be projected into the 3-dimensional omnitruncated cubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same 3-space vertex arrangement. See also Regular and uniform honeycombs in 5-space: 5-cube honeycomb 5-demicube honeycomb 5-simplex honeycomb Notes References Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) 6-polytopes
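The coordinate description above is easy to verify computationally; this small Python check (an illustration, not from the article) generates the vertices of a single permutahedral facet for n = 5:

from itertools import permutations

# Vertices of one omnitruncated 5-simplex (permutahedron) facet:
# all permutations of (0, 1, 2, 3, 4, 5) in 6-space.
vertices = list(permutations(range(6)))
print(len(vertices))  # 720
# Every vertex lies in the hyperplane x0 + ... + x5 = 15,
# so the facet is genuinely 5-dimensional.
assert all(sum(v) == 15 for v in vertices)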
Omnitruncated 5-simplex honeycomb
Physics,Chemistry,Materials_science
445
26,561
https://en.wikipedia.org/wiki/Rank%20%28linear%20algebra%29
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics. The rank is commonly denoted by rank(A) or rk(A); sometimes the parentheses are not written, as in rank A. Main definitions In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these. The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A. A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in the section Proofs that column rank = row rank, below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A. A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank. The rank of a linear map or operator Φ is defined as the dimension of its image: rank(Φ) = dim(img(Φ)), where dim is the dimension of a vector space, and img is the image of a map. Examples The matrix [[1, 0, 1], [0, 1, 1], [0, 0, 0]] has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3. The matrix A = [[1, 1], [2, 2]] has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose A^T = [[1, 2], [1, 2]] of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., rank(A) = rank(A^T). Computing the rank of a matrix Rank from row echelon forms A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows. For example, the matrix A = [[1, 2, 1], [-2, -3, 1], [3, 5, 0]] can be put in reduced row echelon form by using the following elementary row operations: add twice the first row to the second, subtract three times the first row from the third, add the new second row to the new third row, and finally subtract twice the new second row from the first, giving [[1, 0, -5], [0, 1, 3], [0, 0, 0]]. The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix A is 2. Computation When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead.
An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application. Proofs that column rank = row rank Proof using row reduction The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in the section on row echelon forms above. Here is a variant of this proof: It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix are the number of its nonzero entries. We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014). Proof using linear combinations Let A be an m × n matrix. Let the column rank of A be r, and let c1, ..., cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose ith column is formed from the coefficients giving the ith column of A as a linear combination of the r columns of C. In other words, R is the matrix which contains the multiples for the bases of the column space of A (which is C), which are then used to form A as a whole. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A. This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see Rank factorization.) Proof using orthogonality Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, ..., xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, ..., Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, ..., cr: 0 = c1·Ax1 + c2·Ax2 + ⋯ + cr·Axr = A(c1x1 + c2x2 + ⋯ + crxr) = Av, where v = c1x1 + c2x2 + ⋯ + crxr.
We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v, c1x1 + c2x2 + ⋯ + crxr = 0. But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that c1 = c2 = ⋯ = cr = 0. It follows that Ax1, Ax2, ..., Axr are linearly independent. Now, each Axi is obviously a vector in the column space of A. So, Ax1, Ax2, ..., Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least as big as r. This proves that the row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof. Alternative definitions In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F. Dimension of image Given the matrix A, there is an associated linear mapping f : F^n → F^m defined by f(x) = Ax. The rank of A is the dimension of the image of f. This definition has the advantage that it can be applied to any linear map without need for a specific matrix. Rank in terms of nullity Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The rank–nullity theorem states that this definition is equivalent to the preceding one. Column rank – dimension of column space The rank of A is the maximal number of linearly independent columns of A; this is the dimension of the column space of A (the column space being the subspace of F^m generated by the columns of A, which is in fact just the image of the linear map f associated to A). Row rank – dimension of row space The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A. Decomposition rank The rank of A is the smallest integer k such that A can be factored as A = CR, where C is an m × k matrix and R is a k × n matrix. In fact, for all integers k, the following are equivalent: (1) the column rank of A is less than or equal to k, (2) there exist k columns c1, ..., ck of size m such that every column of A is a linear combination of c1, ..., ck, (3) there exist an m × k matrix C and a k × n matrix R such that A = CR (when k is the rank, this is a rank factorization of A), (4) there exist k rows r1, ..., rk of size n such that every row of A is a linear combination of r1, ..., rk, (5) the row rank of A is less than or equal to k. Indeed, the equivalences (1) ⇔ (2) ⇔ (3) and (3) ⇔ (4) ⇔ (5) are straightforward. For example, to prove (3) from (2), take C to be the matrix whose columns are c1, ..., ck from (2). To prove (2) from (3), take c1, ..., ck to be the columns of C. It follows from the chain of equivalences that the row rank is equal to the column rank. As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details. Rank in terms of singular values The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition A = UΣV*. Determinantal rank – size of largest non-vanishing minor The rank of A is the largest order of any non-zero minor in A.
(The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix. A non-vanishing p-minor (p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent). Tensor rank – minimum number of simple tensors The rank of A is the smallest number k such that A can be written as a sum of k rank-1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product c · r of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition. Properties We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above. The rank of an m × n matrix is a nonnegative integer and cannot be greater than either m or n. That is, rank(A) ≤ min(m, n). A matrix that has rank min(m, n) is said to have full rank; otherwise, the matrix is rank deficient. Only a zero matrix has rank zero. f is injective (or "one-to-one") if and only if A has rank n (in this case, we say that A has full column rank). f is surjective (or "onto") if and only if A has rank m (in this case, we say that A has full row rank). If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank). If B is any n × k matrix, then rank(AB) ≤ min(rank(A), rank(B)). If B is an n × k matrix of rank n, then rank(AB) = rank(A). If C is an l × m matrix of rank m, then rank(CA) = rank(A). The rank of A is equal to r if and only if there exist an invertible m × m matrix X and an invertible n × n matrix Y such that XAY = [[I_r, 0], [0, 0]], where I_r denotes the r × r identity matrix. Sylvester's rank inequality: if A is an m × n matrix and B is n × k, then rank(A) + rank(B) − n ≤ rank(AB). This is a special case of the next inequality. The inequality due to Frobenius: if AB, ABC and BC are defined, then rank(AB) + rank(BC) ≤ rank(B) + rank(ABC). Subadditivity: rank(A + B) ≤ rank(A) + rank(B) when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer. The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.) If A is a matrix over the real numbers then the rank of A and the rank of its corresponding Gram matrix are equal. Thus, for real matrices rank(A^T A) = rank(A A^T) = rank(A) = rank(A^T). This can be shown by proving equality of their null spaces.
The null space of the Gram matrix is given by vectors x for which A^T A x = 0. If this condition is fulfilled, we also have 0 = x^T A^T A x = |A x|^2, so that A x = 0 and the two null spaces coincide. If A is a matrix over the complex numbers, then the ranks of A, its complex conjugate, its transpose A^T and its conjugate transpose A* (i.e., the adjoint of A) all coincide, and equal rank(A* A). Applications One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions. In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable. In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function. Generalization There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist. Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order-2 tensors), rank is very hard to compute, unlike for matrices. There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative. Matrices as tensors Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details. The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and this definition does agree with matrix rank as here discussed. See also Matroid rank Nonnegative rank (linear algebra) Rank (differential topology) Multicollinearity Linear dependence Notes References Sources Further reading Kaw, Autar K. Two Chapters from the book Introduction to Matrix Algebra: 1. Vectors and System of Equations Mike Brookes: Matrix Reference Manual. Linear algebra
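As a practical complement to the computation section above, the rank of the row-reduction example matrix can be checked numerically. The sketch below uses NumPy (a choice made for this illustration, not part of the article); its matrix_rank routine counts the singular values that exceed a tolerance, the numerically robust approach recommended above:

import numpy as np

A = np.array([[1, 2, 1],
              [-2, -3, 1],
              [3, 5, 0]], dtype=float)

# Rank via the singular value decomposition: count the singular
# values that lie above a small tolerance instead of exactly zero.
s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()
print(int((s > tol).sum()))      # 2
print(np.linalg.matrix_rank(A))  # 2, using the same criterion internally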
Rank (linear algebra)
Mathematics
3,503
37,694
https://en.wikipedia.org/wiki/Seed
In botany, a seed is a plant embryo and nutrient reserve enclosed in a protective outer covering, the seed coat (testa). More generally, the term "seed" means anything that can be sown, which may include seed and husk or tuber. Seeds are the product of the ripened ovule, after the embryo sac is fertilized by sperm from pollen, forming a zygote. The embryo within a seed develops from the zygote and grows within the mother plant to a certain size before growth is halted. The formation of the seed is the defining part of the process of reproduction in seed plants (spermatophytes). Other plants, such as ferns, mosses and liverworts, do not have seeds and use water-dependent means to propagate themselves. Seed plants now dominate biological niches on land, from forests to grasslands, in both hot and cold climates. In the flowering plants, the ovary ripens into a fruit which contains the seed and serves to disseminate it. Many structures commonly referred to as "seeds" are actually dry fruits. Sunflower seeds are sometimes sold commercially while still enclosed within the hard wall of the fruit, which must be split open to reach the seed. Different groups of plants have other modifications; the so-called stone fruits (such as the peach) have a hardened fruit layer (the endocarp) fused to and surrounding the actual seed. Nuts are the one-seeded, hard-shelled fruit of some plants with an indehiscent seed, such as an acorn or hazelnut. History The first land plants evolved around 468 million years ago, and reproduced using spores. The earliest seed-bearing plants to appear were the gymnosperms, which have no ovaries to contain the seeds. They arose during the late Devonian period (416 million to 358 million years ago). From these early gymnosperms, seed ferns evolved during the Carboniferous period (359 to 299 million years ago); they had ovules that were borne in a cupule, which consisted of groups of enclosing branches likely used to protect the developing seed. Published literature about seed storage, viability and its hygrometric dependence began in the early 19th century, influential works being: the 1832 seed storage guide in Augustin Pyramus de Candolle's Conservation des Graines, part of his 3-volume Physiologie végétale, ou Exposition des forces et des fonctions vitales des végétaux (1832, v. 2, pp. 618–626, Paris) (translated title, "Plant physiology, or Exposition of the vital forces and functions of plants"); the 1846 viability studies by Augustin de Candolle, published in "Sur la durée relative de la faculté de germer des graines appartenant à diverses familles" (Annales des Sciences Naturelles; Botanique, 1846, III 6: 373–382) (translated title, "On the relative duration of the ability to germinate of seeds belonging to various families"); the 1897 seed hygrometric studies by Victor Jodin (Annales Agronomiques, October 1897); and Henry B. Guppy's 528-page "Studies in Seeds and Fruits - An Investigation with the Balance" (1912, London, England), subsequently reviewed in Science (June 1914, Washington, D.C.). Development Angiosperm seeds are "enclosed seeds", produced in a hard or fleshy structure called a fruit that encloses them for protection. Some fruits have layers of both hard and fleshy material. In gymnosperms, no special structure develops to enclose the seeds, which begin their development "naked" on the bracts of cones. However, the seeds do become covered by the cone scales as they develop in some species of conifer.
Angiosperm (flowering plant) seeds consist of three genetically distinct constituents: (1) the embryo formed from the zygote, (2) the endosperm, which is normally triploid, and (3) the seed coat, derived from the maternal tissue of the ovule. In angiosperms, the process of seed development begins with double fertilization, which involves the fusion of two male gametes with the egg cell and the central cell to form the primary endosperm and the zygote. Right after fertilization, the zygote is mostly inactive, but the primary endosperm divides rapidly to form the endosperm tissue. This tissue becomes the food the young plant will consume until the roots have developed after germination. Ovule After fertilization, the ovules develop into the seeds. The ovule consists of a number of components: The funicle (funiculus, funiculi) or seed stalk, which attaches the ovule to the placenta and hence to the ovary or fruit wall, at the pericarp. The nucellus, the remnant of the megasporangium and main region of the ovule where the megagametophyte develops. The micropyle, a small pore or opening in the apex of the integument of the ovule where the pollen tube usually enters during the process of fertilization. The chalaza, the base of the ovule opposite the micropyle, where integument and nucellus are joined. The shape of the ovules as they develop often affects the final shape of the seeds. Plants generally produce ovules of four shapes: the most common shape is called anatropous, with a curved form. Orthotropous ovules are straight, with all the parts of the ovule lined up in a long row, producing an uncurved seed. Campylotropous ovules have a curved megagametophyte, often giving the seed a tight "C" shape. The last ovule shape is called amphitropous, where the ovule is partly inverted and turned back 90 degrees on its stalk (the funicle or funiculus). In the majority of flowering plants, the zygote's first division is transversely oriented with regard to the long axis, and this establishes the polarity of the embryo. The upper or chalazal pole becomes the main area of growth of the embryo, while the lower or micropylar pole produces the stalk-like suspensor that attaches to the micropyle. The suspensor absorbs and manufactures nutrients from the endosperm that are used during the embryo's growth. Embryo The main components of the embryo are: The cotyledons, the seed leaves, attached to the embryonic axis. There may be one (monocotyledons) or two (dicotyledons). The cotyledons are also the source of nutrients in the non-endospermic dicotyledons, in which case they replace the endosperm, and are thick and leathery. In endospermic seeds, the cotyledons are thin and papery. Dicotyledons have the points of attachment opposite one another on the axis. The epicotyl, the embryonic axis above the point of attachment of the cotyledon(s). The plumule, the tip of the epicotyl, which has a feathery appearance due to the presence of young leaf primordia at the apex, and will become the shoot upon germination. The hypocotyl, the embryonic axis below the point of attachment of the cotyledon(s), connecting the epicotyl and the radicle; it is the stem-root transition zone. The radicle, the basal tip of the hypocotyl, which grows into the primary root. Monocotyledonous plants have two additional structures in the form of sheaths. The plumule is covered with a coleoptile that forms the first leaf, while the radicle is covered with a coleorhiza that connects to the primary root; adventitious roots form from the sides.
Here the hypocotyl is a rudimentary axis between radicle and plumule. The seeds of corn are constructed with these structures: pericarp, scutellum (the single large cotyledon) that absorbs nutrients from the endosperm, plumule, radicle, coleoptile, and coleorhiza – these last two structures are sheath-like and enclose the plumule and radicle, acting as a protective covering. Seed coat The maturing ovule undergoes marked changes in the integuments, generally a reduction and disorganization but occasionally a thickening. The seed coat forms from the two integuments or outer layers of cells of the ovule, which derive from tissue from the mother plant: the inner integument forms the tegmen and the outer forms the testa. (The seed coats of some monocotyledon plants, such as the grasses, are not distinct structures, but are fused with the fruit wall to form a pericarp.) The testae of both monocots and dicots are often marked with patterns and textured markings, or have wings or tufts of hair. When the seed coat forms from only one layer, it is also called the testa, though not all such testae are homologous from one species to the next. The funiculus abscisses (detaches at a fixed point, the abscission zone), the scar forming an oval depression, the hilum. Anatropous ovules have a portion of the funiculus that is adnate (fused to the seed coat), and which forms a longitudinal ridge, or raphe, just above the hilum. In bitegmic ovules (e.g. Gossypium) both inner and outer integuments contribute to the seed coat formation. With continuing maturation the cells enlarge in the outer integument. While the inner epidermis may remain a single layer, it may also divide to produce two to three layers; it accumulates starch and is referred to as the colourless layer. By contrast, the outer epidermis becomes tanniferous. The inner integument may consist of eight to fifteen layers. As the cells enlarge, and starch is deposited in the outer layers of the pigmented zone below the outer epidermis, this zone begins to lignify, while the cells of the outer epidermis enlarge radially and their walls thicken, with nucleus and cytoplasm compressed into the outer layer. These cells, which are broader on their inner surface, are called palisade cells. In the inner epidermis, the cells also enlarge radially, with plate-like thickening of the walls. The mature inner integument has a palisade layer, a pigmented zone with 15–20 layers, while the innermost layer is known as the fringe layer. Gymnosperms In gymnosperms, which do not form ovaries, the ovules and hence the seeds are exposed. This is the basis for their nomenclature – naked-seeded plants. Two sperm cells transferred from the pollen do not develop the seed by double fertilization; instead, one sperm nucleus unites with the egg nucleus and the other sperm is not used. Sometimes each sperm fertilizes an egg cell and one zygote is then aborted or absorbed during early development. The seed is composed of the embryo (the result of fertilization) and tissue from the mother plant, which also forms a cone around the seed in coniferous plants such as pine and spruce. Shape and appearance Seeds are very diverse, and as such many terms are used to describe them.
Terms to describe shape Bean-shaped (reniform) – resembling a kidney, with lobed ends on either side of the hilum Square or oblong – angular, with all sides either equal or longer than wide Triangular – three-sided, broadest below the middle Elliptic, ovate or obovate – rounded at both ends or egg-shaped (ovate or obovate, broader at one end), being rounded but either symmetrical about the middle, broader below the middle, or broader above the middle Discoid – resembling a disc or plate, having both thickness and parallel faces and with a rounded margin Ellipsoid Globose – spherical Subglobose – inflated, but less than spherical Lenticular Ovoid Sectoroid Other common descriptors for seeds focus on color, texture, and form. Striate seeds are striped with parallel, longitudinal lines or ridges. The most common colours are brown and black, with other colours appearing less frequently. The surface texture varies from highly polished to considerably roughened. The surface may also have a variety of appendages (see Seed coat), and be described by terms such as papillate or digitiform (finger-like). A seed coat with the consistency of cork is referred to as suberose. Other terms include crustaceous (hard, thin or brittle). Structure A typical seed includes two basic parts: an embryo and a seed coat. In addition, the endosperm forms a supply of nutrients for the embryo in most monocotyledons and the endospermic dicotyledons. Seed types Seeds have been considered to occur in many structurally different types (Martin 1946). These are based on a number of criteria, of which the dominant one is the embryo-to-seed size ratio. This reflects the degree to which the developing cotyledons absorb the nutrients of the endosperm, and thus obliterate it. Six types occur amongst the monocotyledons, ten in the dicotyledons, and two in the gymnosperms (linear and spatulate). This classification is based on three characteristics: embryo morphology, amount of endosperm and the position of the embryo relative to the endosperm. Embryo In endospermic seeds, there are two distinct regions inside the seed coat, an upper and larger endosperm and a lower, smaller embryo. The embryo is the fertilised ovule, an immature plant from which a new plant will grow under proper conditions. The embryo has one cotyledon or seed leaf in monocotyledons, two cotyledons in almost all dicotyledons and two or more in gymnosperms. In the fruit of grains (caryopses) the single monocotyledon is shield-shaped and hence called a scutellum. The scutellum is pressed closely against the endosperm, from which it absorbs food and passes it to the growing parts. Embryo descriptors include small, straight, bent, curved, and curled. Nutrient storage Within the seed, there usually is a store of nutrients for the seedling that will grow from the embryo. The form of the stored nutrition varies depending on the kind of plant. In angiosperms, the stored food begins as a tissue called the endosperm, which is derived from the mother plant and the pollen via double fertilization. It is usually triploid, and is rich in oil or starch, and protein. In gymnosperms, such as conifers, the food storage tissue (also called endosperm) is part of the female gametophyte, a haploid tissue. The endosperm is surrounded by the aleurone layer (peripheral endosperm), filled with proteinaceous aleurone grains. Originally, by analogy with the animal ovum, the outer nucellus layer (perisperm) was referred to as albumen, and the inner endosperm layer as vitellus.
Although misleading, the term began to be applied to all the nutrient matter. This terminology persists in referring to endospermic seeds as "albuminous". The nature of this material is used in both describing and classifying seeds, in addition to the embryo-to-endosperm size ratio. The endosperm may be considered to be farinaceous (or mealy), in which the cells are filled with starch, as for instance cereal grains, or not (non-farinaceous). The endosperm may also be referred to as "fleshy" or "cartilaginous", with thicker soft cells such as coconut, but may also be oily as in Ricinus (castor oil), Croton and poppy. The endosperm is called "horny" when the cell walls are thicker, such as date and coffee, or "ruminated" if mottled, as in nutmeg, palms and Annonaceae. In most monocotyledons (such as grasses and palms) and some (endospermic or albuminous) dicotyledons (such as castor beans) the embryo is embedded in the endosperm (and nucellus), which the seedling will use upon germination. In the non-endospermic dicotyledons the endosperm is absorbed by the embryo as the latter grows within the developing seed, and the cotyledons of the embryo become filled with stored food. At maturity, seeds of these species have no endosperm and are also referred to as exalbuminous seeds. The exalbuminous seeds include the legumes (such as beans and peas), trees such as the oak and walnut, vegetables such as squash and radish, and sunflowers. According to Bewley and Black (1978), the storage tissue of the Brazil nut is the hypocotyl, an uncommon storage site among seeds. All gymnosperm seeds are albuminous. Seed coat The seed coat develops from the maternal tissue, the integuments, originally surrounding the ovule. The seed coat in the mature seed can be a paper-thin layer (e.g. peanut) or something more substantial (e.g. thick and hard in honey locust and coconut), or fleshy, as in the sarcotesta of pomegranate. The seed coat helps protect the embryo from mechanical injury, predators, and drying out. Depending on its development, the seed coat is either bitegmic or unitegmic. Bitegmic seeds form a testa from the outer integument and a tegmen from the inner integument, while unitegmic seeds have only one integument. Usually, parts of the testa or tegmen form a hard protective mechanical layer. The mechanical layer may prevent water penetration and germination. Amongst the barriers may be the presence of lignified sclereids. The outer integument has a number of layers, generally between four and eight, organised into three layers: (a) outer epidermis, (b) outer pigmented zone of two to five layers containing tannin and starch, and (c) inner epidermis. The endotegmen is derived from the inner epidermis of the inner integument, the exotegmen from the outer surface of the inner integument. The endotesta is derived from the inner epidermis of the outer integument, and the outer layer of the testa from the outer surface of the outer integument is referred to as the exotesta. If the exotesta is also the mechanical layer, this is called an exotestal seed, but if the mechanical layer is the endotegmen, then the seed is endotegmic. The exotesta may consist of one or more rows of cells that are elongated and palisade-like (e.g. Fabaceae), hence 'palisade exotesta'. In addition to the three basic seed parts, some seeds have an appendage, an aril, a fleshy outgrowth of the funicle (funiculus), as in yew and nutmeg, or an oily appendage, an elaiosome (as in Corydalis), or hairs (trichomes).
In the latter example these hairs are the source of the textile crop cotton. Other seed appendages include the raphe (a ridge), wings, caruncles (a soft spongy outgrowth from the outer integument in the vicinity of the micropyle), spines, or tubercles. A scar also may remain on the seed coat, called the hilum, where the seed was attached to the ovary wall by the funicle. Just below it is a small pore, representing the micropyle of the ovule. Size and seed set Seeds are very diverse in size. The dust-like orchid seeds are the smallest, with about one million seeds per gram; they are often embryonic seeds with immature embryos and no significant energy reserves. Orchids and a few other groups of plants are mycoheterotrophs which depend on mycorrhizal fungi for nutrition during germination and the early growth of the seedling. Some terrestrial orchid seedlings, in fact, spend the first few years of their lives deriving energy from the fungi and do not produce green leaves. At up to 55 pounds (25 kilograms), the largest seed is the coco de mer (Lodoicea maldivica). This represents a roughly 25-billion-fold difference in seed weight. Plants that produce smaller seeds can generate many more seeds per flower, while plants with larger seeds invest more resources into those seeds and normally produce fewer seeds. Small seeds are quicker to ripen and can be dispersed sooner, so autumn-blooming plants often have small seeds. Many annual plants produce great quantities of smaller seeds; this helps to ensure at least a few will end up in a favorable place for growth. Herbaceous perennials and woody plants often have larger seeds; they can produce seeds over many years, and larger seeds have more energy reserves for germination and seedling growth and produce larger, more established seedlings after germination. Functions Seeds serve several functions for the plants that produce them. Key among these functions are nourishment of the embryo, dispersal to a new location, and dormancy during unfavorable conditions. Seeds fundamentally are means of reproduction, and most seeds are the product of sexual reproduction, which produces a remixing of genetic material and phenotype variability on which natural selection acts. Plant seeds hold endophytic microorganisms that can perform various functions, the most important of which is protection against disease. Embryo nourishment Seeds protect and nourish the embryo or young plant. They usually give a seedling a faster start than a sporeling from a spore, because of the larger food reserves in the seed and the multicellularity of the enclosed embryo. Dispersal Unlike animals, plants are limited in their ability to seek out favorable conditions for life and growth. As a result, plants have evolved many ways to disperse their offspring by dispersing their seeds (see also vegetative reproduction). A seed must somehow "arrive" at a location and be there at a time favorable for germination and growth. When the fruits open and release their seeds in a regular way, it is called dehiscent, which is often distinctive for related groups of plants; these fruits include capsules, follicles, legumes, silicles and siliques. When fruits do not open and release their seeds in a regular fashion, they are called indehiscent, which include the fruits achenes, caryopses, nuts, samaras, and utricles. By wind (anemochory) Some seeds (e.g., pine) have a wing that aids in wind dispersal. The dustlike seeds of orchids are carried efficiently by the wind. Some seeds (e.g.
milkweed, poplar) have hairs that aid in wind dispersal. Other seeds are enclosed in fruit structures that aid wind dispersal in similar ways: Dandelion achenes have hairs. Maple samaras have two wings. By water (hydrochory) Some plants, such as Mucuna and Dioclea, produce buoyant seeds termed sea-beans or drift seeds because they float in rivers to the oceans and wash up on beaches. By animals (zoochory) Seeds (burrs) with barbs or hooks (e.g. acaena, burdock, dock) which attach to animal fur or feathers, and then drop off later. Seeds with a fleshy covering (e.g. apple, cherry, juniper) are eaten by animals (birds, mammals, reptiles, fish) which then disperse these seeds in their droppings. Seeds (nuts) are attractive long-term storable food resources for animals (e.g. acorns, hazelnut, walnut); the seeds are stored some distance from the parent plant, and some escape being eaten if the animal forgets them. Myrmecochory is the dispersal of seeds by ants. Foraging ants disperse seeds which have appendages called elaiosomes (e.g. bloodroot, trilliums, acacias, and many species of Proteaceae). Elaiosomes are soft, fleshy structures that contain nutrients for animals that eat them. The ants carry such seeds back to their nest, where the elaiosomes are eaten. The remainder of the seed, which is hard and inedible to the ants, then germinates either within the nest or at a removal site where the seed has been discarded by the ants. This dispersal relationship is an example of mutualism, since the plants depend upon the ants to disperse seeds, while the ants depend upon the plants' seeds for food. As a result, a drop in numbers of one partner can reduce success of the other. In South Africa, the Argentine ant (Linepithema humile) has invaded and displaced native species of ants. Unlike the native ant species, Argentine ants do not collect the seeds of Mimetes cucullatus or eat the elaiosomes. In areas where these ants have invaded, the numbers of Mimetes seedlings have dropped. Dormancy Seed dormancy has two main functions: the first is synchronizing germination with the optimal conditions for survival of the resulting seedling; the second is spreading germination of a batch of seeds over time so a catastrophe (e.g. late frosts, drought, herbivory) does not result in the death of all offspring of a plant (bet-hedging). Seed dormancy is defined as a seed failing to germinate under environmental conditions optimal for germination, normally when the environment is at a suitable temperature with proper soil moisture. This true dormancy or innate dormancy is therefore caused by conditions within the seed that prevent germination. Thus dormancy is a state of the seed, not of the environment. Induced dormancy, enforced dormancy or seed quiescence occurs when a seed fails to germinate because the external environmental conditions are inappropriate for germination, mostly in response to conditions being too dark or light, too cold or hot, or too dry. Seed dormancy is not the same as seed persistence in the soil or on the plant, though even in scientific publications dormancy and persistence are often confused or used as synonyms. Often, seed dormancy is divided into four major categories: exogenous; endogenous; combinational; and secondary. A more recent system distinguishes five classes: morphological, physiological, morphophysiological, physical, and combinational dormancy.
Exogenous dormancy is caused by conditions outside the embryo, including: Physical dormancy, or hard seed coats, occurs when seeds are impermeable to water. At dormancy break, a specialized structure, the 'water gap', is disrupted in response to environmental cues, especially temperature, so water can enter the seed and germination can occur. Plant families where physical dormancy occurs include Anacardiaceae, Cannaceae, Convolvulaceae, Fabaceae and Malvaceae. Chemical dormancy occurs in species that lack physiological dormancy but in which a chemical prevents germination. This chemical can be leached out of the seed by rainwater or snow melt or be deactivated somehow. Leaching of chemical inhibitors from the seed by rain water is often cited as an important cause of dormancy release in seeds of desert plants, but little evidence exists to support this claim. Endogenous dormancy is caused by conditions within the embryo itself, including: In morphological dormancy, germination is prevented due to morphological characteristics of the embryo. In some species, the embryo is just a mass of cells when seeds are dispersed; it is not differentiated. Before germination can take place, both differentiation and growth of the embryo have to occur. In other species, the embryo is differentiated but not fully grown (underdeveloped) at dispersal, and embryo growth up to a species-specific length is required before germination can occur. Examples of plant families where morphological dormancy occurs are Apiaceae, Cycadaceae, Liliaceae, Magnoliaceae and Ranunculaceae. Morphophysiological dormancy includes seeds with underdeveloped embryos that also have physiological components to their dormancy. These seeds, therefore, require dormancy-breaking treatments as well as a period of time to develop fully grown embryos. Plant families where morphophysiological dormancy occurs include Apiaceae, Aquifoliaceae, Liliaceae, Magnoliaceae, Papaveraceae and Ranunculaceae. Some plants with morphophysiological dormancy, such as Asarum or Trillium species, have multiple types of dormancy: one affects radicle (root) growth, while the other affects plumule (shoot) growth. The terms "double dormancy" and "two-year seeds" are used for species whose seeds need two years to complete germination, or at least two winters and one summer. Dormancy of the radicle (seedling root) is broken during the first winter after dispersal, while dormancy of the shoot bud is broken during the second winter. Physiological dormancy means the embryo, due to physiological causes, cannot generate enough power to break through the seed coat, endosperm or other covering structures. Dormancy is typically broken under cool wet, warm wet, or warm dry conditions. Abscisic acid is usually the growth inhibitor in seeds, and its production can be affected by light. Drying, in some plants, including a number of grasses and those from seasonally arid regions, is needed before they will germinate. The seeds are released, but need to have a lower moisture content before germination can begin. If the seeds remain moist after dispersal, germination can be delayed for many months or even years. Many herbaceous plants from temperate climate zones have physiological dormancy that disappears with drying of the seeds. Other species will germinate after dispersal only under very narrow temperature ranges, but as the seeds dry, they are able to germinate over a wider temperature range.
In seeds with combinational dormancy, the seed or fruit coat is impermeable to water and the embryo has physiological dormancy. Depending on the species, physical dormancy can be broken before or after physiological dormancy is broken. Secondary dormancy is caused by conditions after the seed has been dispersed, and occurs in some seeds when nondormant seed is exposed to conditions that are not favorable to germination, very often high temperatures. The mechanisms of secondary dormancy are not yet fully understood, but might involve the loss of sensitivity in receptors in the plasma membrane. The following types of dormancy do not involve true seed dormancy, strictly speaking, as lack of germination is prevented by the environment, not by characteristics of the seed itself (see Germination): Photodormancy or light sensitivity affects germination of some seeds. These photoblastic seeds need a period of darkness or light to germinate. In species with thin seed coats, light may be able to penetrate into the dormant embryo. The presence of light or the absence of light may trigger the germination process, inhibiting germination in some seeds buried too deeply or in others not buried in the soil. Thermodormancy is seed sensitivity to heat or cold. Some seeds, including cocklebur and amaranth, germinate only at high temperatures (30 °C or 86 °F); many plants that have seeds that germinate in early to midsummer have thermodormancy, and so germinate only when the soil temperature is warm. Other seeds need cool soils to germinate, while others, such as celery, are inhibited when soil temperatures are too warm. Often, thermodormancy requirements disappear as the seed ages or dries. Not all seeds undergo a period of dormancy. Seeds of some mangroves are viviparous; they begin to germinate while still attached to the parent. The large, heavy root allows the seed to penetrate into the ground when it falls. Many garden plant seeds will germinate readily as soon as they have water and are warm enough; though their wild ancestors may have had dormancy, these cultivated plants lack it. After many generations of selective pressure by plant breeders and gardeners, dormancy has been selected out. For annuals, seeds are a way for the species to survive dry or cold seasons. Ephemeral plants are usually annuals that can go from seed to seed in as few as six weeks. Persistence and seed banks Germination Seed germination is a process by which a seed embryo develops into a seedling. It involves the reactivation of the metabolic pathways that lead to growth and the emergence of the radicle or seed root and plumule or shoot. The emergence of the seedling above the soil surface is the next phase of the plant's growth and is called seedling establishment. Three fundamental conditions must exist before germination can occur. (1) The embryo must be alive, called seed viability. (2) Any dormancy requirements that prevent germination must be overcome. (3) The proper environmental conditions must exist for germination. Far red light can prevent germination. Seed viability is the ability of the embryo to germinate and is affected by a number of different conditions. Some plants do not produce seeds that have functional complete embryos, or the seed may have no embryo at all, often called empty seeds. Predators and pathogens can damage or kill the seed while it is still in the fruit or after it is dispersed. Environmental conditions like flooding or heat can kill the seed before or during germination.
The age of the seed affects its health and germination ability: since the seed has a living embryo, over time cells die and cannot be replaced. Some seeds can live for a long time before germination, while others can only survive for a short period after dispersal before they die. Seed vigor is a measure of the quality of seed, and involves the viability of the seed, the germination percentage, germination rate, and the strength of the seedlings produced. The germination percentage is simply the proportion of seeds that germinate from all seeds subject to the right conditions for growth. The germination rate is the length of time it takes for the seeds to germinate. Germination percentages and rates are affected by seed viability, dormancy and environmental effects that impact the seed and seedling. In agriculture and horticulture, quality seeds have high viability, measured by germination percentage plus the rate of germination. This is given as a percentage of germination over a certain amount of time – for example, 90% germination in 20 days. 'Dormancy' is covered above; many plants produce seeds with varying degrees of dormancy, and different seeds from the same fruit can have different degrees of dormancy. It is possible to have seeds with no dormancy if they are dispersed right away and do not dry (if the seeds dry they go into physiological dormancy). There is great variation amongst plants, and a dormant seed is still a viable seed even though the germination rate might be very low. Environmental conditions affecting seed germination include water, oxygen, temperature and light. Three distinct phases of seed germination occur: water imbibition; lag phase; and radicle emergence. In order for the seed coat to split, the embryo must imbibe (soak up water), which causes it to swell, splitting the seed coat. However, the nature of the seed coat determines how rapidly water can penetrate and subsequently initiate germination. The rate of imbibition is dependent on the permeability of the seed coat, the amount of water in the environment and the area of contact the seed has with the source of water. For some seeds, imbibing too much water too quickly can kill the seed. For some seeds, once water is imbibed the germination process cannot be stopped, and drying then becomes fatal. Other seeds can imbibe and lose water a few times without ill effects, but drying can cause secondary dormancy. Repair of DNA damage During seed dormancy, often associated with unpredictable and stressful environments, DNA damage accumulates as the seeds age. In rye seeds, the reduction of DNA integrity due to damage is associated with loss of seed viability during storage. Upon germination, seeds of Vicia faba undergo DNA repair. A plant DNA ligase that is involved in repair of single- and double-strand breaks during seed germination is an important determinant of seed longevity. Also, in Arabidopsis seeds, the activities of the DNA repair enzymes poly ADP ribose polymerases (PARP) are likely needed for successful germination. Thus DNA damage that accumulates during dormancy appears to be a problem for seed survival, and the enzymatic repair of DNA damage during germination appears to be important for seed viability. Inducing germination A number of different strategies are used by gardeners and horticulturists to break seed dormancy.
Scarification allows water and gases to penetrate into the seed; it includes methods to physically break the hard seed coats or soften them with chemicals, such as soaking in hot water, poking holes in the seed with a pin, rubbing the seed coat with sandpaper, or cracking it with a press or hammer. Sometimes fruits are harvested while the seeds are still immature and the seed coat is not fully developed, and the seeds are sown right away before the seed coat becomes impermeable. Under natural conditions, seed coats are worn down by rodents chewing on the seed, by the seeds rubbing against rocks (seeds are moved by the wind or water currents), by undergoing freezing and thawing of surface water, or by passing through an animal's digestive tract. In the latter case, the seed coat protects the seed from digestion, while often weakening the seed coat such that the embryo is ready to sprout when it is deposited, along with a bit of fecal matter that acts as fertilizer, far from the parent plant. Microorganisms are often effective in breaking down hard seed coats and are sometimes used by people as a treatment; the seeds are stored in a moist warm sandy medium for several months under nonsterile conditions. Stratification, also called moist-chilling, breaks down physiological dormancy, and involves the addition of moisture to the seeds so they absorb water; they are then subjected to a period of moist chilling to after-ripen the embryo. Sowing in late summer and fall and allowing the seeds to overwinter under cool conditions is an effective way to stratify seeds; some seeds respond more favorably to periods of oscillating temperatures, which are a part of the natural environment. Leaching, or soaking in water, removes chemical inhibitors in some seeds that prevent germination. Rain and melting snow naturally accomplish this task. For seeds planted in gardens, running water is best – if soaked in a container, 12 to 24 hours of soaking is sufficient. Soaking longer, especially in stagnant water, can result in oxygen starvation and seed death. Seeds with hard seed coats can be soaked in hot water to break open the impermeable cell layers that prevent water intake. Other methods used to assist in the germination of seeds that have dormancy include prechilling, predrying, daily alternation of temperature, light exposure, potassium nitrate, and the use of plant growth regulators, such as gibberellins, cytokinins, ethylene, thiourea, sodium hypochlorite, and others. Some seeds germinate best after a fire. For some seeds, fire cracks hard seed coats, while in others, chemical dormancy is broken in reaction to the presence of smoke. Liquid smoke is often used by gardeners to assist in the germination of these species. Sterile seeds Seeds may be sterile for a few reasons: they may have been irradiated, they may be unpollinated, their cells may have lived past their expectancy, or they may have been bred for the purpose. Evolution and origin of seeds The issue of the origin of seed plants remains unsolved. However, more and more data tends to place this origin in the middle Devonian. The description in 2004 of the proto-seed Runcaria heinzelinii in the Givetian of Belgium is an indication of that ancient origin of seed plants. As with modern ferns, most land plants before this time reproduced by sending into the air spores that would land and become whole new plants. Taxonomists have described early "true" seeds from the upper Devonian, which probably became the theater of their first true evolutionary radiation.
With this radiation came an evolution of seed size, shape and dispersal, and eventually the radiation of gymnosperms and angiosperms, monocotyledons and dicotyledons. Seed plants progressively became one of the major elements of nearly all ecosystems. True to the seed Also called growing true, this refers to plants whose seed will yield the same type of plant as the original plant. Open pollinated plants, which include heirlooms, will almost always grow true to seed if another variety does not cross-pollinate them. Seed microbiome Seeds harbor a diverse microbial community. Most of these microorganisms are transmitted from the seed to the developing seedlings. Economic importance Seed market In the United States, farmers spent $22 billion on seeds in 2018, a 35 percent increase since 2010. DowDuPont and Monsanto account for 72 percent of corn and soybean seed sales in the U.S., with the average bag of GMO corn seed priced at $270. Seed production Seed production in natural plant populations varies widely from year to year in response to weather variables, insects and diseases, and internal cycles within the plants themselves. Over a 20-year period, for example, forests composed of loblolly pine and shortleaf pine produced from 0 to nearly 5.5 million sound pine seeds per hectare. Over this period, there were six bumper, five poor, and nine good seed crops, when evaluated for production of adequate seedlings for natural forest reproduction. Edible seeds Many seeds are edible and the majority of human calories comes from seeds, especially from cereals, legumes and nuts. Seeds also provide most cooking oils, many beverages and spices and some important food additives. In different seeds, the seed embryo or the endosperm dominates and provides most of the nutrients. The storage proteins of the embryo and endosperm differ in their amino acid content and physical properties. For example, the gluten of wheat, important in providing the elastic property to bread dough, is strictly an endosperm protein. Seeds are used to propagate many crops such as cereals, legumes, forest trees, turfgrasses, and pasture grasses. Particularly in developing countries, a major constraint faced is the inadequacy of the marketing channels to get the seed to poor farmers. Thus the use of farmer-retained seed remains quite common. Seeds are also eaten by animals (seed predation), and are fed to livestock or provided as birdseed. Poison and food safety While some seeds are edible, others are harmful, poisonous or deadly. Plants and seeds often contain chemical compounds to discourage herbivores and seed predators. In some cases, these compounds simply taste bad (such as in mustard), but other compounds are toxic or break down into toxic compounds within the digestive system. Children, being smaller than adults, are more susceptible to poisoning by plants and seeds. A deadly poison, ricin, comes from seeds of the castor bean. Reported lethal doses are anywhere from two to eight seeds, though only a few deaths have been reported when castor beans have been ingested by animals. In addition, seeds containing amygdalin – apple, apricot, bitter almond, peach, plum, cherry, quince, and others – when consumed in sufficient amounts, may cause cyanide poisoning. Other seeds that contain poisons include annona, cotton, custard apple, datura, uncooked durian, golden chain, horse-chestnut, larkspur, locoweed, lychee, nectarine, rambutan, rosary pea, sour sop, sugar apple, wisteria, and yew.
The seeds of the strychnine tree are also poisonous, containing the poison strychnine. The seeds of many legumes, including the common bean (Phaseolus vulgaris), contain proteins called lectins which can cause gastric distress if the beans are eaten without cooking. The common bean and many others, including the soybean, also contain trypsin inhibitors which interfere with the action of the digestive enzyme trypsin. Normal cooking processes degrade lectins and trypsin inhibitors to harmless forms. Other uses Cotton fiber grows attached to cotton plant seeds. Other seed fibers are from kapok and milkweed. Many important nonfood oils are extracted from seeds. Linseed oil is used in paints. Oil from jojoba and crambe are similar to whale oil. Seeds are the source of some medicines including castor oil, tea tree oil and the quack cancer drug Laetrile. Many seeds have been used as beads in necklaces and rosaries including Job's tears, Chinaberry, rosary pea, and castor bean. However, the latter three are also poisonous. Other seed uses include: Seeds once used as weights for balances. Seeds used as toys by children, such as for the game Conkers. Resin from Clusia rosea seeds used to caulk boats. Nematicide from milkweed seeds. Cottonseed meal used as animal feed and fertilizer. Seed records The oldest viable carbon-14-dated seed that has grown into a plant was a Judean date palm seed about 2,000 years old, recovered from excavations at Herod the Great's palace on Masada in Israel. It was germinated in 2005. (A reported regeneration of Silene stenophylla (narrow-leafed campion) from material preserved for 31,800 years in the Siberian permafrost was achieved using fruit tissue, not seed.) The largest seed is produced by the coco de mer, or "double coconut palm", Lodoicea maldivica. The entire fruit may weigh up to 23 kilograms (50 pounds) and usually contains a single seed. The smallest seeds are produced by epiphytic orchids. They are only 85 micrometers long, and weigh 0.81 micrograms. They have no endosperm and contain underdeveloped embryos. The earliest fossil seeds are around 365 million years old from the Late Devonian of West Virginia. The seeds are preserved immature ovules of the plant Elkinsia polymorpha. In religion The Book of Genesis in the Old Testament begins with an explanation of how all plant forms began: And God said, Let the earth bring forth grass, the herb yielding seed, and the fruit tree yielding fruit after his kind, whose seed is in itself, upon the earth: and it was so. And the earth brought forth grass, and herb yielding seed after its kind, and the tree yielding fruit, whose seed was in itself, after its kind: and God saw that it was good. And the evening and the morning were the third day. The Quran speaks of seed germination thus: It is Allah Who causeth the seed-grain and the date-stone to split and sprout. He causeth the living to issue from the dead, and He is the one to cause the dead to issue from the living. That is Allah: then how are ye deluded away from the truth? See also Biological dispersal Carpology Genetically modified crops List of world's largest seeds Recalcitrant seed Seed company Seed enhancement Seed library Seed orchard Seed paper Seed saving Seed testing Seed trap Seedbed Soil seed bank Selective embryo abortion References Bibliography A.C. Martin. The Comparative Internal Morphology of Seeds. American Midland Naturalist Vol. 36, No. 3 (Nov., 1946), pp. 513–660 M.B. McDonald, Francis Y. Kwong (eds.). Flower Seeds: Biology and Technology. 
CABI, 2005. Edred John Henry Corner. The Seeds of Dicotyledons. Cambridge University Press, 1976. United States Forest Service. Woody Plant Seed Manual. 1948. Stuppy, W. Glossary of Seed and Fruit Morphological Terms. Royal Botanic Gardens, Kew, 2004. External links Royal Holloway, University of London: The Seed Biology Place The Millennium Seed Bank Project Kew Garden's ambitious preservation project The Svalbard Global Seed Vault – a backup facility for the world's seed banks Plant Physiology online: Types of Seed Dormancy and the Roles of Environmental Factors Canadian Grain Commission: Seed characters used in the identification of small oilseeds and weed seeds The Seed Site: collecting, storing, sowing, germinating, and exchanging seeds, with pictures of seeds, seedpods and seedlings. Botany Plant reproduction Plant sexuality
Seed
Biology
10,530
515,770
https://en.wikipedia.org/wiki/Refresh%20rate
The refresh rate, also known as vertical refresh rate or vertical scan rate in reference to terminology originating with the cathode-ray tubes (CRTs), is the number of times per second that a raster-based display device displays a new image. This is independent of frame rate, which describes how many images are stored or generated every second by the device driving the display. On CRT displays, higher refresh rates produce less flickering, thereby reducing eye strain. In other technologies such as liquid-crystal displays, the refresh rate affects only how often the image can potentially be updated. Non-raster displays may not have a characteristic refresh rate. Vector displays, for instance, do not trace the entire screen, only the actual lines comprising the displayed image, so refresh speed may vary with the size and complexity of the image data. For computer programs or telemetry, the term is sometimes applied to how frequently a datum is updated with a new external value from another source (for example, a shared public spreadsheet or hardware feed). Physical factors While all raster display devices have a characteristic refresh rate, the physical implementation differs between technologies. Cathode-ray tubes Raster-scan CRTs by their nature must refresh the screen, since their phosphors will fade and the image will disappear quickly unless refreshed regularly. In a CRT, the vertical scan rate is the number of times per second that the electron beam returns to the upper left corner of the screen to begin drawing a new frame. It is controlled by the vertical blanking signal generated by the video controller, and is partially limited by the monitor's maximum horizontal scan rate. The refresh rate can be calculated from the horizontal scan rate by dividing the scanning frequency by the number of horizontal lines, plus some amount of time to allow for the beam to return to the top. By convention, this is a 1.05× multiplier. For instance, a monitor with a horizontal scanning frequency of 96 kHz at a resolution of 1280 × 1024 results in a refresh rate of 89 Hz (rounded down); a short calculation illustrating this appears below. CRT refresh rates have historically been an important factor in video game programming. In early video game systems, the only time available for computation was during the vertical blanking interval, during which the beam is returning to the upper left corner of the screen and no image is being drawn. Even in modern games, however, it is important to avoid altering the computer's video buffer except during the vertical retrace, to prevent flickering graphics or screen tearing. Liquid-crystal displays Unlike CRTs, where the image will fade unless refreshed, the pixels of liquid-crystal displays retain their state for as long as power is provided. Consequently, there is no intrinsic flicker regardless of refresh rate. However, the refresh rate still determines the highest frame rate that can be displayed, and despite there being no actual blanking of the screen, the vertical blanking interval is still a period in each refresh cycle when the screen is not being updated, during which the image data in the host system's frame buffer can be updated. Vsync options can eliminate screen tearing by rendering the whole image at the same time. Computer displays On smaller CRT monitors, few people notice any discomfort between 60 and 72 Hz. On larger CRT monitors, most people experience mild discomfort unless the refresh is set to 72 Hz or higher. A rate of 100 Hz is comfortable at almost any size.
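The scan-rate arithmetic described above is straightforward to verify. The following minimal Python sketch reproduces the worked example; the 1.05 blanking allowance is the convention quoted in the text, while the helper function and its name are illustrative only:

```python
# A minimal sketch of the CRT timing arithmetic described above.
# The 1.05 vertical-blanking allowance is the conventional multiplier
# quoted in the text; the helper name is illustrative.

def crt_refresh_rate(horizontal_scan_hz: float, vertical_lines: int,
                     blanking_factor: float = 1.05) -> float:
    """Approximate vertical refresh rate of a raster CRT.

    The beam draws vertical_lines lines per frame at horizontal_scan_hz
    lines per second; the blanking factor reserves time for the beam
    to return to the top of the screen between frames.
    """
    return horizontal_scan_hz / (vertical_lines * blanking_factor)

# The example from the text: 96 kHz horizontal scan, 1024 visible lines.
print(int(crt_refresh_rate(96_000, 1024)))  # prints 89 (rounded down)
```

On CRTs, rates computed this way that fall much below the comfort thresholds just described are where flicker becomes noticeable.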
However, this does not apply to LCD monitors. The closest equivalent to a refresh rate on an LCD monitor is its frame rate, which is often locked at 60 fps. But this is rarely a problem, because the only part of an LCD monitor that could produce CRT-like flicker—its backlight—typically operates at a minimum of around 200 Hz. Different operating systems set the default refresh rate differently. Microsoft Windows 95 and Windows 98 (First and Second Editions) set the refresh rate to the highest rate that they believe the display supports. Windows NT-based operating systems, such as Windows 2000 and its descendants Windows XP, Windows Vista and Windows 7, set the default refresh rate to a conservative rate, usually 60 Hz. Some fullscreen applications, including many games, now allow the user to reconfigure the refresh rate before entering fullscreen mode, but most default to a conservative resolution and refresh rate and let the user increase the settings in the options. Old monitors could be damaged if a user set the video card to a refresh rate higher than the highest rate supported by the monitor. Some models of monitors display a notice that the video signal uses an unsupported refresh rate. Dynamic refresh rate Some LCDs support adapting their refresh rate to the current frame rate delivered by the graphics card. Two technologies that allow this are FreeSync and G-Sync. Stereo displays When LCD shutter glasses are used for stereo 3D displays, the effective refresh rate is halved, because each eye needs a separate picture. For this reason, it is usually recommended to use a display capable of at least 120 Hz, because divided in half this rate is again 60 Hz. Higher refresh rates result in greater image stability; for example, 72 Hz non-stereo is 144 Hz stereo, and 90 Hz non-stereo is 180 Hz stereo. Most low-end computer graphics cards and monitors cannot handle these high refresh rates, especially at higher resolutions. For LCD monitors the pixel brightness changes are much slower than CRT or plasma phosphors. Typically LCD pixel brightness changes are faster when voltage is applied than when voltage is removed, resulting in an asymmetric pixel response time. With 3D shutter glasses this can result in a blurry smearing of the display and poor depth perception, due to the previous image frame not fading to black fast enough as the next frame is drawn.
However, the lower refresh rate of 50 Hz introduces more flicker, so sets that use digital technology to double the refresh rate to 100 Hz are now very popular (see Broadcast television systems). Another difference between 50 Hz and 60 Hz standards is the way motion pictures (film sources as opposed to video camera sources) are transferred or presented. 35 mm film is typically shot at 24 frames per second (fps). For 50 Hz PAL, this allows film sources to be easily transferred by accelerating the film by 4%. The resulting picture is therefore smooth; however, there is a small shift in the pitch of the audio. NTSC sets display both 24 fps and 25 fps material without any speed shifting by using a technique called 3:2 pulldown, but at the expense of introducing unsmooth playback in the form of telecine judder. Similar to some computer monitors and some DVDs, analog television systems use interlace, which decreases the apparent flicker by painting first the odd lines and then the even lines (these are known as fields). This doubles the refresh rate, compared to a progressive scan image at the same frame rate. This works perfectly for video cameras, where each field results from a separate exposure: the effective frame rate doubles, as there are now 50 rather than 25 exposures per second. The dynamics of a CRT are ideally suited to this approach: fast scenes will benefit from the 50 Hz refresh, the earlier field will have largely decayed away when the new field is written, and static images will benefit from improved resolution as both fields will be integrated by the eye. Modern CRT-based televisions may be made flicker-free in the form of 100 Hz technology. Many high-end LCD televisions now have a 120 or 240 Hz (current and former NTSC countries) or 100 or 200 Hz (PAL/SECAM countries) refresh rate. The rate of 120 Hz was chosen as the least common multiple of 24 fps (cinema) and 30 fps (NTSC TV), and allows for less distortion when movies are viewed, due to the elimination of telecine (3:2 pulldown). For PAL at 25 fps, 100 or 200 Hz is used as a fractional compromise, since the true least common multiple of 24 and 25 is 600 (24 × 25). These higher refresh rates are most effective from a 24p-source video output (e.g. Blu-ray Disc), and/or scenes of fast motion. Displaying movie content on a TV As movies are usually filmed at a rate of 24 frames per second, while television sets operate at different rates, some conversion is necessary. Different techniques exist to give the viewer an optimal experience. The combination of content production, playback device, and display device processing may also give artifacts that are unnecessary. A display device producing a fixed 60 fps rate cannot display a 24 fps movie at an even, judder-free rate. Usually a 3:2 pulldown is used, giving a slight uneven movement. While common multisync CRT computer monitors have been capable of running at even multiples of 24 Hz since the early 1990s, recent "120 Hz" LCDs have been produced for the purpose of having smoother, more fluid motion, depending upon the source material, and any subsequent processing done to the signal. In the case of material shot on video, improvements in smoothness just from having a higher refresh rate may be barely noticeable. In the case of filmed material, as 120 is an even multiple of 24, it is possible to present a 24 fps sequence without judder on a well-designed 120 Hz display (i.e., so-called 5-5 pulldown); the common cadences are sketched below.
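To make these cadences concrete, the sketch below expands repeat-count patterns into displayed-frame sequences; the helper and the cadence notation are illustrative, not part of any broadcast standard:

```python
# Expanding film-to-display pulldown cadences into frame sequences.
# Cadences are given as repeat counts per 24 fps film frame; the
# helper and notation are illustrative, not from any video standard.
from itertools import cycle

def pulldown(cadence, film_frames):
    """Expand a repeat-count cadence into displayed frame indices."""
    out = []
    for frame, repeats in zip(range(film_frames), cycle(cadence)):
        out.extend([frame] * repeats)
    return out

# 3:2 pulldown: 4 film frames become 10 displayed fields (24 -> 60 Hz),
# held unevenly (3, 2, 3, 2, ...), which is the source of judder.
print(pulldown([3, 2], 4))   # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
# 5-5 pulldown on a 120 Hz display holds every film frame equally.
print(pulldown([5], 4))      # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, ...]
# 6-4 pulldown (frame-doubled 3:2) keeps the unevenness at 120 Hz.
print(pulldown([6, 4], 4))
```

The equal hold times of the 5-5 cadence are what eliminate judder; the alternating 6-4 cadence discussed next preserves it.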
If the 120 Hz rate is produced by frame-doubling a 60 fps 3:2 pulldown signal, the uneven motion could still be visible (i.e., so-called 6-4 pulldown). Additionally, material may be displayed with synthetically created smoothness through the addition of motion interpolation abilities to the display, which has an even larger effect on filmed material. "50 Hz" TV sets (when fed with "50 Hz" content) usually get a movie that is slightly faster than normal, avoiding any problems with uneven pulldown. See also Plasma display Comparison of display technology List of smartphones with a high refresh rate display High frame rate Flicker-free References Television technology Graphics hardware Rates
Refresh rate
Technology
2,339
47,887,698
https://en.wikipedia.org/wiki/HD%20175640
HD 175640 is a star in the equatorial constellation of Aquila. It has an apparent visual magnitude of 6.20, which is bright enough to be visible to the naked eye under suitable seeing conditions. The star is located at a distance of approximately 516 light years as determined through parallax measurements. At that distance, the star's color is modified by an extinction of 0.36 magnitude due to interstellar dust. It is drifting closer with a heliocentric radial velocity of roughly −26 km/s. This is classified as a mercury-manganese star, which is a late B-type chemically peculiar star of type CP3. A distinctive feature of this class of stars is an apparent extreme overabundance of the elements mercury and manganese. It has a weak longitudinal magnetic field. This is a particularly stable star, showing no signs of pulsation. As with other HgMn stars, it is spinning slowly, showing a projected rotational velocity of 1.6 km/s. In 2007, some evidence was found that this may be a single-lined spectroscopic binary star system. In particular, shifts in its radial velocity were observed. References Further reading B-type main-sequence stars Mercury-manganese stars Spectroscopic binaries Aquila (constellation) 7143 Durchmusterung objects 175640 092963
HD 175640
Astronomy
287
52,592,501
https://en.wikipedia.org/wiki/Pyrithione
Pyrithione is the common name of an organosulfur compound with molecular formula C5H5NOS, chosen as an abbreviation of pyridinethione, and found in the Persian shallot. It exists as a pair of tautomers, the major form being the thione 1-hydroxy-2(1H)-pyridinethione and the minor form being the thiol 2-mercaptopyridine N-oxide; it crystallises in the thione form. It is usually prepared from either 2-bromopyridine, 2-chloropyridine, or 2-chloropyridine N-oxide, and is commercially available as both the neutral compound and its sodium salt. It is used to prepare zinc pyrithione, which is used primarily to treat dandruff and seborrhoeic dermatitis in medicated shampoos, though it is also an anti-fouling agent in paints. Preparation The preparation of pyrithione was first reported in 1950 by Shaw; it was prepared by reaction of 2-chloropyridine N-oxide with sodium hydrosulfide followed by acidification, or more recently with sodium sulfide. 2-Chloropyridine N-oxide itself can be prepared from 2-chloropyridine using peracetic acid. Another approach involves treating the same starting N-oxide with thiourea to afford pyridyl-2-isothiouronium chloride N-oxide, which undergoes base hydrolysis to pyrithione. 2-Bromopyridine can be oxidised to its N-oxide using a suitable peracid (as per 2-chloropyridine), both approaches being analogous to that reported in Organic Syntheses for the oxidation of pyridine to its N-oxide. A substitution reaction using either sodium dithionite (Na2S2O4) or sodium sulfide with sodium hydroxide will allow the replacement of the bromo substituent with a thiol functional group. The alternative strategy is to form the mercaptan before introducing the N-oxide moiety. 2-Mercaptopyridine was originally synthesized in 1931 by heating 2-chloropyridine with calcium hydrosulfide, an approach similar to that first used to prepare pyrithione. The analogous thiourea approach via a uronium salt was reported in 1958 and provides a more convenient route to 2-mercaptopyridine. Oxidation to the N-oxide can then be undertaken. Pyrithione is found as a natural product in the Allium stipitatum plant, an Asian species of onion, also known as the Persian shallot. Its presence was detected by positive ion mass spectrometry using a DART ion source, and the disulfide (2,2'-disulfanediylbis(pyridine)-1,1'-dioxide) has been reported from the same species. Dipyrithione can be prepared in a laboratory by oxidation of pyrithione with chlorine in the presence of sodium hydroxide: 2 C5H5NOS + Cl2 + 2 NaOH → (C5H4NOS)2 + 2 NaCl + 2 H2O Dipyrithione is used as a fungicide and bactericide, and has been reported to possess novel cytotoxic activity by inducing apoptosis. However, as apoptosis only occurs in higher organisms, this mechanism is not relevant to the antifungal and bactericidal properties of pyrithione. Properties Pyrithione exists as a pair of prototropes, a form of tautomerism whereby the rapid interconversion of constitutional isomers involves the shift of a single proton, in this case between the sulfur and oxygen atoms. Salts of the conjugate base of pyrithione can also be considered to exhibit tautomerism by notionally associating the sodium ion with whichever heteroatom bears the negative charge of the anion (as opposed to the formal charges associated with the N-oxide); however, considering the anion alone, this could also be described as an example of resonance.
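As a small illustration that the two prototropes share a single molecular formula, the sketch below uses the open-source RDKit toolkit (assumed to be installed); the SMILES strings are our own encodings of the two forms, not taken from the text:

```python
# Checking that both pyrithione tautomers share the formula C5H5NOS.
# Assumes RDKit is installed; the SMILES strings are our own encodings.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

forms = {
    "thione (1-hydroxy-2(1H)-pyridinethione)": "ON1C=CC=CC1=S",
    "thiol (2-mercaptopyridine N-oxide)": "SC1=CC=CC=[N+]1[O-]",
}
for name, smiles in forms.items():
    mol = Chem.MolFromSmiles(smiles)
    print(name, "->", rdMolDescriptors.CalcMolFormula(mol))
# Both lines print C5H5NOS: the tautomers differ only in where
# the mobile proton sits (sulfur versus oxygen).
```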
Pyrithione is a weak acid with pKa values of −1.95 and +4.6 (thiol proton), but is a markedly stronger acid than either of its parent compounds (pyridine-N-oxide and pyridine-2-thiol), both of which have pKa > 8. It is only slightly soluble in water (2.5 g L−1), but is soluble in many organic solvents (including benzene, chloroform, dichloromethane, dimethylformamide, dimethylsulfoxide, and ethyl acetate) and slightly soluble in others (diethyl ether, ethanol, methyl tert-butyl ether, and tetrahydrofuran). Pyrithione can be used as a source of hydroxyl radical in organic synthesis, as it photochemically decomposes to HO• and the (pyridin-2-yl)sulfanyl radical. Applications The conjugate base of pyrithione (the pyrithionate ion) is an anion containing two donor atoms, a sulfur atom and an oxygen atom, each bearing a negative formal charge; the nitrogen atom remains formally positively charged. The thiolate anion can be formed by reaction with sodium carbonate, and zinc pyrithione is formed when zinc chloride is added. The anion can act as either a monodentate or bidentate ligand and forms a 1:2 complex with a zinc(II) metal centre. Zinc pyrithione has been used since the 1930s, though its preparation was not disclosed until a 1955 British patent in which pyrithione was reacted directly with hydrated zinc sulfate in ethanol. In its monomeric form, zinc pyrithione has two of the anions chelated to a zinc centre with a tetrahedral geometry. In the solid state, it forms a dimer in which each zinc centre adopts a trigonal bipyramidal geometry with two of the anions acting as bridging ligands coordinated through the oxygen atoms in the axial positions. In solution, the dimers dissociate via scission of zinc-oxygen bonds to each bridging ligand. Further dissociation of the monomer into its constituents can occur and is undesirable, as the complex is more potent in medical applications; for this reason, zinc carbonate can be added to formulations, as it inhibits the monomer dissociation. Zinc pyrithione has a long history of use in medicated shampoos to treat dandruff and seborrhoeic dermatitis (dandruff can be considered a mild form of seborrheic dermatitis). It exhibits both antifungal and antimicrobial properties, inhibiting the Malassezia yeasts which promote these scalp conditions. The mechanisms by which it works are the subject of ongoing study. It can be used as an antibacterial agent against Staphylococcus and Streptococcus infections for conditions such as athlete's foot, eczema, psoriasis, and ringworm. It is known to be cytotoxic against Pityrosporum ovale, especially in combination with ketoconazole, which is the preferred formulation for seborrheic dermatitis. Pyrithione itself inhibits membrane transport processes in fungi. Paints used in external environments sometimes include zinc pyrithione as a preventive against algae and mildew. References Amine oxides Hydroxypyridines Thiols
Pyrithione
Chemistry
1,640
826,478
https://en.wikipedia.org/wiki/Population%20viability%20analysis
Population viability analysis (PVA) is a species-specific method of risk assessment frequently used in conservation biology. It is traditionally defined as the process that determines the probability that a population will go extinct within a given number of years. More recently, PVA has been described as a marriage of ecology and statistics that brings together species characteristics and environmental variability to forecast population health and extinction risk. Each PVA is individually developed for a target population or species, and consequently, each PVA is unique. The larger goal when conducting a PVA is to ensure that the population of a species is self-sustaining over the long term. Uses Population viability analysis (PVA) is used to estimate the likelihood of a population's extinction, indicate the urgency of recovery efforts, and identify key life stages or processes that should be the focus of recovery efforts. PVA is also used to identify factors that drive population dynamics, compare proposed management options and assess existing recovery efforts. PVA is frequently used in endangered species management to develop a plan of action, rank the pros and cons of different management scenarios, and assess the potential impacts of habitat loss. History In the 1970s, Yellowstone National Park was the centre of a heated debate over different proposals to manage the park's problem grizzly bears (Ursus arctos). In 1978, Mark Shaffer proposed a model for the grizzlies that incorporated random variability, and calculated extinction probabilities and minimum viable population size. The first PVA is credited to Shaffer. PVA gained popularity in the United States as federal agencies and ecologists required methods to evaluate the risk of extinction and possible outcomes of management decisions, particularly in accordance with the Endangered Species Act of 1973 and the National Forest Management Act of 1976. In 1986, Gilpin and Soulé broadened the PVA definition to include the interactive forces that affect the viability of a population, including genetics. The use of PVA increased dramatically in the late 1980s and early 1990s following advances in personal computers and software packages. Examples The endangered Fender's blue butterfly (Icaricia icarioides) was recently assessed with a goal of providing additional information to the United States Fish and Wildlife Service, which was developing a recovery plan for the species. The PVA concluded that the species was more at risk of extinction than previously thought and identified key sites where recovery efforts should be focused. The PVA also indicated that because the butterfly populations fluctuate widely from year to year, to prevent the populations from going extinct the minimum annual population growth rate must be kept much higher than at levels typically considered acceptable for other species. Following a recent outbreak of canine distemper virus, a PVA was performed for the critically endangered island fox (Urocyon littoralis) of Santa Catalina Island, California. The Santa Catalina island fox population is uniquely composed of two subpopulations that are separated by an isthmus, with the eastern subpopulation at greater risk of extinction than the western subpopulation.
PVA was conducted with the goals of 1) evaluating the island fox's extinction risk, 2) estimating the island fox's sensitivity to catastrophic events, and 3) evaluating recent recovery efforts, which include release of captive-bred foxes and transport of wild juvenile foxes from the west to the east side. The PVA concluded that the island fox is still at significant risk of extinction, and is highly susceptible to catastrophes that occur more than once every 20 years. Furthermore, extinction risks and future population sizes on both sides of the island were significantly dependent on the number of foxes released and transported each year. PVAs in combination with sensitivity analysis can also be used to identify which vital rates have the greatest relative effect on population growth and other measures of population viability. For example, a study by Manlik et al. (2016) forecast the viability of two bottlenose dolphin populations in Western Australia and identified reproduction as having the greatest influence on the forecast for these populations. One of the two populations was forecast to be stable, whereas the other population was forecast to decline if it is isolated from other populations and low reproductive rates persist. The difference in viability between the two populations was primarily due to differences in reproduction and not survival. The study also showed that temporal variation in reproduction had a greater effect on population growth than temporal variation in survival. Controversy Debates exist and remain unresolved over the appropriate uses of PVA in conservation biology and PVA's ability to accurately assess extinction risks. A large quantity of field data is desirable for PVA; some conservatively estimate that for a precise extinction probability assessment extending T years into the future, five-to-ten times T years of data are needed. Datasets of such magnitude are typically unavailable for rare species; it has been estimated that suitable data for PVA are available for only 2% of threatened bird species. PVA for threatened and endangered species is a particular problem, as the predictive power of PVA plummets dramatically with minimal datasets. Ellner et al. (2002) argued that PVA has little value in such circumstances and is best replaced by other methods. Others argue that PVA remains the best tool available for estimations of extinction risk, especially with the use of sensitivity model runs. Even with an adequate dataset, it is possible that a PVA can still have large errors in extinction rate predictions. It is impossible to incorporate all future possibilities into a PVA: habitats may change, catastrophes may occur, new diseases may be introduced. PVA utility can be enhanced by multiple model runs with varying sets of assumptions, including the forecast future date. Some prefer always to use PVA in a relative analysis of the benefits of alternative management schemes, such as comparing proposed resource management plans. Accuracy of PVAs has been tested in a few retrospective studies. For example, a study comparing PVA model forecasts with the actual fate of 21 well-studied taxa showed that growth rate projections are accurate if input variables are based on sound data, but highlighted the importance of understanding density-dependence (Brook et al. 2000). Also, McCarthy et al. (2003) showed that PVA predictions are relatively accurate when they are based on long-term data. 
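The count-based projection at the heart of many simple PVAs can be made concrete with a short simulation. The following sketch is a minimal, hypothetical example (the initial population size, the mean and variability of the log growth rate, and the quasi-extinction threshold are illustrative assumptions, not values from any published PVA): it draws random yearly growth rates and records how often the simulated population falls below the threshold.

```python
import math
import random

def extinction_probability(n0, mean_log_r, sd_log_r, threshold,
                           years=100, trials=10_000, seed=42):
    """Estimate quasi-extinction risk with a count-based stochastic projection.

    Each trial multiplies the population by a lognormally distributed yearly
    growth rate; a trial counts as 'extinct' if the population ever falls
    below the quasi-extinction threshold within the forecast horizon.
    """
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        n = n0
        for _ in range(years):
            n *= math.exp(rng.gauss(mean_log_r, sd_log_r))
            if n < threshold:
                extinct += 1
                break
    return extinct / trials

# Illustrative parameters only: a small population with slightly negative
# mean growth and high environmental variability.
print(extinction_probability(n0=50, mean_log_r=-0.01, sd_log_r=0.25,
                             threshold=10))
```

Published PVAs layer demographic structure, density dependence, catastrophes, and parameter uncertainty on top of this skeleton, which is one reason their data requirements grow so quickly.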
Still, the usefulness of PVA lies more in its capacity to identify and assess potential threats than in making long-term, categorical predictions (Akçakaya & Sjögren-Gulve 2000). Future directions Improvements to PVA likely to occur in the near future include: 1) creating a fixed definition of PVA and scientific standards of quality by which all PVAs are judged and 2) incorporating recent genetic advances into PVA. See also Biodiversity Endangered Species Act IUCN Red List Minimum viable population Population dynamics Population genetics References Further reading Beissinger, Steven R. and McCullough, Dale R. (2002). "Population Viability Analysis", Chicago: University of Chicago Press. Gilpin, M.E. and Soulé, M.E. (1986). "Conservation Biology: The Science of Scarcity and Diversity", Sunderland, Massachusetts: Sinauer Associates Perrins, C.M., Lebreton, J.D., and Hirons, G.J.M. (eds.) (1991). "Bird Population Studies: Relevance to Conservation and Management", New York: Oxford University Press External links GreenBoxes code sharing network. Greenboxes (Beta) is a repository for open-source population modeling and PVA code. Greenboxes allows users an easy way to share their code and to search for others' shared code. VORTEX. VORTEX is an individual-based simulation software that incorporates deterministic forces as well as demographic, environmental and genetic stochastic events on wildlife populations. RAMAS. A widely accepted software package for PVA with options for age/stage structure, spatial processes, and landscape change. Models can be built and run using a graphic user interface or users can incorporate the program's batch mode into automated workflows. Ecology Mathematical and theoretical biology Biostatistics
Population viability analysis
Mathematics,Biology
1,652
72,504,438
https://en.wikipedia.org/wiki/R%20Trianguli
R Trianguli (abbreviated as R Tri) is a short-period oxygen-rich Mira variable in Triangulum with a period of 266.9 days, discovered by T. H. E. C. Espin in 1890. It is losing about , close to average for a short-period Mira variable. While most short-period Mira variables reside in the Galactic halo, R Trianguli is a member of the thick disk, and its proper motion is fairly high for its distance. Its angular diameter in the K band was measured in 2002 to be, on average, , with a shape suggesting that there is an optically thin disk structure surrounding the star. References Mira variables M-type giants Trianguli, R Triangulum Astronomical objects discovered in 1890 0758 016210 J02370234+3415513 Emission-line stars 012193
R Trianguli
Astronomy
181
21,652
https://en.wikipedia.org/wiki/Natural%20language%20processing
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Typically data is collected in text corpora, using either rule-based, statistical or neural-based approaches in machine learning and deep learning. Major tasks in natural language processing are speech recognition, text classification, natural-language understanding, and natural-language generation. History Natural language processing has its roots in the 1950s. Already in 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language. Symbolic NLP (1950s – early 1990s) The premise of symbolic NLP is well-summarized by John Searle's Chinese room experiment: Given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts. 1950s: The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill the expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted in America (though some research continued elsewhere, such as Japan and Europe) until the late 1980s when the first statistical machine translation systems were developed. 1960s: Some notably successful natural language processing systems developed in the 1960s were SHRDLU, a natural language system working in restricted "blocks worlds" with restricted vocabularies, and ELIZA, a simulation of a Rogerian psychotherapist, written by Joseph Weizenbaum between 1964 and 1966. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. When the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in a computer memory at the time. 1970s: During the 1970s, many programmers began to write "conceptual ontologies", which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert 1981). During this time, the first chatterbots were written (e.g., PARRY). 1980s: The 1980s and early 1990s mark the heyday of symbolic methods in NLP. 
Focus areas of the time included research on rule-based parsing (e.g., the development of HPSG as a computational operationalization of generative grammar), morphology (e.g., two-level morphology), semantics (e.g., Lesk algorithm), reference (e.g., within Centering Theory) and other areas of natural language understanding (e.g., in the Rhetorical Structure Theory). Other lines of research were continued, e.g., the development of chatterbots with Racter and Jabberwacky. An important development (that eventually led to the statistical turn in the 1990s) was the rising importance of quantitative evaluation in this period. Statistical NLP (1990s–2010s) Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. This was due to both the steady increase in computational power (see Moore's law) and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing. 1990s: Many of the notable early successes in statistical methods in NLP occurred in the field of machine translation, due especially to work at IBM Research, such as IBM alignment models. These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems, which was (and often continues to be) a major limitation in the success of these systems. As a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. 2000s: With the growth of the web, increasing amounts of raw (unannotated) language data have become available since the mid-1990s. Research has thus increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms can learn from data that has not been hand-annotated with the desired answers or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results if the algorithm used has a low enough time complexity to be practical. Neural NLP (present) In 2003, the word n-gram model, at the time the best statistical algorithm, was outperformed by a multi-layer perceptron (with a single hidden layer and a context length of several words, trained on up to 14 million words with a CPU cluster for language modelling) by Yoshua Bengio and co-authors. In 2010, Tomáš Mikolov (then a PhD student at Brno University of Technology) with co-authors applied a simple recurrent neural network with a single hidden layer to language modelling, and in the following years he went on to develop Word2vec. 
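The word n-gram models referred to above assign probabilities to words from simple co-occurrence counts. The sketch below is a deliberately minimal illustration (the toy corpus and the add-one smoothing choice are assumptions made here for brevity, not details of the systems discussed above):

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()  # toy corpus

# Count single words and adjacent word pairs (bigrams).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigrams)

def bigram_prob(prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

print(bigram_prob("the", "cat"))  # frequent pair: relatively high probability
print(bigram_prob("the", "ate"))  # unseen pair: small but nonzero probability
```

Counting models of this kind grow unwieldy as the context lengthens, since the number of possible n-grams explodes combinatorially; the neural models described next sidestep this by learning dense representations instead of counting.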
In the 2010s, representation learning and deep neural network-style (featuring many hidden layers) machine learning methods became widespread in natural language processing. That popularity was due partly to a flurry of results showing that such techniques can achieve state-of-the-art results in many natural language tasks, e.g., in language modeling and parsing. This is increasingly important in medicine and healthcare, where NLP helps analyze notes and text in electronic health records that would otherwise be inaccessible for study when seeking to improve care or protect patient privacy. Approaches: Symbolic, statistical, neural networks The symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular, such as by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach: both statistical and neural network methods can focus more on the most common cases extracted from a corpus of texts, whereas the rule-based approach needs to provide rules for both rare cases and common ones equally. language models, produced by either statistical or neural network methods, are more robust to both unfamiliar (e.g. containing words or structures that have not been seen before) and erroneous input (e.g. with misspelled words or words accidentally omitted) in comparison to the rule-based systems, which are also more costly to produce. the larger such a (probabilistic) language model is, the more accurate it becomes, in contrast to rule-based systems that can gain accuracy only by increasing the amount and complexity of the rules, leading to intractability problems. Although rule-based systems for manipulating symbols were still in use in 2020, they have become mostly obsolete with the advance of LLMs in 2023. Before that they were commonly used: when the amount of training data is insufficient to successfully apply machine learning methods, e.g., for the machine translation of low-resource languages such as provided by the Apertium system, for preprocessing in NLP pipelines, e.g., tokenization, or for postprocessing and transforming the output of NLP pipelines, e.g., for knowledge extraction from syntactic parses. Statistical approach In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches. The earliest decision trees, producing systems of hard if–then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach. Neural networks A major drawback of statistical methods is that they require elaborate feature engineering. Since 2015, the statistical approach has been replaced by the neural network approach, using semantic networks and word embeddings to capture semantic properties of words. Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are not needed anymore. Neural machine translation, based on then-newly invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation. 
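Word embeddings of the kind mentioned above represent each word as a dense numeric vector so that geometric closeness can stand in for semantic similarity. A minimal sketch follows (the three-dimensional vectors are invented for illustration; real embeddings, such as those produced by Word2vec, have hundreds of dimensions learned from large corpora):

```python
import math

# Toy "embeddings" with invented values; real vectors are learned from text.
embedding = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.75, 0.20],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embedding["king"], embedding["queen"]))  # high: related words
print(cosine(embedding["king"], embedding["apple"]))  # lower: unrelated words
```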
Common NLP tasks The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below. Text and speech processing Optical character recognition (OCR) Given an image representing printed text, determine the corresponding text. Speech recognition Given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of text to speech and is one of the extremely difficult problems colloquially termed "AI-complete" (see above). In natural speech there are hardly any pauses between successive words, and thus speech segmentation is a necessary subtask of speech recognition (see below). In most spoken languages, the sounds representing successive letters blend into each other in a process termed coarticulation, so the conversion of the analog signal to discrete characters can be a very difficult process. Also, given that words in the same language are spoken by people with different accents, the speech recognition software must be able to recognize the wide variety of input as being identical to each other in terms of its textual equivalent. Speech segmentation Given a sound clip of a person or people speaking, separate it into words. A subtask of speech recognition and typically grouped with it. Text-to-speech Given a text, transform those units and produce a spoken representation. Text-to-speech can be used to aid the visually impaired. Word segmentation (Tokenization) Tokenization is a process used in text analysis that divides text into individual words or word fragments. This technique results in two key components: a word index and tokenized text. The word index is a list that maps unique words to specific numerical identifiers, and the tokenized text replaces each word with its corresponding numerical token. These numerical tokens are then used in various deep learning methods. For a language like English, this is fairly trivial, since words are usually separated by spaces. However, some written languages like Chinese, Japanese and Thai do not mark word boundaries in such a fashion, and in those languages text segmentation is a significant task requiring knowledge of the vocabulary and morphology of words in the language. Sometimes this process is also used in cases like bag of words (BOW) creation in data mining. Morphological analysis Lemmatization The task of removing inflectional endings only and to return the base dictionary form of a word which is also known as a lemma. Lemmatization is another technique for reducing words to their normalized form. But in this case, the transformation actually uses a dictionary to map words to their actual form. Morphological segmentation Separate words into individual morphemes and identify the class of the morphemes. The difficulty of this task depends greatly on the complexity of the morphology (i.e., the structure of words) of the language being considered. English has fairly simple morphology, especially inflectional morphology, and thus it is often possible to ignore this task entirely and simply model all possible forms of a word (e.g., "open, opens, opened, opening") as separate words. 
In languages such as Turkish or Meitei, a highly agglutinated Indian language, however, such an approach is not possible, as each dictionary entry has thousands of possible word forms. Part-of-speech tagging Given a sentence, determine the part of speech (POS) for each word. Many words, especially common ones, can serve as multiple parts of speech. For example, "book" can be a noun ("the book on the table") or verb ("to book a flight"); "set" can be a noun, verb or adjective; and "out" can be any of at least five different parts of speech. Stemming The process of reducing inflected (or sometimes derived) words to a base form (e.g., "close" will be the root for "closed", "closing", "close", "closer" etc.). Stemming yields similar results as lemmatization, but does so on grounds of rules, not a dictionary. Syntactic analysis Grammar induction Generate a formal grammar that describes a language's syntax. Sentence breaking (also known as "sentence boundary disambiguation") Given a chunk of text, find the sentence boundaries. Sentence boundaries are often marked by periods or other punctuation marks, but these same characters can serve other purposes (e.g., marking abbreviations). Parsing Determine the parse tree (grammatical analysis) of a given sentence. The grammar for natural languages is ambiguous and typical sentences have multiple possible analyses: perhaps surprisingly, for a typical sentence there may be thousands of potential parses (most of which will seem completely nonsensical to a human). There are two primary types of parsing: dependency parsing and constituency parsing. Dependency parsing focuses on the relationships between words in a sentence (marking things like primary objects and predicates), whereas constituency parsing focuses on building out the parse tree using a probabilistic context-free grammar (PCFG) (see also stochastic grammar). Lexical semantics (of individual words in context) Lexical semantics What is the computational meaning of individual words in context? Distributional semantics How can we learn semantic representations from data? Named entity recognition (NER) Given a stream of text, determine which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. person, location, organization). Although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case, is often inaccurate or insufficient. For example, the first letter of a sentence is also capitalized, and named entities often span several words, only some of which are capitalized. Furthermore, many other languages in non-Western scripts (e.g. Chinese or Arabic) do not have any capitalization at all, and even languages with capitalization may not consistently use it to distinguish names. For example, German capitalizes all nouns, regardless of whether they are names, and French and Spanish do not capitalize names that serve as adjectives. Another name for this task is token classification. Sentiment analysis (see also Multimodal sentiment analysis) Sentiment analysis is a computational method used to identify and classify the emotional intent behind text. This technique involves analyzing text to determine whether the expressed sentiment is positive, negative, or neutral. 
Models for sentiment classification typically utilize inputs such as word n-grams, Term Frequency-Inverse Document Frequency (TF-IDF) features, hand-generated features, or employ deep learning models designed to recognize both long-term and short-term dependencies in text sequences. The applications of sentiment analysis are diverse, extending to tasks such as categorizing customer reviews on various online platforms. Terminology extraction The goal of terminology extraction is to automatically extract relevant terms from a given corpus. Word-sense disambiguation (WSD) Many words have more than one meaning; we have to select the meaning which makes the most sense in context. For this problem, we are typically given a list of words and associated word senses, e.g. from a dictionary or an online resource such as WordNet. Entity linking Many words—typically proper names—refer to named entities; here we have to select the entity (a famous individual, a location, a company, etc.) which is referred to in context. Relational semantics (semantics of individual sentences) Relationship extraction Given a chunk of text, identify the relationships among named entities (e.g. who is married to whom). Semantic parsing Given a piece of text (typically a sentence), produce a formal representation of its semantics, either as a graph (e.g., in AMR parsing) or in accordance with a logical formalism (e.g., in DRT parsing). This challenge typically includes aspects of several more elementary NLP tasks from semantics (e.g., semantic role labelling, word-sense disambiguation) and can be extended to include full-fledged discourse analysis (e.g., discourse analysis, coreference; see Natural language understanding below). Semantic role labelling (see also implicit semantic role labelling below) Given a single sentence, identify and disambiguate semantic predicates (e.g., verbal frames), then identify and classify the frame elements (semantic roles). Discourse (semantics beyond individual sentences) Coreference resolution Given a sentence or larger chunk of text, determine which words ("mentions") refer to the same objects ("entities"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called "bridging relationships" involving referring expressions. For example, in a sentence such as "He entered John's house through the front door", "the front door" is a referring expression and the bridging relationship to be identified is the fact that the door being referred to is the front door of John's house (rather than of some other structure that might also be referred to). Discourse analysis This rubric includes several related tasks. One task is discourse parsing, i.e., identifying the discourse structure of a connected text, i.e. the nature of the discourse relationships between sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and classifying the speech acts in a chunk of text (e.g. yes–no question, content question, statement, assertion, etc.). Given a single sentence, identify and disambiguate semantic predicates (e.g., verbal frames) and their explicit semantic roles in the current sentence (see Semantic role labelling above). 
Then, identify semantic roles that are not explicitly realized in the current sentence, classify them into arguments that are explicitly realized elsewhere in the text and those that are not specified, and resolve the former against the local text. A closely related task is zero anaphora resolution, i.e., the extension of coreference resolution to pro-drop languages. Recognizing textual entailment Given two text fragments, determine if one being true entails the other, entails the other's negation, or allows the other to be either true or false. Topic segmentation and recognition Given a chunk of text, separate it into segments each of which is devoted to a topic, and identify the topic of the segment. Argument mining The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs. Such argumentative structures include the premise, conclusions, the argument scheme and the relationship between the main and subsidiary argument, or the main and counter-argument within discourse. Higher-level NLP applications Automatic summarization (text summarization) Produce a readable summary of a chunk of text. Often used to provide summaries of the text of a known type, such as research papers or articles in the financial section of a newspaper. Grammatical error detection and correction involves a broad range of problems on all levels of linguistic analysis (phonology/orthography, morphology, syntax, semantics, pragmatics). Grammatical error correction is impactful since it affects hundreds of millions of people who use or acquire English as a second language. It has thus been subject to a number of shared tasks since 2011. As far as orthography, morphology, syntax and certain aspects of semantics are concerned, and due to the development of powerful neural language models such as GPT-2, this can now (2019) be considered a largely solved problem and is being marketed in various commercial applications. Logic translation Translate a text from a natural language into formal logic. Machine translation (MT) Automatically translate text from one human language to another. This is one of the most difficult problems, and is a member of a class of problems colloquially termed "AI-complete", i.e. requiring all of the different types of knowledge that humans possess (grammar, semantics, facts about the real world, etc.) to solve properly. Natural-language understanding (NLU) Convert chunks of text into more formal representations such as first-order logic structures that are easier for computer programs to manipulate. Natural language understanding involves the identification of the intended semantics from the multiple possible semantics which can be derived from a natural language expression, which usually takes the form of organized notations of natural language concepts. The introduction and creation of a language metamodel and an ontology are efficient, albeit empirical, solutions. An explicit formalization of natural language semantics without confusion with implicit assumptions such as the closed-world assumption (CWA) vs. the open-world assumption, or subjective Yes/No vs. objective True/False, is expected for the construction of a basis of semantics formalization. Natural-language generation (NLG): Convert information from computer databases or semantic intents into readable human language. 
Book generation Not an NLP task proper but an extension of natural language generation and other NLP tasks is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The policeman's beard is half-constructed). The first published work by a neural network, 1 the Road, appeared in 2018; marketed as a novel, it contains sixty million words. Both these systems are basically elaborate but non-sensical (semantics-free) language models. The first machine-generated science book was published in 2019 (Beta Writer, Lithium-Ion Batteries, Springer, Cham). Unlike Racter and 1 the Road, this is grounded on factual knowledge and based on text summarization. Document AI A Document AI platform sits on top of the NLP technology, enabling users with no prior experience of artificial intelligence, machine learning or NLP to quickly train a computer to extract the specific data they need from different document types. NLP-powered Document AI enables non-technical teams, such as lawyers, business analysts and accountants, to quickly access information hidden in documents. Dialogue management Computer systems intended to converse with a human. Question answering Given a human-language question, determine its answer. Typical questions have a specific right answer (such as "What is the capital of Canada?"), but sometimes open-ended questions are also considered (such as "What is the meaning of life?"). Text-to-image generation Given a description of an image, generate an image that matches the description. Text-to-scene generation Given a description of a scene, generate a 3D model of the scene. Text-to-video Given a description of a video, generate a video that matches the description. General tendencies and (possible) future directions Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed: Interest in increasingly abstract, "cognitive" aspects of natural language (1999–2001: shallow parsing, 2002–03: named entity recognition, 2006–09/2017–18: dependency syntax, 2004–05/2008–09 semantic role labelling, 2011–12 coreference, 2015–16: discourse parsing, 2019: semantic parsing). Increasing interest in multilinguality, and, potentially, multimodality (English since 1999; Spanish, Dutch since 2002; German since 2003; Bulgarian, Danish, Japanese, Portuguese, Slovenian, Swedish, Turkish since 2006; Basque, Catalan, Chinese, Greek, Hungarian, Italian, Turkish since 2007; Czech since 2009; Arabic since 2012; 2017: 40+ languages; 2018: 60+/100+ languages) Elimination of symbolic representations (rule-based over supervised towards weakly supervised methods, representation learning and end-to-end systems) Cognition Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above). Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses." Cognitive science is the interdisciplinary, scientific study of the mind and its processes. 
Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics. Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies. As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics, with two defining aspects: Apply the theory of conceptual metaphor, explained by Lakoff as "the understanding of one idea, in terms of another", which provides an idea of the intent of the author. For example, consider the English word big. When used in a comparison ("That is a big tree"), the author's intent is to imply that the tree is physically large relative to other trees or the author's experience. When used metaphorically ("Tomorrow is a big day"), the author's intent is to imply importance. The intent behind other usages, like in "She is a big person", will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information. Assign relative measures of meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed, e.g., by means of a probabilistic context-free grammar (PCFG). The mathematical equation for such algorithms is presented in US Patent 9269353, in which: RMM is the relative measure of meaning; token is any block of text, sentence, phrase or word; N is the number of tokens being analyzed; PMM is the probable measure of meaning based on a corpus; d is the non-zero location of the token along the sequence of N tokens; and PF is the probability function specific to a language. Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s. Nevertheless, approaches to develop cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., of cognitive grammar, functional grammar, construction grammar, computational psycholinguistics and cognitive neuroscience (e.g., ACT-R), however, with limited uptake in mainstream NLP (as measured by presence on major conferences of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e.g., under the notion of "cognitive AI". Likewise, ideas of cognitive NLP are inherent to neural models of multimodal NLP (although rarely made explicit) and to developments in artificial intelligence, specifically tools and technologies using large language model approaches and new directions in artificial general intelligence based on the free energy principle by British neuroscientist and theoretician at University College London Karl J. Friston. 
See also 1 the Road Artificial intelligence detection software Automated essay scoring Biomedical text mining Compound term processing Computational linguistics Computer-assisted reviewing Controlled natural language Deep learning Deep linguistic processing Distributional semantics Foreign language reading aid Foreign language writing aid Information extraction Information retrieval Language and Communication Technologies Language model Language technology Latent semantic indexing Multi-agent system Native-language identification Natural-language programming Natural-language understanding Natural-language search Outline of natural language processing Query expansion Query understanding Reification (linguistics) Speech processing Spoken dialogue systems Text-proofing Text simplification Transformer (machine learning model) Truecasing Question answering Word2vec References Further reading Steven Bird, Ewan Klein, and Edward Loper (2009). Natural Language Processing with Python. O'Reilly Media. . Kenna Hughes-Castleberry, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.) Daniel Jurafsky and James H. Martin (2008). Speech and Language Processing, 2nd edition. Pearson Prentice Hall. . Mohamed Zakaria Kurdi (2016). Natural Language Processing and Computational Linguistics: speech, morphology, and syntax, Volume 1. ISTE-Wiley. . Mohamed Zakaria Kurdi (2017). Natural Language Processing and Computational Linguistics: semantics, discourse, and applications, Volume 2. ISTE-Wiley. . Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze (2008). Introduction to Information Retrieval. Cambridge University Press. . Official html and pdf versions available without charge. Christopher D. Manning and Hinrich Schütze (1999). Foundations of Statistical Natural Language Processing. The MIT Press. . David M. W. Powers and Christopher C. R. Turk (1989). Machine Learning of Natural Language. Springer-Verlag. . External links Computational fields of study Computational linguistics Speech recognition
Natural language processing
Technology
6,609
56,469,937
https://en.wikipedia.org/wiki/NGC%20522
NGC 522, also occasionally referred to as PGC 5218 or UGC 970, is a spiral galaxy located approximately 122 million light-years from the Solar System in the constellation Pisces. It was discovered on 25 September 1862 by astronomer Heinrich Louis d'Arrest. Observation history D'Arrest discovered NGC 522 using his 11-inch refractor telescope at Copenhagen. He located the galaxy's position with a total of two observations. As the position matches both UGC 970 and PGC 5218, the objects are generally referred to as synonymous. NGC 522 was later catalogued by John Louis Emil Dreyer in the New General Catalogue, where the galaxy was described as "extremely faint, pretty large, irregular figure, perhaps cluster plus nebula". Description The galaxy can be observed edge-on from Earth, thus appearing very elongated. It can be classified as a spiral galaxy of type Sbc using the Hubble sequence. The object's distance of roughly 120 million light-years from the Solar System can be estimated using its redshift and Hubble's law. See also Spiral galaxy List of NGC objects (1–1000) Pisces (constellation) References External links SEDS Spiral galaxies Pisces (constellation) 0522 5218 00970 Astronomical objects discovered in 1862 Discoveries by Heinrich Louis d'Arrest
NGC 522
Astronomy
274
18,494,818
https://en.wikipedia.org/wiki/John%20Christopher%20Willis
John Christopher Willis FRS (20 February 1868 – 21 March 1958) was an English botanist known for his Age and Area hypothesis and criticism of natural selection. Education Born in Liverpool, he was educated at University College, Liverpool (biology) and Gonville and Caius College, Cambridge (botany). Career In 1896 Willis was appointed director of the Royal Botanical Gardens, Peradeniya, Ceylon (now Sri Lanka), serving until 1912, when he was appointed director of the botanic gardens at Rio de Janeiro. He was elected a Fellow of the Linnean Society in 1897, and a Fellow of the Royal Society in 1919. His notable publications include "A Manual and Dictionary of the Flowering Plants and Ferns" in two volumes and "Age and Area: A Study of Geographical Distribution and Origin of Species", published in 1922. He returned to Cambridge in 1915, and later went to live in Montreux, Switzerland. He died in 1958 at the age of 90 and was posthumously awarded the Darwin–Wallace Medal by the Linnean Society. In 1901, the botanist Johannes Eugenius Bülow Warming published Willisia, a genus of flowering plants from India and Bangladesh belonging to the family Podostemaceae. It was named in John Christopher Willis's honour. Age and Area Willis formed the Age and Area hypothesis during botanical field work in Ceylon, where he studied the distributional patterns of the Ceylonese vascular plants in great detail. According to his hypothesis, the extent of the range of a species may be used as an indication of the age of that species. He also maintained that the "dying out" of species occurs rarely, and that new forms arise by mutation rather than by local adaptation through natural selection. Willis defined his hypothesis as: The Dutch botanist and geneticist Hugo de Vries supported the hypothesis; however, it was criticised by the American palaeontologist Edward W. Berry, who wrote that it was contradicted by palaeontological evidence. Edmund W. Sinnott rejected the hypothesis and wrote "other factors than age share in the area occupied by a species". According to Sinnott, factors inherent in the plant such as hardiness, adaptability and growth play an important part in determining distribution. Willis published the book Age and Area. A Study in Geographical Distribution and Origin of Species in 1922. The American entomologist Philip P. Calvert documented examples of the geographical distribution of insects that contradicted the hypothesis in a paper in 1923. On the subject in 1924, Berry wrote: In 1924, the American botanist Merritt Lyndon Fernald wrote that studies on floras of the northern hemisphere do not support the Age and Area hypothesis. Willis responded to the early criticisms and stated that his critics such as Berry and Sinnott had misrepresented his hypothesis. Willis claimed that his hypothesis should not be applied to single species but to groups of allied species. He wrote there was no rival hypothesis to his own to explain the botanical data and that his hypothesis had made successful predictions about flora distribution in New Zealand. The American ecologist H. A. Gleason praised the hypothesis for being testable in the field of phytogeography but came to the conclusion that it could not account for migration data. In 1926 Willis wrote a paper defending his hypothesis and responded to the criticism. Most scientists, however, had rejected the hypothesis for various reasons and, according to the historian of science Charles H. 
Smith "The "age and area" theory attracted some interest for about twenty years, but support for it was clearly on the wane by the time of Willis's late books The Course of Evolution and The Birth and Spread of Plants (though each of these works contained some interesting ideas)." The Course of Evolution Willis published a controversial book on evolution The Course of Evolution by Differentiation Or Divergent Mutation Rather Than by Selection (1940) which was a sequel to his Age and Area. Willis questioned the adequacy of natural selection of chance variations as a major factor in evolution. He supported mutations as the main mechanism of evolution, and chromosome alterations to be largely responsible for mutations. He opposed Darwinian gradualism and favoured saltational evolution. The American ichthyologist Carl Leavitt Hubbs reviewed the book claiming Willis was advocating a form of orthogenesis: The American geneticist Sewall Wright similarly noted that Willis believed evolution was not the result of chance but an orthogenetic drive, and that he was a proponent of saltationism. Publications Studies in the Morphology and Ecology of the Podostemaceæ of Ceylon and India (1902) A Manual and Dictionary of the Flowering Plants and Ferns (1908) The Distribution of Species in New Zealand (1916) The Relative Age of Endemic Species and Other Controversial Points (1917) Age and Area. A Study in Geographical Distribution and Origin of Species (1922) The Course of Evolution by Differentiation Or Divergent Mutation Rather Than by Selection (1940) The Birth and Spread of Plants (1949) References Further reading J. M. Greenman. (1925). The Age-And-Area Hypothesis with Special Reference to the Flora of Tropical America]. American Journal of Botany, Vol. 12, No. 3, pp. 189–193. Arnold Miller. (1997). A New Look at Age and Area: The Geographic and Environmental Expansion of Genera During the Ordovician Radiation. Paleobiology, Vol. 23, No. 4, pp. 410–419. 1868 births 1958 deaths Fellows of the Royal Society English botanists Fellows of the Linnean Society of London Mutationism Scientists from Liverpool Alumni of Gonville and Caius College, Cambridge Alumni of the University of Liverpool
John Christopher Willis
Biology
1,138
239,121
https://en.wikipedia.org/wiki/Calculation
A calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs or results. The term is used in a variety of senses, from the very definite arithmetical calculation using an algorithm, to the vague heuristics of calculating a strategy in a competition, or calculating the chance of a successful relationship between two people. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation. Statistical estimations of the likely election results from opinion polls also involve algorithmic calculations, but produce ranges of possibilities rather than exact answers. To calculate means to determine mathematically in the case of a number or amount, or in the case of an abstract problem to deduce the answer using logic, reason or common sense. The English word derives from the Latin calculus, which originally meant a pebble (from Latin calx), for instance the small stones used as counters on an abacus. The abacus was an instrument used by Greeks and Romans for arithmetic calculations, preceding the slide-rule and the electronic calculator, and consisted of perforated pebbles sliding on iron bars. See also Calculus (disambiguation) — list of general methods of calculation by application area Complexity class — theoretical notion to categorize calculability Cost accounting — business application of calculation List of algorithms — fully formalized, computer-executable methods of calculation Mental calculation — performing arithmetic using one's brain only References External links "The Lifting of the Veil in the Operations of Calculation" is an 18th-century Arabic manuscript by Ibn al-Banna' al-Marrakushi about calculation processes Elementary arithmetic Information
Calculation
Mathematics
366
233,500
https://en.wikipedia.org/wiki/Scramjet
A scramjet (supersonic combustion ramjet) is a variant of a ramjet airbreathing jet engine in which combustion takes place in supersonic airflow. As in ramjets, a scramjet relies on high vehicle speed to compress the incoming air forcefully before combustion (hence ramjet), but whereas a ramjet decelerates the air to subsonic velocities before combustion using shock cones, a scramjet has no shock cone and instead slows the airflow using shockwaves produced by its ignition source. This allows the scramjet to operate efficiently at extremely high speeds. Although scramjet engines have been used in a handful of operational military vehicles, scramjets have so far mostly been demonstrated in research test articles and experimental vehicles. History Before 2000 The Bell X-1 attained supersonic flight in 1947 and, by the early 1960s, rapid progress toward faster aircraft suggested that operational aircraft would be flying at "hypersonic" speeds within a few years. Except for specialized rocket research vehicles like the North American X-15 and other rocket-powered spacecraft, aircraft top speeds have remained level, generally in the range of Mach 1 to Mach 3. During the US aerospaceplane program, between the 1950s and the mid 1960s, Alexander Kartveli and Antonio Ferri were proponents of the scramjet approach. In the 1950s and 1960s, a variety of experimental scramjet engines were built and ground tested in the US and the UK. Antonio Ferri successfully demonstrated a scramjet producing net thrust in November 1964, eventually producing 517 pounds-force (2.30 kN), about 80% of his goal. In 1958, an analytical paper discussed the merits and disadvantages of supersonic combustion ramjets. In 1964, Frederick S. Billig and Gordon L. Dugger submitted a patent application for a supersonic combustion ramjet based on Billig's PhD thesis. This patent was issued in 1981 following the removal of an order of secrecy. In 1981, tests were made in Australia under the guidance of Professor Ray Stalker in the T3 ground test facility at ANU. The first successful flight test of a scramjet was performed as a joint effort with NASA, over the Soviet Union in 1991. It was an axisymmetric hydrogen-fueled dual-mode scramjet developed by the Central Institute of Aviation Motors (CIAM), Moscow, in the late 1970s, but modernized with a FeCrAl alloy on a converted SM-6 missile to achieve initial flight parameters of Mach 6.8, before the scramjet flew at Mach 5.5. The scramjet flight was flown captive-carry atop the SA-5 surface-to-air missile that included an experimental flight support unit known as the "Hypersonic Flying Laboratory" (HFL), "Kholod". Then, from 1992 to 1998, an additional six flight tests of the axisymmetric high-speed scramjet-demonstrator were conducted by CIAM together with France and then with NASA. A maximum flight speed greater than Mach 6.4 was achieved, and scramjet operation for 77 seconds was demonstrated. These flight test series also provided insight into autonomous hypersonic flight controls. 2000s In the 2000s, significant progress was made in the development of hypersonic technology, particularly in the field of scramjet engines. The HyShot project demonstrated scramjet combustion on 30 July 2002. The scramjet engine worked effectively and demonstrated supersonic combustion in action. However, the engine was not designed to provide thrust to propel a craft. It was designed more or less as a technology demonstrator. 
A joint British and Australian team from UK defense company QinetiQ and the University of Queensland were the first group to demonstrate a scramjet working in an atmospheric test. Hyper-X claimed the first flight of a thrust-producing scramjet-powered vehicle with full aerodynamic maneuvering surfaces in 2004 with the X-43A. The last of the three X-43A scramjet tests achieved Mach 9.6 for a brief time. On 15 June 2007, the US Defense Advanced Research Projects Agency (DARPA), in cooperation with the Australian Defence Science and Technology Organisation (DSTO), announced a successful scramjet flight at Mach 10 using rocket engines to boost the test vehicle to hypersonic speeds. A series of scramjet ground tests was completed at the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF) at simulated Mach 8 flight conditions. These experiments were used to support HIFiRE flight 2. On 22 May 2009, Woomera hosted the first successful test flight of a hypersonic aircraft in HIFiRE (Hypersonic International Flight Research Experimentation). The launch was one of ten planned test flights. The series of flights is part of a joint research program between the Defence Science and Technology Organisation and the US Air Force, designated as HIFiRE. HIFiRE is investigating hypersonics technology and its application to advanced scramjet-powered space launch vehicles; the objective is to support the new Boeing X-51 scramjet demonstrator while also building a strong base of flight test data for quick-reaction space launch development and hypersonic "quick-strike" weapons. 2010s On 22 and 23 March 2010, Australian and American defense scientists successfully tested a (HIFiRE) hypersonic rocket. It reached an atmospheric speed of "more than 5,000 kilometres per hour" (Mach 4) after taking off from the Woomera Test Range in outback South Australia. On 27 May 2010, NASA and the United States Air Force successfully flew the X-51A Waverider for approximately 200 seconds at Mach 5, setting a new world record for flight duration at hypersonic airspeed. The Waverider flew autonomously before losing acceleration for an unknown reason and destroying itself as planned. The test was declared a success. The X-51A was carried aboard a B-52, accelerated to Mach 4.5 via a solid rocket booster, and then ignited the Pratt & Whitney Rocketdyne scramjet engine to reach Mach 5. However, a second flight on 13 June 2011 was ended prematurely when the engine lit briefly on ethylene but failed to transition to its primary JP-7 fuel, failing to reach full power. On 16 November 2010, Australian scientists from the University of New South Wales at the Australian Defence Force Academy successfully demonstrated that the high-speed flow in a naturally non-burning scramjet engine can be ignited using a pulsed laser source. A further X-51A Waverider test failed on 15 August 2012. The attempt to fly the scramjet for a prolonged period at Mach 6 was cut short when, only 15 seconds into the flight, the X-51A craft lost control and broke apart, falling into the Pacific Ocean north-west of Los Angeles. The cause of the failure was blamed on a faulty control fin. In May 2013, an X-51A Waverider reached 4828 km/h (Mach 3.9) during a three-minute flight under scramjet power. The WaveRider was dropped from a B-52 bomber, and then accelerated to Mach 4.8 by a solid rocket booster, which then separated before the WaveRider's scramjet engine came into effect. 
On 28 August 2016, the Indian space agency ISRO conducted a successful test of a scramjet engine on a two-stage, solid-fueled rocket. Twin scramjet engines were mounted on the back of the second stage of a two-stage, solid-fueled sounding rocket called the Advanced Technology Vehicle (ATV), which is ISRO's advanced sounding rocket. The twin scramjet engines were ignited during the second stage of the rocket when the ATV achieved a speed of 7350 km/h (Mach 6) at an altitude of 20 km. The scramjet engines were fired for a duration of about 5 seconds. On 12 June 2019, India successfully conducted the maiden flight test of its indigenously developed uncrewed scramjet demonstration aircraft for hypersonic speed flight from a base on Abdul Kalam Island in the Bay of Bengal at about 11:25 am. The aircraft is called the Hypersonic Technology Demonstrator Vehicle. The trial was carried out by the Defence Research and Development Organisation. The aircraft forms an important component of the country's programme for development of a hypersonic cruise missile system. 2020s On 27 September 2021, DARPA announced the successful flight of its Hypersonic Air-breathing Weapon Concept scramjet cruise missile. Another successful test was carried out in mid-March 2022 amid the Russian invasion of Ukraine. Details were kept secret to avoid escalating tension with Russia, only to be revealed by an unnamed Pentagon official in early April. Design principles Scramjet engines are a type of jet engine, and rely on the combustion of fuel and an oxidizer to produce thrust. Similar to conventional jet engines, scramjet-powered aircraft carry the fuel on board, and obtain the oxidizer by the ingestion of atmospheric oxygen (as compared to rockets, which carry both fuel and an oxidizing agent). This requirement limits scramjets to suborbital atmospheric propulsion, where the oxygen content of the air is sufficient to maintain combustion. The scramjet is composed of three basic components: a converging inlet, where incoming air is compressed; a combustor, where gaseous fuel is burned with atmospheric oxygen to produce heat; and a diverging nozzle, where the heated air is accelerated to produce thrust. Unlike a typical jet engine, such as a turbojet or turbofan engine, a scramjet does not use rotating, fan-like components to compress the air; rather, the achievable speed of the aircraft moving through the atmosphere causes the air to compress within the inlet. As such, no moving parts are needed in a scramjet. In comparison, typical turbojet engines require multiple stages of rotating compressor rotors, and multiple rotating turbine stages, all of which add weight, complexity, and a greater number of failure points to the engine. Due to the nature of their design, scramjet operation is limited to near-hypersonic velocities. As they lack mechanical compressors, scramjets require the high kinetic energy of a hypersonic flow to compress the incoming air to operational conditions. Thus, a scramjet-powered vehicle must be accelerated to the required velocity (usually about Mach 4) by some other means of propulsion, such as turbojet, or rocket engines. In the flight of the experimental scramjet-powered Boeing X-51A, the test craft was lifted to flight altitude by a Boeing B-52 Stratofortress before being released and accelerated by a detachable rocket to near Mach 4.5. In May 2013, another flight achieved an increased speed of Mach 5.1. 
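The ram compression described above converts the flow's kinetic energy into pressure and heat, and the standard isentropic stagnation-temperature relation, T0 = T × (1 + ((γ − 1)/2) × M²), gives a rough sense of the numbers. The sketch below applies it (the 220 K ambient temperature is an assumed high-altitude value, and the constant-γ ideal-gas model is only an approximation at these temperatures):

```python
GAMMA = 1.4        # ratio of specific heats for air (ideal-gas assumption)
T_AMBIENT = 220.0  # assumed static air temperature at altitude, kelvin

def stagnation_temperature(mach, t_static=T_AMBIENT, gamma=GAMMA):
    """Temperature reached if the flow were brought isentropically to rest."""
    return t_static * (1 + 0.5 * (gamma - 1) * mach ** 2)

for mach in (2, 4, 6, 8):
    print(f"Mach {mach}: ~{stagnation_temperature(mach):5.0f} K")
# At Mach 8 the stagnation temperature approaches 3000 K, comparable to or
# above typical flame temperatures, so fully decelerating the flow before
# combustion, as a ramjet does, leaves little room for useful heat addition.
```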
While scramjets are conceptually simple, actual implementation is limited by extreme technical challenges. Hypersonic flight within the atmosphere generates immense drag, and temperatures found on the aircraft and within the engine can be much greater than that of the surrounding air. Maintaining combustion in the supersonic flow presents additional challenges, as the fuel must be injected, mixed, ignited, and burned within milliseconds. While scramjet technology has been under development since the 1950s, only very recently have scramjets successfully achieved powered flight. Scramjets are designed to operate in the hypersonic flight regime, beyond the reach of turbojet engines, and, along with ramjets, fill the gap between the high efficiency of turbojets and the high speed of rocket engines. Turbomachinery-based engines, while highly efficient at subsonic speeds, become increasingly inefficient at transonic speeds, as the compressor rotors found in turbojet engines require subsonic speeds to operate. While the flow from transonic to low supersonic speeds can be decelerated to these conditions, doing so at supersonic speeds results in a tremendous increase in temperature and a loss in the total pressure of the flow. Around Mach 3–4, turbomachinery is no longer useful, and ram-style compression becomes the preferred method. Ramjets use the high-speed characteristics of air to literally 'ram' air through an inlet diffuser into the combustor. At transonic and supersonic flight speeds, the air upstream of the inlet is not able to move out of the way quickly enough, and is compressed within the diffuser before entering the combustor. Combustion in a ramjet takes place at subsonic velocities, similar to turbojets, but the combustion products are then accelerated through a convergent-divergent nozzle to supersonic speeds. As they have no mechanical means of compression, ramjets cannot start from a standstill, and generally do not achieve sufficient compression until supersonic flight. The lack of intricate turbomachinery allows ramjets to deal with the temperature rise associated with decelerating a supersonic flow to subsonic speeds. However, as speed rises, the internal energy of the flow downstream of the diffuser grows rapidly, so the relative addition of energy from fuel combustion becomes smaller, decreasing the engine's efficiency and hence the thrust a ramjet can generate at higher speeds. Thus, to generate thrust at very high velocities, the rise of the pressure and temperature of the incoming air flow must be tightly controlled. In particular, this means that deceleration of the airflow to subsonic speed cannot be allowed. Mixing the fuel and air in this situation presents a considerable engineering challenge, compounded by the need to closely manage the speed of combustion while maximizing the relative increase of internal energy within the combustion chamber. Consequently, current scramjet technology requires the use of high-energy fuels and active cooling schemes to maintain sustained operation, often using hydrogen and regenerative cooling techniques. Theory All scramjet engines have an intake which compresses the incoming air, fuel injectors, a combustion chamber, and a divergent thrust nozzle. Sometimes engines also include a region which acts as a flame holder, although the high stagnation temperatures mean that an area of focused waves may be used, rather than a discrete engine part as seen in turbine engines.
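The temperature penalty for decelerating a fast stream, which is why subsonic-combustion engines fail at high Mach numbers, can be made concrete with the stagnation-temperature relation T0 = T(1 + (γ−1)/2 · M²). The sketch below is illustrative only: the calorically perfect gas model (γ = 1.4) and the 220 K ambient static temperature are assumptions chosen for the example, not figures from the text.

```python
# Stagnation temperature of air brought isentropically to rest, showing
# why a flow cannot be decelerated to subsonic speeds at high Mach numbers.
# Assumes a calorically perfect gas (gamma = 1.4) and a 220 K ambient
# static temperature typical of the stratosphere (both assumptions).

GAMMA = 1.4          # ratio of specific heats for air
T_STATIC = 220.0     # ambient static temperature, K

def stagnation_temperature(mach: float, t_static: float = T_STATIC) -> float:
    """Total temperature T0 = T * (1 + (gamma - 1)/2 * M^2)."""
    return t_static * (1.0 + 0.5 * (GAMMA - 1.0) * mach**2)

for mach in (3, 4, 6, 8, 10):
    print(f"Mach {mach:>2}: T0 ≈ {stagnation_temperature(mach):6.0f} K")
```

At Mach 8 the recovered temperature already exceeds 3,000 K, far beyond what compressor blades or a subsonic combustor can tolerate, which is the motivation for keeping the flow supersonic through the engine.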
Other engines use pyrophoric fuel additives, such as silane, to avoid flameout. An isolator between the inlet and combustion chamber is often included to improve the homogeneity of the flow in the combustor and to extend the operating range of the engine. Shockwave imaging by the University of Maryland using Schlieren imaging determined that the fuel mixture controls compression by creating backpressure and shockwaves that slow and compress the air before ignition, much like the shock cone of a Ramjet. The imaging showed that the higher the fuel flow and combustion, the more shockwaves formed ahead of the combustor, which slowed and compressed the air before ignition. A scramjet is reminiscent of a ramjet. In a typical ramjet, the supersonic inflow of the engine is decelerated at the inlet to subsonic speeds and then reaccelerated through a nozzle to supersonic speeds to produce thrust. This deceleration, which is produced by a normal shock, creates a total pressure loss which limits the upper operating point of a ramjet engine. For a scramjet, the kinetic energy of the freestream air entering the scramjet engine is largely comparable to the energy released by the reaction of the oxygen content of the air with a fuel (e.g. hydrogen). Thus the heat released from combustion at Mach2.5 is around 10% of the total enthalpy of the working fluid. Depending on the fuel, the kinetic energy of the air and the potential combustion heat release will be equal at around Mach8. Thus the design of a scramjet engine is as much about minimizing drag as maximizing thrust. This high speed makes the control of the flow within the combustion chamber more difficult. Since the flow is supersonic, no downstream influence propagates within the freestream of the combustion chamber. Throttling of the entrance to the thrust nozzle is not a usable control technique. In effect, a block of gas entering the combustion chamber must mix with fuel and have sufficient time for initiation and reaction, all the while traveling supersonically through the combustion chamber, before the burned gas is expanded through the thrust nozzle. This places stringent requirements on the pressure and temperature of the flow, and requires that the fuel injection and mixing be extremely efficient. Usable dynamic pressures lie in the range , where where q is the dynamic pressure of the gas ρ (rho) is the density of the gas v is the velocity of the gas To keep the combustion rate of the fuel constant, the pressure and temperature in the engine must also be constant. This is problematic because the airflow control systems that would facilitate this are not physically possible in a scramjet launch vehicle due to the large speed and altitude range involved, meaning that it must travel at an altitude specific to its speed. Because air density reduces at higher altitudes, a scramjet must climb at a specific rate as it accelerates to maintain a constant air pressure at the intake. This optimal climb/descent profile is called a "constant dynamic pressure path". It is thought that scramjets might be operable up to an altitude of 75 km. Fuel injection and management is also potentially complex. One possibility would be that the fuel be pressurized to 100 bar by a turbo pump, heated by the fuselage, sent through the turbine and accelerated to higher speeds than the air by a nozzle. The air and fuel stream are crossed in a comb-like structure, which generates a large interface. Turbulence due to the higher speed of the fuel leads to additional mixing. 
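The "constant dynamic pressure path" follows directly from q = ρv²/2: as the vehicle accelerates, it must climb to where the air is thin enough to hold q fixed. The sketch below solves for that altitude using an isothermal exponential atmosphere; the sea-level density, the scale height, the chosen q of 50 kPa and the crude speed-of-sound value are all assumptions for illustration.

```python
import math

# Altitude along a constant dynamic pressure path, q = 0.5 * rho * v^2,
# using an isothermal exponential atmosphere rho(h) = rho0 * exp(-h / H).
# rho0, H, Q and the speed of sound below are illustrative assumptions.

RHO0 = 1.225      # sea-level air density, kg/m^3
H_SCALE = 7200.0  # atmospheric density scale height, m
Q = 50e3          # target dynamic pressure, Pa (assumed design value)

def altitude_for_constant_q(v: float, q: float = Q) -> float:
    """Solve q = 0.5 * rho0 * exp(-h/H) * v^2 for the altitude h (metres)."""
    return H_SCALE * math.log(RHO0 * v * v / (2.0 * q))

for mach in (5, 8, 12, 15):
    v = mach * 300.0  # rough speed of sound ~300 m/s at altitude (assumption)
    h_km = altitude_for_constant_q(v) / 1000.0
    print(f"Mach {mach:>2} ({v:5.0f} m/s): h ≈ {h_km:5.1f} km")
```

Doubling the speed raises the required altitude by about 2·H·ln 2 ≈ 10 km in this model, which is why the trajectory is a climb profile tied to acceleration rather than a free design choice.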
Complex fuels like kerosene need a long engine to complete combustion. The minimum Mach number at which a scramjet can operate is limited by the fact that the compressed flow must be hot enough to burn the fuel, and have a pressure high enough that the reaction finishes before the air moves out of the back of the engine. Additionally, to be called a scramjet, the compressed flow must still be supersonic after combustion. Here two limits must be observed: First, since when a supersonic flow is compressed it slows down, the level of compression must be low enough (or the initial speed high enough) not to slow the gas below Mach 1. If the gas within a scramjet goes below Mach 1 the engine will "choke", transitioning to subsonic flow in the combustion chamber. This effect is well known amongst experimenters on scramjets since the waves caused by choking are easily observable. Additionally, the sudden increase in pressure and temperature in the engine can accelerate the combustion enough to cause the combustion chamber to explode. Second, the heating of the gas by combustion causes the speed of sound in the gas to increase (and the Mach number to decrease) even though the gas is still travelling at the same speed. Forcing the speed of air flow in the combustion chamber under Mach 1 in this way is called "thermal choking". It is clear that a pure scramjet can operate at Mach numbers of 6–8, but the lower limit depends on the definition of a scramjet. There are engine designs where a ramjet transforms into a scramjet over the Mach 3–6 range, known as dual-mode scramjets. In this range, however, the engine still receives significant thrust from subsonic combustion of the ramjet type. The high cost of flight testing and the unavailability of ground facilities have hindered scramjet development. A large amount of the experimental work on scramjets has been undertaken in cryogenic facilities, direct-connect tests, or burners, each of which simulates one aspect of the engine operation. Further, vitiated facilities (with the ability to control air impurities), storage heated facilities, arc facilities and the various types of shock tunnels each have limitations which have prevented perfect simulation of scramjet operation. The HyShot flight test showed the relevance of the 1:1 simulation of conditions in the T4 and HEG shock tunnels, despite having cold models and a short test time. The NASA-CIAM tests provided similar verification for CIAM's C-16 V/K facility, and the Hyper-X project is expected to provide similar verification for the Langley AHSTF, CHSTF, and HTT. Computational fluid dynamics has only recently reached a position where reasonable computations can be made in solving scramjet operation problems. Boundary layer modeling, turbulent mixing, two-phase flow, flow separation, and real-gas aerothermodynamics continue to be problems on the cutting edge of CFD. Additionally, the modeling of kinetic-limited combustion with very fast-reacting species such as hydrogen makes severe demands on computing resources. The reaction schemes are numerically stiff, requiring the use of reduced mechanisms. Much of scramjet experimentation remains classified. Several groups, including the US Navy with the SCRAM engine between 1968 and 1974, and the Hyper-X program with the X-43A, have claimed successful demonstrations of scramjet technology. Since these results have not been published openly, they remain unverified and a final design method of scramjet engines still does not exist.
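Thermal choking can be estimated with the standard Rayleigh-flow model (frictionless heat addition in a constant-area duct), which is a textbook idealization rather than anything specific to the engines above; the combustor entry Mach number and total temperature in the sketch are assumed values.

```python
# Rayleigh-flow estimate of thermal choking: the maximum heat that can be
# added to a constant-area supersonic stream before it is driven to Mach 1.

GAMMA = 1.4   # ratio of specific heats for air
CP = 1005.0   # specific heat at constant pressure, J/(kg K)

def t0_ratio(mach: float) -> float:
    """T0 / T0* for Rayleigh flow, where T0* is the total temperature
    the stream would have once heat addition drives it to Mach 1."""
    g = GAMMA
    return ((g + 1) * mach**2 * (2 + (g - 1) * mach**2)) / (1 + g * mach**2) ** 2

def max_heat_addition(mach_in: float, t0_in: float) -> float:
    """Heat per kg of air that takes the flow from mach_in down to Mach 1."""
    t0_star = t0_in / t0_ratio(mach_in)
    return CP * (t0_star - t0_in)

# Combustor entry at Mach 3 with a 1500 K total temperature (assumptions):
q_max = max_heat_addition(3.0, 1500.0)
print(f"max heat addition before thermal choking ≈ {q_max / 1e6:.2f} MJ/kg air")
```

The result, roughly 1.9 MJ per kg of air under these assumptions, is well below the ~3.5 MJ/kg that stoichiometric hydrogen combustion could release, illustrating why heat release in a scramjet combustor must be carefully staged.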
The final application of a scramjet engine is likely to be in conjunction with engines which can operate outside the scramjet's operating range. Dual-mode scramjets combine subsonic combustion with supersonic combustion for operation at lower speeds, and rocket-based combined cycle (RBCC) engines supplement a traditional rocket's propulsion with a scramjet, allowing for additional oxidizer to be added to the scramjet flow. RBCCs offer a possibility to extend a scramjet's operating range to higher speeds or lower intake dynamic pressures than would otherwise be possible. Characteristics Aircraft Advantages: a scramjet does not have to carry oxygen; having no rotating parts, it is easier to manufacture than a turbojet; it has a higher specific impulse (change in momentum per unit of propellant) than a rocket engine, potentially providing between 1,000 and 4,000 seconds, while a rocket typically provides around 450 seconds or less; and its higher speed could mean cheaper access to outer space in the future. Disadvantages: testing and development are difficult and expensive, and the initial propulsion requirements are very high. Unlike a rocket that quickly passes mostly vertically through the atmosphere or a turbojet or ramjet that flies at much lower speeds, a hypersonic airbreathing vehicle optimally flies a "depressed trajectory", staying within the atmosphere at hypersonic speeds. Because scramjets have only mediocre thrust-to-weight ratios, acceleration would be limited. Therefore, time in the atmosphere at supersonic speed would be considerable, possibly 15–30 minutes. Similar to a reentering space vehicle, heat insulation would be a formidable task, with protection required for a duration longer than that of a typical space capsule, although less than the Space Shuttle. New materials offer good insulation at high temperature, but they often sacrifice themselves in the process. Therefore, studies often plan on "active cooling", where coolant circulating throughout the vehicle skin prevents it from disintegrating. Often the coolant is the fuel itself, in much the same way that modern rockets use their own fuel and oxidizer as coolant for their engines. All cooling systems add weight and complexity to a launch system. The cooling of scramjets in this way may result in greater efficiency, as heat is added to the fuel prior to entry into the engine, but results in increased complexity and weight which ultimately could outweigh any performance gains. The performance of a launch system is complex and depends greatly on its weight. Normally craft are designed to maximise range (R), orbital radius, or payload mass fraction (Π_p) for a given engine and fuel. This results in tradeoffs between the efficiency of the engine (takeoff fuel weight) and the complexity of the engine (takeoff dry weight), which can be expressed by the following: Π_e + Π_f + Π_p = 1, where Π_e = m_empty / m_initial is the empty mass fraction, representing the weight of the superstructure, tankage and engine; Π_f = m_fuel / m_initial is the fuel mass fraction, representing the weight of fuel, oxidiser and any other materials which are consumed during the launch; and r_p = m_initial / m_payload is the initial mass ratio, the inverse of the payload mass fraction Π_p, which represents how much payload the vehicle can deliver to a destination. A scramjet increases the mass of the motor (Π_e) over a rocket, and decreases the mass of the fuel (Π_f). It can be difficult to decide whether this will result in an increased Π_p (which would be an increased payload delivered to a destination for a constant vehicle takeoff weight).
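The mass-fraction identity above is just bookkeeping, but it makes the trade visible in numbers. In the sketch below, both sets of input fractions are invented for illustration; they are not figures from any particular vehicle study.

```python
# Takeoff-mass bookkeeping: empty, fuel and payload fractions sum to one.
# Both designs below use made-up numbers purely to illustrate the trade.

def payload_fraction(empty_frac: float, fuel_frac: float) -> float:
    """Pi_p = 1 - Pi_e - Pi_f; a negative result means the design can't close."""
    return 1.0 - empty_frac - fuel_frac

# A rocket-like design: light engines and structure, mostly propellant.
print(payload_fraction(empty_frac=0.10, fuel_frac=0.87))   # -> 0.03
# A scramjet-like design: heavier engines/airframe, much less propellant.
print(payload_fraction(empty_frac=0.35, fuel_frac=0.55))   # -> 0.10
```

Whether the fuel saving outruns the structural penalty is exactly the uncertainty the next paragraph describes: small shifts in either assumed fraction flip which design delivers more payload.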
The logic behind efforts driving a scramjet is (for example) that the reduction in fuel decreases the total mass by 30%, while the increased engine weight adds 10% to the vehicle total mass. Unfortunately the uncertainty in the calculation of any mass or efficiency changes in a vehicle is so great that slightly different assumptions for engine efficiency or mass can provide equally good arguments for or against scramjet powered vehicles. Additionally, the drag of the new configuration must be considered. The drag of the total configuration can be considered as the sum of the vehicle drag (D_v) and the engine installation drag (D_e). The installation drag traditionally results from the pylons and the coupled flow due to the engine jet, and is a function of the throttle setting. Thus it is often written as D_e = φ_e · F, where φ_e is the loss coefficient and F is the thrust of the engine. For an engine strongly integrated into the aerodynamic body, it may be more convenient to think of the installation drag D_e as the difference in drag from a known base configuration. The overall engine efficiency can be represented as a value between 0 and 1 (η_0), in terms of the specific impulse of the engine: η_0 = g · I_sp · V / h_f, where g is the acceleration due to gravity at ground level, V is the vehicle speed, I_sp is the specific impulse and h_f is the fuel heat of reaction. Specific impulse is often used as the unit of efficiency for rockets, since in the case of the rocket, there is a direct relation between specific impulse, specific fuel consumption and exhaust velocity. This direct relation is not generally present for airbreathing engines, and so specific impulse is less used in the literature. Note that for an airbreathing engine, both η_0 and I_sp are a function of velocity. The specific impulse of a rocket engine is independent of velocity, and common values are between 200 and 600 seconds (450 s for the space shuttle main engines). The specific impulse of a scramjet varies with velocity, reducing at higher speeds, starting at about 1,200 s, although values in the literature vary. For the simple case of a single stage vehicle, the fuel mass fraction can be expressed in terms of the velocity increment Δv to be delivered: Π_f = 1 − exp(−Δv / (g · I_sp)), which for single stage transfer to orbit gives the fuel fraction needed for the orbital velocity increment, or, for level atmospheric flight from air launch (missile flight), Π_f = 1 − exp(−R / (V · I_sp · (C_L/C_D))), where R is the range; the calculation can equivalently be expressed in the form of the Breguet range formula, R = V · I_sp · (C_L/C_D) · ln(1/(1 − Π_f)), where C_L is the lift coefficient and C_D is the drag coefficient. This extremely simple formulation, used here for the purposes of discussion, assumes a single stage vehicle and no aerodynamic lift for the transatmospheric lifter; the relations, however, hold broadly for all engines. A scramjet cannot produce efficient thrust unless boosted to high speed, around Mach 5, although depending on the design it could act as a ramjet at low speeds. A horizontal take-off aircraft would need conventional turbofan, turbojet, or rocket engines to take off, sufficiently large to move a heavy craft. Also needed would be fuel for those engines, plus all engine-associated mounting structure and control systems. Turbofan and turbojet engines are heavy and cannot easily exceed about Mach 2–3, so another propulsion method would be needed to reach scramjet operating speed. That could be ramjets or rockets. Those would also need their own separate fuel supply, structure, and systems. Many proposals instead call for a first stage of droppable solid rocket boosters, which greatly simplifies the design.
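The two fuel-fraction expressions above can be exercised numerically. The sketch below does so with entirely assumed inputs (effective delta-v, I_sp values, lift-to-drag ratio, cruise speed and range); note also that applying the rocket-equation form with a single fixed I_sp flatters an airbreather, whose I_sp falls off with speed as described above.

```python
import math

# Fuel mass fraction from the two expressions in the text.
# All numeric inputs below are illustrative assumptions.

G0 = 9.81  # standard gravity, m/s^2

def fuel_fraction_orbit(delta_v: float, isp: float) -> float:
    """Single stage to orbit: Pi_f = 1 - exp(-delta_v / (g * Isp))."""
    return 1.0 - math.exp(-delta_v / (G0 * isp))

def fuel_fraction_cruise(rng: float, v: float, isp: float, l_over_d: float) -> float:
    """Breguet cruise: Pi_f = 1 - exp(-R / (V * Isp * (CL/CD)))."""
    return 1.0 - math.exp(-rng / (v * isp * l_over_d))

# ~9.2 km/s effective delta-v to orbit, losses included (assumption):
print(f"rocket   (Isp  450 s): Pi_f = {fuel_fraction_orbit(9200, 450):.2f}")
print(f"scramjet (Isp 1200 s): Pi_f = {fuel_fraction_orbit(9200, 1200):.2f}")
# 5,000 km cruise at ~Mach 6 (1800 m/s) with CL/CD = 4 (all assumed):
print(f"cruise vehicle:        Pi_f = {fuel_fraction_cruise(5e6, 1800, 1200, 4):.2f}")
```

Under these assumptions the fuel fraction drops from about 0.88 to about 0.54 for the orbital case, which is the headline argument for scramjets; the counter-arguments (engine weight, drag, falling I_sp) live in the terms this toy calculation holds fixed.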
Unlike jet or rocket propulsion systems, which can be tested on the ground, testing scramjet designs uses extremely expensive hypersonic test chambers or expensive launch vehicles, both of which lead to high instrumentation costs. Tests using launched test vehicles very typically end with destruction of the test item and instrumentation. Orbital vehicles An advantage of a hypersonic airbreathing (typically scramjet) vehicle like the X-30 is avoiding or at least reducing the need for carrying oxidizer. For example, the Space Shuttle external tank held 616,432.2 kg of liquid oxygen (LOX) and 103,000 kg of liquid hydrogen (LH2) while having an empty weight of 30,000 kg. The orbiter gross weight was 109,000 kg with a maximum payload of about 25,000 kg, and to get the assembly off the launch pad the shuttle used two very powerful solid rocket boosters with a weight of 590,000 kg each. If the oxygen could be eliminated, the vehicle could be lighter at liftoff and possibly carry more payload. On the other hand, scramjets spend more time in the atmosphere and require more hydrogen fuel to deal with aerodynamic drag. Whereas liquid oxygen is quite a dense fluid (1141 kg/m3), liquid hydrogen has much lower density (70.85 kg/m3) and takes up more volume. This means that the vehicle using this fuel becomes much bigger and gives more drag. Other fuels have more comparable density, such as RP-1 (810 kg/m3), JP-7 (779–806 kg/m3 at 15 °C) and unsymmetrical dimethylhydrazine (UDMH) (793 kg/m3). One issue is that scramjet engines are predicted to have an exceptionally poor thrust-to-weight ratio of around 2 when installed in a launch vehicle. A rocket has the advantage that its engines have very high thrust-to-weight ratios (~100:1), while the tank to hold the liquid oxygen approaches a tankage ratio of ~100:1 also. Thus a rocket can achieve a very high mass fraction, which improves performance. By way of contrast, the projected thrust-to-weight ratio of scramjet engines of about 2 means a much larger percentage of the takeoff mass is engine (ignoring that this fraction increases anyway by a factor of about four due to the lack of onboard oxidiser). In addition the vehicle's lower thrust does not necessarily avoid the need for the expensive, bulky, and failure-prone high performance turbopumps found in conventional liquid-fuelled rocket engines, since most scramjet designs seem to be incapable of orbital speeds in airbreathing mode, and hence extra rocket engines are needed. Scramjets might be able to accelerate from approximately Mach 5–7 to somewhere between half of orbital speed and orbital speed (X-30 research suggested that Mach 17 might be the limit, compared to an orbital speed of Mach 25, and other studies put the upper speed limit for a pure scramjet engine between Mach 10 and 25, depending on the assumptions made). Generally, another propulsion system (very typically, a rocket is proposed) is expected to be needed for the final acceleration into orbit. Since the delta-V is moderate and the payload fraction of scramjets high, lower performance rockets such as solids, hypergolics, or simple liquid fueled boosters might be acceptable. Theoretical projections place the top speed of a scramjet between Mach 12 and Mach 24. For comparison, the orbital speed at low Earth orbit is about 7.8 km/s (roughly Mach 25). The scramjet's heat-resistant underside potentially doubles as its reentry system if a single-stage-to-orbit vehicle using non-ablative, non-active cooling is visualised.
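The density figures quoted above translate directly into tank volume, which is the real driver of airframe size and drag. A minimal sketch, using only the densities from the text (the 100-tonne reference mass is an arbitrary assumption for comparison):

```python
# Tank volume implied by the propellant densities quoted in the text.
# The 100 t reference mass is arbitrary; only the densities come from above.

DENSITIES = {          # kg/m^3
    "LOX":  1141.0,
    "LH2":    70.85,
    "RP-1":  810.0,
    "JP-7":  792.0,    # midpoint of the quoted 779-806 kg/m^3 range
    "UDMH":  793.0,
}

def tank_volume(mass_kg: float, propellant: str) -> float:
    """Volume in cubic metres needed to store mass_kg of the propellant."""
    return mass_kg / DENSITIES[propellant]

for name in DENSITIES:
    print(f"100 t of {name:4}: {tank_volume(100_000, name):7.1f} m^3")
```

Storing 100 t of liquid hydrogen needs roughly sixteen times the volume of the same mass of liquid oxygen, which is why a hydrogen-fuelled scramjet vehicle tends to be large even after the oxidizer is removed.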
If an ablative shielding is used on the engine it will probably not be usable after ascent to orbit. If active cooling is used with the fuel as coolant, the loss of all fuel during the burn to orbit will also mean the loss of all cooling for the thermal protection system. Reducing the amount of fuel and oxidizer does not necessarily improve costs, as rocket propellants are comparatively very cheap. Indeed, the unit cost of the vehicle can be expected to end up far higher, since aerospace hardware cost is about two orders of magnitude higher than liquid oxygen, fuel and tankage, and scramjet hardware seems to be much heavier than rockets for any given payload. Still, if scramjets enable reusable vehicles, this could theoretically be a cost benefit. Whether equipment subject to the extreme conditions of a scramjet can be reused sufficiently many times is unclear; all scramjets flown to date have operated only for short periods, and none has been designed to survive a full flight. The eventual cost of such a vehicle is the subject of intense debate, since even the best estimates disagree whether a scramjet vehicle would be advantageous. It is likely that a scramjet vehicle would need to lift more load than a rocket of equal takeoff weight in order to be equally cost-efficient (if the scramjet is a non-reusable vehicle). Space launch vehicles may or may not benefit from having a scramjet stage. A scramjet stage of a launch vehicle theoretically provides a specific impulse of 1,000 to 4,000 s, whereas a rocket provides less than 450 s while in the atmosphere. A scramjet's specific impulse decreases rapidly with speed, however, and the vehicle would suffer from a relatively low lift-to-drag ratio. The installed thrust-to-weight ratio of scramjets compares very unfavorably with the 50–100 of a typical rocket engine. This is compensated for in scramjets partly because the weight of the vehicle would be carried by aerodynamic lift rather than pure rocket power (giving reduced 'gravity losses'), but scramjets would take much longer to get to orbit due to lower thrust, which greatly offsets the advantage. The takeoff weight of a scramjet vehicle is significantly reduced over that of a rocket, due to the lack of onboard oxidiser, but increased by the structural requirements of the larger and heavier engines. Whether this vehicle could be reusable or not is still a subject of debate and research. Proposed applications An aircraft using this type of jet engine could dramatically reduce the time it takes to travel from one place to another, potentially putting any place on Earth within a 90-minute flight. However, there are questions about whether such a vehicle could carry enough fuel to make useful-length trips. In addition, some countries ban or penalize airliners and other civil aircraft that create sonic booms. (For example, in the United States, FAA regulations prohibit supersonic flights over land by civil aircraft.) A scramjet vehicle has also been proposed for a single-stage-to-tether concept, in which a Mach 12 spinning orbital tether would pick up a payload from the vehicle at an altitude of around 100 km and carry it to orbit. See also Avangard (hypersonic glide vehicle) Precooled jet engine Ram accelerator Shcramjet SABRE (rocket engine) References Citations Bibliography Aerospaceplane – 1961. Aerospace Projects Review, Volume 2, No 5. Aspects of the Aerospace Plane. Flight International, 2 January 1964, pages 36–37.
External links Aircraft engines Jet engines Spacecraft propulsion Single-stage-to-orbit Space access Non-rocket spacelaunch Australian inventions
Scramjet
Technology
7,281
47,668,419
https://en.wikipedia.org/wiki/Xanthoconium%20purpureum
Xanthoconium purpureum is a species of bolete fungus in the genus Xanthoconium. It was described as new to science in 1962 by Wally Snell and Esther Dick. It is found in eastern North America, where it fruits under oak, sometimes in oak-pine forests. See also List of North American boletes References External links Boletaceae Fungi described in 1962 Fungus species
Xanthoconium purpureum
Biology
88
2,815,135
https://en.wikipedia.org/wiki/Billion%20Electric
Billion Electric Co. Ltd. (Taiex: #3027), based in Taiwan, is an electronics company founded in 1973. Its range of ADSL modem/routers was introduced into Australia in 2002. Since then, features have been added including 4-port switches, wireless, VoIP, and VPN termination. See also List of companies of Taiwan List of networking hardware vendors References "Billion gives IPv6 to X-Series customers in New Year". Impress Media Australia. 26 November 2010. Retrieved 28 November 2010. External links Computer companies of Taiwan Computer hardware companies Electronics companies established in 1973 Electronics companies of Taiwan Networking hardware companies Manufacturing companies based in New Taipei 1973 establishments in Taiwan
Billion Electric
Technology
141
50,897,056
https://en.wikipedia.org/wiki/VELCT
Velocity Energy-efficient and Link-aware Cluster-Tree (VELCT) is a cluster and tree-based topology management protocol for mobile wireless sensor networks (MWSNs). See also DCN DCT CIDT References Wireless networking
VELCT
Technology,Engineering
50
13,478,498
https://en.wikipedia.org/wiki/Digital%20Preservation%20Award
The Digital Preservation Award is an international award sponsored by the Digital Preservation Coalition. The award 'recognises the many new initiatives being undertaken in the challenging field of digital preservation'. It was inaugurated in 2004 and was initially presented as part of the Institute of Conservation's Conservation Awards. Since 2012 the prize, which includes a trophy and a cheque, has been presented independently. Awards ceremonies have taken place at the British Library, the British Museum and the Wellcome Trust. Winners and shortlisted entries 2004 Winner The Digital Archive: The National Archives of the United Kingdom Shortlisted The CAMiLEON Project: University of Leeds & University of Michigan (Special Commendation) JISC Continuing Access and Digital Preservation Strategy: Jisc Preservation Metadata Extraction Tool: National Library of New Zealand Wellcome Library/JISC Web Archiving Project: Wellcome Library & Jisc 2005 Winner PREMIS (Preservation Metadata: Implementation Strategies): PREMIS Working Group Shortlisted Choosing the optimal digital preservation strategy: Vienna University of Technology Digital Preservation Testbed: National Archives of the Netherlands Reverse Standards Conversion: British Broadcasting Corporation UK Web Archiving Consortium 2007 Winner Active Preservation at The National Archives - PRONOM Technical Registry and DROID file format identification tool: The National Archives of the United Kingdom Shortlisted LIFE: British Library Web Curator Tool software development project: National Library of New Zealand & British Library PARADIGM (The Personal Archives Accessible in Digital Media): Bodleian Library, University of Oxford, & John Rylands University Library, University of Manchester Digital Repository Audit and Certification: Center for Research Libraries, RLG-OCLC, NARA, Digital Curation Centre, Digital Preservation Europe and NESTOR 2010 Winner The Memento Project: Time Travel for the Web: Old Dominion University and the Los Alamos National Laboratory in the United States Shortlisted Web Continuity: ensuring access to online government information, from The National Archives UK PLATO 3: Preservation Planning made simple from Vienna University of Technology and the PLANETS Project The Blue Ribbon Task Force on Sustainable Digital Preservation and Access Preserving Virtual Worlds, University of Illinois at Urbana Champaign with Rochester Institute of Technology, University of Maryland, Stanford University and Linden Lab in the United States 2012 Winner - outstanding contribution to teaching and communication in digital preservation in the last 2 years The Digital Preservation Training Programme, University of London Computing Centre Shortlisted - outstanding contribution to teaching and communication in digital preservation in the last 2 years The Signal, Library of Congress Keeping Research Data Safe Project, Charles Beagrie Ltd and partners Digital Archaeology Exhibition, Story Worldwide Ltd Winner - outstanding contribution to research and innovation in digital preservation in the last two years The PLANETS Project Preservation and Long-term Access through Networked Services, The Open Planets Foundation and partners Shortlisted - outstanding contribution to research and innovation in digital preservation in the last two years Data Management Planning Toolkit, The Digital Curation Centre and partners TOTEM Trustworthy Online Technical Environment Metadata Registry, University of Portsmouth and partners The KEEP Emulation Framework, Koninklijke
Bibliotheek (National Library of the Netherlands) and partners Winner - most outstanding contribution to digital preservation in the last decade The Archaeology Data Service at the University of York Shortlisted - most outstanding contribution to digital preservation in the last decade The International Internet Preservation Consortium The National Archives for the PRONOM and DROID services The PREMIS Preservation Metadata Working Group for the PREMIS Standard 2014 Winner - OPF Award for Research and Innovation bwFLA Functional Long Term Archiving and Access by the University of Freiburg and partners Shortlisted - OPF Award for Research and Innovation Jpylyzer by the KB (National Library of the Netherlands) and partners The SPRUCE Project by The University of Leeds and partners Winner - NCDD Award for Teaching and Communications Practical Digital Preservation: a how to guide for organizations of any size by Adrian Brown Shortlisted - NCDD Award for Teaching and Communications Skilling the Information Professional by Aberystwyth University Introduction to Digital Curation: An open online UCLeXtend Course by University College London Winner - Award for the Most Distinguished Student Work in Digital Preservation Game Preservation in the UK by Alasdair Bachell, University of Glasgow Shortlisted - Award for the Most Distinguished Student Work in Digital Preservation Voices from a Disused Quarry by Kerry Evans, Ann MacDonald and Sarah Vaughan, University of Aberystwyth and partners Emulation v Format Conversion by Victoria Sloyan, University College London Winner - Award for Safeguarding the Digital Legacy Carcanet Press Email Archive, University of Manchester Shortlisted - Award for Safeguarding the Digital Legacy Conservation and Re-enactment of Digital Art Ready-Made, by the University of Freiburg and Rhizome Inspiring Ireland, Digital Repository of Ireland and Partners The Cloud and the Cow, Archives and Records Council of Wales 2016 Winner - SSI Award for Research and Innovation NCDD and NDE, ‘Constructing a network of nationwide facilities together.’ Winner - NCDD Award for Teaching and Communications The National Archives and The Scottish Council on Archives: ‘Transforming Archives/Opening Up Scotland’s Archives.’ Winner - Award for the Most Distinguished Student Work in Digital Preservation Anthea Seles, University College London and ‘The Transferability of Trusted Digital Repository Standards to an East African context.’ Winner - The National Archives Award for Safeguarding the Digital Legacy Amsterdam Museum and Partners, ‘The Digital City revives: A case study of web archaeology.’ Winner - DPC Award for the Most Outstanding Digital Preservation Initiative in Industry HSBC, 'The Global Digital Archive' DPC Fellowship Brewster Kahle, the Internet Archive Full List of Finalists 2016 List of Finalists 2018 Winner - Software Sustainability Institute Award for Research and Innovation ePADD, University of Stanford Shortlisted - Software Sustainability Institute Award for Research and Innovation VeraPDF, Open Preservation Foundation Contributions towards Defining the Discipline, Sarah Higgins - Aberystwyth University Flashback: Preservation of legacy digital collections, British Library Winner - DPC Award for Teaching and Communications The Archivist’s Guide to KryoFlux, Universities of Texas, Duke, Los Angeles, Yale and Emory Shortlisted - DPC Award for Teaching and Communications Evidence-based postgraduate education in digital information management, University College Dublin Leren 
Preserveren (Learning Digital Preservation), Digital Heritage Network and Het Nieuwe Instituut Ibadan/Liverpool Digital Curation Curriculum Review Project, Universities of Ibadan and Liverpool Winner - National Records of Scotland Award for the Most Distinguished Student Work in Digital Preservation 'Navigating the PDF/A Standard: A Case Study of Theses' by Anna Oates, University of Illinois at Urbana-Champaign Shortlisted - National Records of Scotland Award for the Most Distinguished Student Work in Digital Preservation 'Preserving the past: the challenge of digital archiving within a Scottish Local Authority' by Lorraine Murray, University of Glasgow 'Essay on the record-making and record-keeping issues implicit in Wearables' by Philippa Turner, University of Liverpool Winner - Open Data Institute Award for the Most Outstanding Digital Preservation Initiative in Commerce, Industry and the Third sector Archiving Crossrail - Europe’s largest infrastructure project, Crossrail and Transport for London Shortlisted - Open Data Institute Award for the Most Outstanding Digital Preservation Initiative in Commerce, Industry and the Third sector Music Treasures, Stichting Omroep Muziek (SOM) Heritage preservation of contemporary dance and choreography through research and innovation in digital documentation and annotation of creative processes, ICKamsterdam and Motion Bank Winner - The National Archives Award for Safeguarding the Digital Legacy IFI Open Source tools: IFIscripts/ Loopline project, IFI Irish Film Archive Shortlisted - The National Archives Award for Safeguarding the Digital Legacy Cloud-Enabled Preservation of Life in the 20th Century White House, White House Historical Association Digital Library Design, Deliver, Embed: Establishing Digital Transfer in Parliament, UK Parliamentary Archives Local Authority Digital Preservation Consortium: Dorset History Centre, West Sussex Records Office, Wiltshire & Swindon History Centre DPC Fellowship Barbara Sierman, KB Netherlands See also List of computer science awards Digital preservation Digital Preservation Coalition References External links Digital Preservation Awards website Conservation Awards website Digital preservation Computer science awards
Digital Preservation Award
Technology
1,611
51,153,954
https://en.wikipedia.org/wiki/Stuart%20Dalziel
Stuart Bruce Dalziel is a British and New Zealand fluid dynamicist. He is currently based at the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, where he has directed the GKB Laboratory since 1997. He was promoted to the rank of Professor in 2016. Dalziel completed his PhD in Cambridge in 1988, under the supervision of Paul Linden. Dalziel's research areas include stratified turbulence and internal gravity waves. References Year of birth missing (living people) Living people Fluid dynamicists British physicists Fellows of the American Physical Society 20th-century New Zealand physicists 21st-century British physicists Alumni of the University of Cambridge
Stuart Dalziel
Chemistry
136
30,443,699
https://en.wikipedia.org/wiki/PANDIT%20%28database%29
PANDIT is a database of multiple sequence alignments and phylogenetic trees covering many common protein domains. See also Pfam: database of protein domains Phylogeny Sequence alignment References External links http://www.ebi.ac.uk/goldman-srv/pandit Biological databases Computational phylogenetics Genetics in the United Kingdom Protein domains Science and technology in Cambridgeshire South Cambridgeshire District
PANDIT (database)
Biology
78
39,536,084
https://en.wikipedia.org/wiki/Bruce%20Sagan
Bruce Eli Sagan (born March 29, 1954) is an American Professor of Mathematics at Michigan State University. He specializes in enumerative, algebraic, and topological combinatorics. He is also known as a musician, playing music from Scandinavia and the Balkans. Early life Sagan is the son of Eugene Benjamin Sagan and Arlene Kaufmann Sagan. He grew up in Berkeley, California. He started playing classical violin at a young age under the influence of his mother who was a music teacher and conductor. He received his B.S. in mathematics (1974) from California State University, East Bay (then called California State University, Hayward). He received his Ph.D. in mathematics (1979) from the Massachusetts Institute of Technology. His doctoral thesis "Partially Ordered Sets with Hooklengths – an Algorithmic Approach" was supervised by Richard P. Stanley. He was Stanley's third doctoral student. During his graduate school years he also joined and became music director of the Mandala Folkdance Ensemble. Mathematical career Sagan held postdoctoral positions at Université Louis Pasteur (1979–1980), the University of Michigan (1980–1983), University College of Wales, Aberystwyth, Middlebury College (1984–1985), the University of Pennsylvania, and Université du Québec à Montréal (Fall, 1985), before becoming a faculty member at MSU in the Spring of 1986. He has held visiting positions at the Institute for Mathematics and its Applications (Spring, 1988), UCSD (Spring, 1991), the Royal Institute of Technology (1993–1994), MSRI (Winter, 1997), the Isaac Newton Institute (Winter, 2001), Mittag-Leffler Institute (Spring, 2005), and DIMACS (2005–2006). He was also a rotating Program Officer at the National Science Foundation (2007–2010). Sagan has published over 100 research papers. He has given over 300 talks in North America, Europe, Asia, and Australia. These have included keynote addresses at the Conference on Formal Power Series and Algebraic Combinatorics (2006) and the British Combinatorial Conference (2011). He has graduated 15 Ph.D. students. During his time at Michigan State University, he has won two awards for teaching excellence. Sagan has been an Editor-in-Chief for the Electronic Journal of Combinatorics since 2004. Books Mathematical Essays in Honor of Gian-Carlo Rota (co-edited with Richard P. Stanley), Birkhäuser, Cambridge, 1998, . The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions, 2nd edition, Springer-Verlag, New York, 2001, . Festschrift in Honor of Richard Stanley (special editor), Electronic Journal of Combinatorics, 2004–2006. Selected papers . Musical career Sagan plays music from the Scandinavian countries and the Balkans on fiddle and native instruments. These include the Swedish nyckelharpa, the Norwegian hardingfele, and the Bulgarian gadulka. In 1985 he and his then wife, Judy Barlas, founded the music and dance camp Scandinavian Week at Buffalo Gap (now known as Nordic Fiddles and Feet). He is currently a regular staff member at Northern Week at Ashokan run by Jay Ungar and Molly Mason. In 1994 he was awarded the Zorn Medal in Bronze for his playing in front of a jury of Swedish musicians. He has performed and given workshops in North America, Europe, and Australia. He plays Swedish music as a duo with Brad Battey and also with Lydia Ievens. His trio Veselba, with Nan Nelson and Chris Rietz, performs music from Bulgaria. 
Discography Andrea Hoag (fiddle, vocals) and Bruce Sagan (fiddle, hardingfele, nyckelharpa) with Larry Robinson (bouzouki), Spelstundarna, E. Thomas ETD 102, 1993. 20 tunes in Scandinavian style. Bruce Sagan (fiddle, hardingfele, nyckelharpa, gâdulka) with Brad Battey (fiddle), Nan Nelson (bass, tambura) and Chris Rietz (guitar, kaval), With Friends, 2002. 15 tunes in Scandinavian and Bulgarian styles. In a review of this album, the Swedish folkmusic magazine Spelmannen wrote that Sagan plays "som en inföding," i.e., "like a native." lydia ievins (fiddle, nyckelharpa) and Bruce Sagan (fiddle, nyckelharpa, hardingfele), Northlands, 2010. 18 tunes composed mainly by the performers in Scandinavian style. In a review of this album, Sing Out! wrote that it is "a delightful recording of two highly talented players." Brad Battey (fiddle, nyckelharpa) and Bruce Sagan (fiddle, nyckelharpa), Letter from America, 2020. 17 tunes composed by American musicians in Scandinavian style. External links Mathematical home page Musical home page References 20th-century American mathematicians 21st-century American mathematicians Michigan State University faculty Combinatorialists 1954 births Living people American folk musicians American multi-instrumentalists University of Michigan fellows California State University, East Bay alumni Massachusetts Institute of Technology School of Science alumni People from Berkeley, California Mathematicians from California
Bruce Sagan
Mathematics
1,086
1,545,816
https://en.wikipedia.org/wiki/Leslie%20matrix
The Leslie matrix is a discrete, age-structured model of population growth that is very popular in population ecology, named after Patrick H. Leslie. The Leslie matrix (also called the Leslie model) is one of the most well-known ways to describe the growth of populations (and their projected age distribution), in which a population is closed to migration, growing in an unlimited environment, and where only one sex, usually the female, is considered. The Leslie matrix is used in ecology to model the changes in a population of organisms over a period of time. In a Leslie model, the population is divided into groups based on age classes. A similar model which replaces age classes with ontogenetic stages is called a Lefkovitch matrix, whereby individuals can both remain in the same stage class or move on to the next one. At each time step, the population is represented by a vector with an element for each age class where each element indicates the number of individuals currently in that class. The Leslie matrix is a square matrix with the same number of rows and columns as the population vector has elements. The (i,j)th cell in the matrix indicates how many individuals will be in the age class i at the next time step for each individual in stage j. At each time step, the population vector is multiplied by the Leslie matrix to generate the population vector for the subsequent time step. To build a matrix, the following information must be known from the population: n_x, the count of individuals (n) of each age class x; s_x, the fraction of individuals that survives from age class x to age class x+1; and f_x, fecundity, the per capita average number of female offspring reaching n_1 born to a mother of age class x. More precisely, f_x can be viewed as the number of offspring m_{x+1} produced at the next age class, weighted by the probability s_x of reaching the next age class, so that f_x = s_x m_{x+1}. From the observations that n_1 at time t+1 is simply the sum of all offspring born since the previous time step, and that the organisms surviving to time t+1 are the organisms at time t surviving at probability s_x, one gets n_{1,t+1} = f_1 n_{1,t} + f_2 n_{2,t} + ... + f_ω n_{ω,t} and n_{x+1,t+1} = s_x n_{x,t}, where ω is the maximum age attainable in the population. This implies a matrix representation in which the Leslie matrix L carries the fecundities f_1, f_2, ..., f_ω in its first row, the survival probabilities s_1, s_2, ..., s_{ω-1} on its subdiagonal, and zeros everywhere else. The model can then be written as n_{t+1} = L n_t, where n_t is the population vector at time t and L is the Leslie matrix. The dominant eigenvalue of L, denoted λ, gives the population's asymptotic growth rate (growth rate at the stable age distribution). The corresponding eigenvector provides the stable age distribution, the proportion of individuals of each age within the population, which remains constant at this point of asymptotic growth barring changes to vital rates. Once the stable age distribution has been reached, a population undergoes exponential growth at rate λ. The characteristic polynomial of the matrix is given by the Euler–Lotka equation. The Leslie model is very similar to a discrete-time Markov chain. The main difference is that in a Markov model, one would have f_x + s_x = 1 for each x, while the Leslie model may have these sums greater or less than 1. Stable age structure This age-structured growth model suggests a steady-state, or stable, age-structure and growth rate. Regardless of the initial population size or age distribution, the population tends asymptotically to this age-structure and growth rate. It also returns to this state following perturbation. The Euler–Lotka equation provides a means of identifying the intrinsic growth rate. The stable age-structure is determined both by the growth rate and the survival function (i.e. the Leslie matrix). For example, a population with a large intrinsic growth rate will have a disproportionately "young" age-structure. A population with high mortality rates at all ages (i.e. low survival) will have a similar age-structure. Random Leslie model There is a generalization of the population growth rate to when a Leslie matrix has random elements which may be correlated. When characterizing the disorder, or uncertainties, in vital parameters, a perturbative formalism has to be used to deal with linear non-negative random matrix difference equations. Then the non-trivial, effective eigenvalue which defines the long-term asymptotic dynamics of the mean-value population state vector can be presented as the effective growth rate. This eigenvalue and the associated mean-value invariant state vector can be calculated from the smallest positive root of a secular polynomial and the residue of the mean-valued Green function. Exact and perturbative results can thus be analyzed for several models of disorder. References Further reading Population Population ecology Matrices
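A quick numerical sketch of the Leslie projection described above, with three age classes. The fecundities, survival probabilities and starting population are invented for illustration; any real study would estimate them from census data.

```python
import numpy as np

# Leslie-matrix projection for three age classes: fecundities on the first
# row, survival probabilities on the subdiagonal, zeros elsewhere.
# All vital rates below are made-up illustrative values.

f = np.array([0.0, 1.5, 1.0])   # fecundity f_x of age classes 1..3
s = np.array([0.5, 0.8])        # survival s_x from class 1->2 and 2->3

L = np.zeros((3, 3))
L[0, :] = f                     # reproduction feeds the first age class
L[1, 0] = s[0]
L[2, 1] = s[1]

n = np.array([100.0, 50.0, 20.0])   # initial population vector n_0
for _ in range(5):                  # project five time steps: n <- L n
    n = L @ n

eigvals, eigvecs = np.linalg.eig(L)
k = np.argmax(eigvals.real)             # dominant eigenvalue = lambda
lam = eigvals[k].real
stable = np.abs(eigvecs[:, k].real)
stable /= stable.sum()                  # normalise to a distribution

print("population after 5 steps:", n.round(1))
print(f"asymptotic growth rate lambda ≈ {lam:.3f}")
print("stable age distribution:", stable.round(3))
```

With these rates the dominant eigenvalue comes out a little above 1, so the projected population grows slowly while its age composition converges to the printed stable distribution, exactly the asymptotic behaviour the text describes.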
Leslie matrix
Mathematics
944
19,858
https://en.wikipedia.org/wiki/Model%20theory
In mathematical logic, model theory is the study of the relationship between formal theories (a collection of sentences in a formal language expressing statements about a mathematical structure), and their models (those structures in which the statements of the theory hold). The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other. As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954. Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory. Compared to other areas of mathematical logic such as proof theory, model theory is often less concerned with formal rigour and closer in spirit to classical mathematics. This has prompted the comment that "if proof theory is about the sacred, then model theory is about the profane". The applications of model theory to algebraic and Diophantine geometry reflect this proximity to classical mathematics, as they often involve an integration of algebraic and model-theoretic results and techniques. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature. The most prominent scholarly organization in the field of model theory is the Association for Symbolic Logic. Overview This page focuses on finitary first order model theory of infinite structures. The relative emphasis placed on the class of models of a theory as opposed to the class of definable sets within a model fluctuated in the history of the subject, and the two directions are summarised by the pithy characterisations from 1973 and 1997 respectively: model theory = universal algebra + logic where universal algebra stands for mathematical structures and logic for logical theories; and model theory = algebraic geometry − fields. where logical formulas are to definable sets what equations are to varieties over a field. Nonetheless, the interplay of classes of models and the sets definable in them has been crucial to the development of model theory throughout its history. For instance, while stability was originally introduced to classify theories by their numbers of models in a given cardinality, stability theory proved crucial to understanding the geometry of definable sets. Fundamental notions of first-order model theory First-order logic A first-order formula is built out of atomic formulas such as R(f(x,y),z) or y = x + 1 by means of the Boolean connectives ¬, ∧, ∨, → and prefixing of quantifiers ∀v or ∃v. A sentence is a formula in which each occurrence of a variable is in the scope of a corresponding quantifier. Examples for formulas are φ (or φ(x) to indicate that x is the unbound variable in φ) and ψ, defined as follows: φ = ∀u ∀v (∃w (x × w = u × v) → (∃w (x × w = u) ∨ ∃w (x × w = v))) ∧ x ≠ 0 ∧ x ≠ 1, ψ = ∀u ∀v ((u × v = x) → (u = x ∨ v = x)) ∧ x ≠ 0 ∧ x ≠ 1. (Note that the equality symbol has a double meaning here.) It is intuitively clear how to translate such formulas into mathematical meaning. In the semiring of natural numbers 𝒩, viewed as a structure with binary functions for addition and multiplication and constants for 0 and 1 of the natural numbers, an element n satisfies the formula φ if and only if n is a prime number. The formula ψ similarly defines irreducibility.
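Satisfaction of the primality formula can be checked mechanically, at least over a truncated domain. The sketch below brute-forces the quantifiers of φ over a finite bound B, so it only approximates Tarskian satisfaction in the infinite structure; the bound and the helper names are artificial assumptions introduced for the example.

```python
# Brute-force check of the primality-defining formula phi in (N, +, x, 0, 1).
# Quantifiers are truncated to a finite bound, so this is a sketch of
# satisfaction, not a decision procedure; B is an arbitrary assumption.

B = 30  # quantifier bound

def divides(x: int, y: int) -> bool:
    """Bounded version of 'exists w with x*w = y', i.e. x divides y."""
    return any(x * w == y for w in range(B * B))

def phi(x: int) -> bool:
    """x != 0, x != 1, and for all u, v: x | u*v implies x | u or x | v."""
    if x in (0, 1):
        return False
    return all(not divides(x, u * v) or divides(x, u) or divides(x, v)
               for u in range(B) for v in range(B))

print([n for n in range(20) if phi(n)])   # -> [2, 3, 5, 7, 11, 13, 17, 19]
```

The set the formula carves out of the (truncated) domain is exactly the primes, which previews the notion of a definable set discussed later in the article.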
Tarski gave a rigorous definition, sometimes called "Tarski's definition of truth", for the satisfaction relation ⊨, so that one easily proves: 𝒩 ⊨ φ(n) if and only if n is a prime number, and 𝒩 ⊨ ψ(n) if and only if n is irreducible. A set of sentences T is called a (first-order) theory, which takes the sentences in the set as its axioms. A theory is satisfiable if it has a model ℳ ⊨ T, i.e. a structure (of the appropriate signature) which satisfies all the sentences in the set T. A complete theory is a theory that contains every sentence or its negation. The complete theory of all sentences satisfied by a structure is also called the theory of that structure. It's a consequence of Gödel's completeness theorem (not to be confused with his incompleteness theorems) that a theory has a model if and only if it is consistent, i.e. no contradiction is proved by the theory. Therefore, model theorists often use "consistent" as a synonym for "satisfiable". Basic model-theoretic concepts A signature or language is a set of non-logical symbols such that each symbol is either a constant symbol, or a function or relation symbol with a specified arity. Note that in some literature, constant symbols are considered as function symbols with zero arity, and hence are omitted. A structure is a set together with interpretations of each of the symbols of the signature as relations and functions on that set (not to be confused with the formal notion of an "interpretation" of one structure in another). Example: A common signature for ordered rings is σ = (0, 1, +, ×, −, <), where 0 and 1 are 0-ary function symbols (also known as constant symbols), + and × are binary (= 2-ary) function symbols, − is a unary (= 1-ary) function symbol, and < is a binary relation symbol. Then, when these symbols are interpreted to correspond with their usual meaning on the rational numbers ℚ (so that e.g. + is a function from ℚ² to ℚ and < is a subset of ℚ²), one obtains a structure (ℚ, σ). A structure is said to model a set of first-order sentences T in the given language if each sentence in T is true in the structure with respect to the interpretation of the signature previously specified for it. (Again, not to be confused with the formal notion of an "interpretation" of one structure in another.) A model of T is a structure that models T. A substructure 𝒜 of a σ-structure ℬ is a subset of its domain, closed under all functions in its signature σ, which is regarded as a σ-structure by restricting all functions and relations in σ to the subset. This generalises the analogous concepts from algebra; for instance, a subgroup is a substructure in the signature with multiplication and inverse. A substructure is said to be elementary if for any first-order formula φ and any elements a1, ..., an of 𝒜, 𝒜 ⊨ φ(a1, ..., an) if and only if ℬ ⊨ φ(a1, ..., an). In particular, if φ is a sentence and 𝒜 an elementary substructure of ℬ, then 𝒜 ⊨ φ if and only if ℬ ⊨ φ. Thus, an elementary substructure is a model of a theory exactly when the superstructure is a model. Example: While the field of algebraic numbers is an elementary substructure of the field of complex numbers ℂ, the rational field ℚ is not, as we can express "There is a square root of 2" as a first-order sentence satisfied by ℂ but not by ℚ. An embedding of a σ-structure 𝒜 into another σ-structure ℬ is a map f: A → B between the domains which can be written as an isomorphism of 𝒜 with a substructure of ℬ. If it can be written as an isomorphism with an elementary substructure, it is called an elementary embedding. Every embedding is an injective homomorphism, but the converse holds only if the signature contains no relation symbols, such as in groups or fields.
A field or a vector space can be regarded as a (commutative) group by simply ignoring some of its structure. The corresponding notion in model theory is that of a reduct of a structure to a subset of the original signature. The opposite relation is called an expansion - e.g. the (additive) group of the rational numbers, regarded as a structure in the signature {+,0} can be expanded to a field with the signature {×,+,1,0} or to an ordered group with the signature {+,0,<}. Similarly, if σ' is a signature that extends another signature σ, then a complete σ'-theory can be restricted to σ by intersecting the set of its sentences with the set of σ-formulas. Conversely, a complete σ-theory can be regarded as a σ'-theory, and one can extend it (in more than one way) to a complete σ'-theory. The terms reduct and expansion are sometimes applied to this relation as well. Compactness and the Löwenheim-Skolem theorem The compactness theorem states that a set of sentences S is satisfiable if every finite subset of S is satisfiable. The analogous statement with consistent instead of satisfiable is trivial, since every proof can have only a finite number of antecedents used in the proof. The completeness theorem allows us to transfer this to satisfiability. However, there are also several direct (semantic) proofs of the compactness theorem. As a corollary (i.e., its contrapositive), the compactness theorem says that every unsatisfiable first-order theory has a finite unsatisfiable subset. This theorem is of central importance in model theory, where the words "by compactness" are commonplace. Another cornerstone of first-order model theory is the Löwenheim-Skolem theorem. According to the Löwenheim-Skolem Theorem, every infinite structure in a countable signature has a countable elementary substructure. Conversely, for any infinite cardinal κ every infinite structure in a countable signature that is of cardinality less than κ can be elementarily embedded in another structure of cardinality κ (there is a straightforward generalisation to uncountable signatures). In particular, the Löwenheim-Skolem Theorem implies that any theory in a countable signature with infinite models has a countable model as well as arbitrarily large models. In a certain sense made precise by Lindström's theorem, first-order logic is the most expressive logic for which both the Löwenheim–Skolem theorem and the compactness theorem hold. Definability Definable sets In model theory, definable sets are important objects of study. For instance, in 𝒩 the formula φ(x) from above defines the subset of prime numbers, while a formula such as ∃y (y + y = x) defines the subset of even numbers. In a similar way, formulas with n free variables define subsets of ℳⁿ, the n-th Cartesian power of the domain. For example, in a field, the formula y = x × x defines the curve of all points (x, y) such that y = x². Both of the definitions mentioned here are parameter-free, that is, the defining formulas don't mention any fixed domain elements. However, one can also consider definitions with parameters from the model. For instance, in ℝ, a formula such as y = a × x × x uses the parameter a from ℝ to define a curve. Eliminating quantifiers In general, definable sets without quantifiers are easy to describe, while definable sets involving possibly nested quantifiers can be much more complicated. This makes quantifier elimination a crucial tool for analysing definable sets: A theory T has quantifier elimination if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to a first-order formula ψ(x1, ..., xn) without quantifiers, i.e. ∀x1 ... ∀xn (φ(x1, ..., xn) ↔ ψ(x1, ..., xn)) holds in all models of T.
If the theory of a structure has quantifier elimination, every set definable in the structure is definable by a quantifier-free formula over the same parameters as the original definition. For example, the theory of algebraically closed fields in the signature σring = (×,+,−,0,1) has quantifier elimination. This means that in an algebraically closed field, every formula is equivalent to a Boolean combination of equations between polynomials.

If a theory does not have quantifier elimination, one can add additional symbols to its signature so that it does. Axiomatisability and quantifier elimination results for specific theories, especially in algebra, were among the early landmark results of model theory. But often instead of quantifier elimination a weaker property suffices: A theory T is called model-complete if every substructure of a model of T which is itself a model of T is an elementary substructure. There is a useful criterion for testing whether a substructure is an elementary substructure, called the Tarski–Vaught test. It follows from this criterion that a theory T is model-complete if and only if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to an existential first-order formula, i.e. a formula of the form $\exists v_1 \ldots \exists v_m \, \psi(x_1, \ldots, x_n, v_1, \ldots, v_m)$, where ψ is quantifier free. A theory that is not model-complete may have a model completion, which is a related model-complete theory that is not, in general, an extension of the original theory. A more general notion is that of a model companion.

Minimality

In every structure, every finite subset $\{a_1, \ldots, a_n\}$ is definable with parameters: simply use the formula $x = a_1 \vee \ldots \vee x = a_n$. Since we can negate this formula, every cofinite subset (which includes all but finitely many elements of the domain) is also always definable. This leads to the concept of a minimal structure. A structure $\mathcal{M}$ is called minimal if every subset $A \subseteq M$ definable with parameters from $M$ is either finite or cofinite. The corresponding concept at the level of theories is called strong minimality: A theory T is called strongly minimal if every model of T is minimal. A structure is called strongly minimal if the theory of that structure is strongly minimal. Equivalently, a structure is strongly minimal if every elementary extension is minimal.

Since the theory of algebraically closed fields has quantifier elimination, every definable subset of an algebraically closed field is definable by a quantifier-free formula in one variable. Quantifier-free formulas in one variable express Boolean combinations of polynomial equations in one variable, and since a nontrivial polynomial equation in one variable has only a finite number of solutions, the theory of algebraically closed fields is strongly minimal.

On the other hand, the field $\mathbb{R}$ of real numbers is not minimal: consider, for instance, the definable set $\varphi(x) \,=\, \exists y \, (y \times y = x)$. This defines the subset of non-negative real numbers, which is neither finite nor cofinite. One can in fact use $\varphi$ to define arbitrary intervals on the real number line. It turns out that these suffice to represent every definable subset of $\mathbb{R}$. This generalisation of minimality has been very useful in the model theory of ordered structures. A densely totally ordered structure $\mathcal{M}$ in a signature including a symbol for the order relation is called o-minimal if every subset $A \subseteq M$ definable with parameters from $M$ is a finite union of points and intervals.

Definable and interpretable structures

Particularly important are those definable sets that are also substructures, i.e.
contain all constants and are closed under function application. For instance, one can study the definable subgroups of a certain group. However, there is no need to limit oneself to substructures in the same signature. Since formulas with n free variables define subsets of $\mathcal{M}^n$, n-ary relations can also be definable. Functions are definable if the function graph is a definable relation, and constants $a \in \mathcal{M}$ are definable if there is a formula $\varphi(x)$ such that $a$ is the only element of $\mathcal{M}$ such that $\varphi(a)$ is true. In this way, one can study definable groups and fields in general structures, for instance, which has been important in geometric stability theory.

One can even go one step further, and move beyond immediate substructures. Given a mathematical structure, there are very often associated structures which can be constructed as a quotient of part of the original structure via an equivalence relation. An important example is a quotient group of a group. One might say that to understand the full structure one must understand these quotients. When the equivalence relation is definable, we can give the previous sentence a precise meaning. We say that these structures are interpretable. A key fact is that one can translate sentences from the language of the interpreted structures to the language of the original structure. Thus one can show that if a structure $\mathcal{M}$ interprets another whose theory is undecidable, then $\mathcal{M}$ itself is undecidable.

Types

Basic notions

For a sequence of elements $a_1, \ldots, a_n$ of a structure $\mathcal{M}$ and a subset A of $\mathcal{M}$, one can consider the set of all first-order formulas $\varphi(x_1, \ldots, x_n)$ with parameters in A that are satisfied by $a_1, \ldots, a_n$. This is called the complete (n-)type realised by $a_1, \ldots, a_n$ over A. If there is an automorphism of $\mathcal{M}$ that is constant on A and sends $a_1, \ldots, a_n$ to $b_1, \ldots, b_n$ respectively, then $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ realise the same complete type over A.

The real number line $\mathbb{R}$, viewed as a structure with only the order relation {<}, will serve as a running example in this section. Every element $a \in \mathbb{R}$ satisfies the same 1-type over the empty set. This is clear since any two real numbers a and b are connected by the order automorphism that shifts all numbers by b-a. The complete 2-type over the empty set realised by a pair of numbers $a_1, a_2$ depends on their order: either $a_1 < a_2$, $a_1 = a_2$ or $a_2 < a_1$. Over the subset $\mathbb{Z} \subseteq \mathbb{R}$ of integers, the 1-type of a non-integer real number a depends on its value rounded down to the nearest integer.

More generally, whenever $\mathcal{M}$ is a structure and A a subset of $\mathcal{M}$, a (partial) n-type over A is a set of formulas p with at most n free variables that are realised in an elementary extension $\mathcal{N}$ of $\mathcal{M}$. If p contains every such formula or its negation, then p is complete. The set of complete n-types over A is often written as $S_n^{\mathcal{M}}(A)$. If A is the empty set, then the type space only depends on the theory $T$ of $\mathcal{M}$. The notation $S_n(T)$ is commonly used for the set of types over the empty set consistent with $T$. If there is a single formula $\varphi$ such that the theory of $\mathcal{M}$ implies $\varphi \rightarrow \psi$ for every formula $\psi$ in p, then p is called isolated.

Since the real numbers $\mathbb{R}$ are Archimedean, there is no real number larger than every integer. However, a compactness argument shows that there is an elementary extension of the real number line in which there is an element larger than any integer. Therefore, the set of formulas $\{\, n < x : n \in \mathbb{Z} \,\}$ is a 1-type over $\mathbb{Z} \subseteq \mathbb{R}$ that is not realised in the real number line $\mathbb{R}$.

A subset of $\mathcal{M}^n$ that can be expressed as exactly those elements of $\mathcal{M}^n$ realising a certain type over A is called type-definable over A.
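In the running example, the complete 1-types over the integers can be listed explicitly; this is a standard computation added here for illustration, not part of the original text.

```latex
% Complete 1-types of (R, <) over the parameter set Z. Density of the
% order leaves exactly the following cases:
\[
\{x = n\}, \qquad
\{\, n < x,\; x < n+1 \,\} \quad (n \in \mathbb{Z}), \qquad
\{\, x < n : n \in \mathbb{Z} \,\}, \qquad
\{\, n < x : n \in \mathbb{Z} \,\}.
\]
% The first two families are realised in R (by the integer n itself, and
% by any real strictly between n and n+1), while the last two "infinite"
% types are realised only in a proper elementary extension.
```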
For an algebraic example, suppose $\mathcal{M}$ is an algebraically closed field. The theory of algebraically closed fields has quantifier elimination, which allows us to show that a type is determined exactly by the polynomial equations it contains. Thus the set of complete $n$-types over a subfield $A$ corresponds to the set of prime ideals of the polynomial ring $A[x_1, \ldots, x_n]$, and the type-definable sets are exactly the affine varieties.

Structures and types

While not every type is realised in every structure, every structure realises its isolated types. If the only types over the empty set that are realised in a structure are the isolated types, then the structure is called atomic. On the other hand, no structure realises every type over every parameter set; if one takes all of $\mathcal{M}$ as the parameter set, then every 1-type over $\mathcal{M}$ realised in $\mathcal{M}$ is isolated by a formula of the form $a = x$ for an $a \in \mathcal{M}$. However, any proper elementary extension of $\mathcal{M}$ contains an element that is not in $\mathcal{M}$. Therefore, a weaker notion has been introduced that captures the idea of a structure realising all types it could be expected to realise. A structure is called saturated if it realises every type over a parameter set $A \subset M$ that is of smaller cardinality than $\mathcal{M}$ itself.

While an automorphism that is constant on A will always preserve types over A, it is generally not true that any two sequences $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ that satisfy the same type over A can be mapped to each other by such an automorphism. A structure $\mathcal{M}$ in which this converse does hold for all A of smaller cardinality than $\mathcal{M}$ is called homogeneous.

The real number line is atomic in the language that contains only the order $<$, since all n-types over the empty set realised by $a_1, \ldots, a_n$ in $\mathbb{R}$ are isolated by the order relations between the $a_i$. It is not saturated, however, since it does not realise any 1-type over the countable set $\mathbb{Z}$ that implies $x$ to be larger than any integer. The rational number line $\mathbb{Q}$ is saturated, in contrast, since $\mathbb{Q}$ is itself countable and therefore only has to realise types over finite subsets to be saturated.

Stone spaces

The set of definable subsets of $\mathcal{M}^n$ over some parameters $A$ is a Boolean algebra. By Stone's representation theorem for Boolean algebras there is a natural dual topological space, which consists exactly of the complete $n$-types over $A$. The topology is generated by sets of the form $\{\, p : \varphi \in p \,\}$ for single formulas $\varphi$. The resulting space is called the Stone space of n-types over A. This topology explains some of the terminology used in model theory: The compactness theorem says that the Stone space is a compact topological space, and a type p is isolated if and only if p is an isolated point in the Stone topology.

While types in algebraically closed fields correspond to the spectrum of the polynomial ring, the topology on the type space is the constructible topology: a set of types is basic open iff it is of the form $\{\, p : f(x) = 0 \in p \,\}$ or of the form $\{\, p : f(x) \neq 0 \in p \,\}$. This is finer than the Zariski topology.

Constructing models

Realising and omitting types

Constructing models that realise certain types and do not realise others is an important task in model theory. Not realising a type is referred to as omitting it, and is generally possible by the (countable) omitting types theorem: Let $T$ be a theory in a countable signature and let $\Phi$ be a countable set of non-isolated types over the empty set. Then there is a model $\mathcal{M}$ of $T$ which omits every type in $\Phi$. This implies that if a theory in a countable signature has only countably many types over the empty set, then this theory has an atomic model.
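The algebraic example above gives a standard illustration of these notions (added here for concreteness, not taken from the original text):

```latex
% 1-types of ACF_0 over the prime field Q correspond to prime ideals of Q[x]:
\[
\operatorname{tp}(\sqrt{2}/\mathbb{Q}) \;\longleftrightarrow\; (x^2 - 2),
\qquad
\operatorname{tp}(\pi/\mathbb{Q}) \;\longleftrightarrow\; (0).
\]
% The algebraic types (nonzero prime ideals) are isolated, each by the
% vanishing of a minimal polynomial, while the transcendental type,
% corresponding to the zero ideal, is not. The algebraic closure of Q
% omits the transcendental type and is the atomic (prime) model of ACF_0.
```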
On the other hand, there is always an elementary extension in which any set of types over a fixed parameter set is realised: Let $\mathcal{M}$ be a structure and let $\Phi$ be a set of complete types over a given parameter set $A \subseteq M$. Then there is an elementary extension $\mathcal{N}$ of $\mathcal{M}$ which realises every type in $\Phi$. However, since the parameter set is fixed and there is no mention here of the cardinality of $\mathcal{N}$, this does not imply that every theory has a saturated model. In fact, whether every theory has a saturated model is independent of the Zermelo-Fraenkel axioms of set theory, and is true if the generalised continuum hypothesis holds.

Ultraproducts

Ultraproducts are used as a general technique for constructing models that realise certain types. An ultraproduct is obtained from the direct product of a set of structures over an index set $I$ by identifying those tuples that agree on almost all entries, where "almost all" is made precise by an ultrafilter $U$ on $I$. An ultraproduct of copies of the same structure is known as an ultrapower. The key to using ultraproducts in model theory is Łoś's theorem: Let $(\mathcal{M}_i)_{i \in I}$ be a set of σ-structures indexed by an index set $I$ and $U$ an ultrafilter on $I$. Then any σ-formula $\varphi$ is true in the ultraproduct of the $\mathcal{M}_i$ by $U$ if and only if the set of all $i$ for which $\mathcal{M}_i \models \varphi$ lies in $U$.

In particular, any ultraproduct of models of a theory is itself a model of that theory, and thus if two models have isomorphic ultrapowers, they are elementarily equivalent. The Keisler-Shelah theorem provides a converse: If $\mathcal{M}$ and $\mathcal{N}$ are elementarily equivalent, then there is a set $I$ and an ultrafilter $U$ on $I$ such that the ultrapowers of $\mathcal{M}$ and $\mathcal{N}$ by $U$ are isomorphic. Therefore, ultraproducts provide a way to talk about elementary equivalence that avoids mentioning first-order theories at all. Basic theorems of model theory such as the compactness theorem have alternative proofs using ultraproducts, and they can be used to construct saturated elementary extensions if they exist.

Categoricity

A theory was originally called categorical if it determines a structure up to isomorphism. It turns out that this definition is not useful, due to serious restrictions in the expressivity of first-order logic. The Löwenheim–Skolem theorem implies that if a theory T has an infinite model for some infinite cardinal number, then it has a model of size κ for any sufficiently large cardinal number κ. Since two models of different sizes cannot possibly be isomorphic, only finite structures can be described by a categorical theory.

However, the weaker notion of κ-categoricity for a cardinal κ has become a key concept in model theory. A theory T is called κ-categorical if any two models of T that are of cardinality κ are isomorphic. It turns out that the question of κ-categoricity depends critically on whether κ is bigger than the cardinality of the language (i.e. $\aleph_0 + |\sigma|$, where $|\sigma|$ is the cardinality of the signature). For finite or countable signatures this means that there is a fundamental difference between $\aleph_0$-categoricity and κ-categoricity for uncountable κ.

$\aleph_0$-categoricity

$\aleph_0$-categorical theories can be characterised by properties of their type space: For a complete first-order theory T in a finite or countable signature the following conditions are equivalent:

T is $\aleph_0$-categorical.
Every type in Sn(T) is isolated.
For every natural number n, Sn(T) is finite.
For every natural number n, the number of formulas φ(x1, ..., xn) in n free variables, up to equivalence modulo T, is finite.

The theory of $(\mathbb{Q}, <)$, which is also the theory of $(\mathbb{R}, <)$, is $\aleph_0$-categorical, as every n-type over the empty set is isolated by the pairwise order relations between the $x_i$. This means that every countable dense linear order is order-isomorphic to the rational number line.
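The finiteness of each $S_n(T)$ for dense linear orders can be checked by brute force: by quantifier elimination, a complete n-type over the empty set is determined by the order diagram (a weak ordering) of the variables. The following enumeration sketch was written for this article and is not part of it.

```python
# Counting the complete n-types of DLO (dense linear orders without
# endpoints) over the empty set: each type is determined by the order
# diagram of x_1..x_n, i.e. which of x_i < x_j, x_i = x_j, x_i > x_j hold.

from itertools import product

def n_types(n):
    """Number of complete n-types of DLO over the empty set."""
    seen = set()
    # n sample values suffice: a diagram has at most n blocks of equal values.
    for values in product(range(n), repeat=n):
        diagram = tuple(
            (values[i] > values[j]) - (values[i] < values[j])  # -1, 0 or 1
            for i in range(n) for j in range(n)
        )
        seen.add(diagram)
    return len(seen)

print([n_types(n) for n in range(1, 5)])
# -> [1, 3, 13, 75], the ordered Bell numbers; note n_types(2) == 3,
# matching the three 2-types (a1 < a2, a1 = a2, a2 < a1) listed earlier.
```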
On the other hand, the theories of $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$ as fields are not $\aleph_0$-categorical. This follows from the fact that in all those fields, any of the infinitely many natural numbers can be defined by a formula of the form $x = 1 + \cdots + 1$.

$\aleph_0$-categorical theories and their countable models also have strong ties with oligomorphic groups: A complete first-order theory T in a finite or countable signature is $\aleph_0$-categorical if and only if the automorphism group of its countable model is oligomorphic. The equivalent characterisations of this subsection, due independently to Engeler, Ryll-Nardzewski and Svenonius, are sometimes referred to as the Ryll-Nardzewski theorem. In combinatorial signatures, a common source of $\aleph_0$-categorical theories are Fraïssé limits, which are obtained as the limit of amalgamating all possible configurations of a class of finite relational structures.

Uncountable categoricity

Michael Morley showed in 1963 that there is only one notion of uncountable categoricity for theories in countable languages.

Morley's categoricity theorem: If a first-order theory T in a finite or countable signature is κ-categorical for some uncountable cardinal κ, then T is κ-categorical for all uncountable cardinals κ.

Morley's proof revealed deep connections between uncountable categoricity and the internal structure of the models, which became the starting point of classification theory and stability theory. Uncountably categorical theories are from many points of view the most well-behaved theories. In particular, complete strongly minimal theories are uncountably categorical. This shows that the theory of algebraically closed fields of a given characteristic is uncountably categorical, with the transcendence degree of the field determining its isomorphism type. A theory that is both $\aleph_0$-categorical and uncountably categorical is called totally categorical.

Stability theory

A key factor in the structure of the class of models of a first-order theory is its place in the stability hierarchy. A complete theory T is called λ-stable for a cardinal λ if for any model $\mathcal{M}$ of T and any parameter set $A \subseteq M$ of cardinality not exceeding λ, there are at most λ complete T-types over A. A theory is called stable if it is λ-stable for some infinite cardinal λ. Traditionally, theories that are $\aleph_0$-stable are called $\omega$-stable.

The stability hierarchy

A fundamental result in stability theory is the stability spectrum theorem, which implies that every complete theory T in a countable signature falls in one of the following classes:

There are no cardinals λ such that T is λ-stable.
T is λ-stable if and only if $\lambda^{\aleph_0} = \lambda$ (see Cardinal exponentiation for an explanation of $\lambda^{\aleph_0}$).
T is λ-stable for any $\lambda \geq 2^{\aleph_0}$ (where $2^{\aleph_0}$ is the cardinality of the continuum).

A theory of the first type is called unstable, a theory of the second type is called strictly stable and a theory of the third type is called superstable. Furthermore, if a theory is $\omega$-stable, it is stable in every infinite cardinal, so $\omega$-stability is stronger than superstability. Many constructions in model theory are easier when restricted to stable theories; for instance, every model of a stable theory has a saturated elementary extension, regardless of whether the generalised continuum hypothesis is true.

Shelah's original motivation for studying stable theories was to decide how many models a countable theory has of any uncountable cardinality. If a theory is uncountably categorical, then it is $\omega$-stable. More generally, the Main gap theorem implies that if there is an uncountable cardinal λ such that a theory T has less than $2^{\lambda}$ models of cardinality λ, then T is superstable.
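Two standard reference points on the stability hierarchy make the definition concrete (textbook facts, added here for illustration):

```latex
% (1) ACF is omega-stable: over a countable subfield A, complete 1-types
%     correspond to prime ideals of A[x], so
\[
|S_1(A)| \;\le\; |A[x]| \;=\; \aleph_0 .
\]
% (2) DLO is not omega-stable: the cuts of the countable dense order Q
%     already give
\[
|S_1(\mathbb{Q})| \;=\; 2^{\aleph_0} \;>\; \aleph_0 ,
\]
% and, since DLO has the order property, it is in fact unstable:
% lambda-stability fails for every infinite cardinal lambda.
```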
Geometric stability theory

The stability hierarchy is also crucial for analysing the geometry of definable sets within a model of a theory. In $\omega$-stable theories, Morley rank is an important dimension notion for definable sets S within a model. It is defined by transfinite induction:

The Morley rank is at least 0 if S is non-empty.
For α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M, the set S has infinitely many disjoint definable subsets, each of rank at least α − 1.
For α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α.

A theory T in which every definable set has well-defined Morley rank is called totally transcendental; if T is countable, then T is totally transcendental if and only if T is $\omega$-stable. Morley rank can be extended to types by setting the Morley rank of a type to be the minimum of the Morley ranks of the formulas in the type. Thus, one can also speak of the Morley rank of an element a over a parameter set A, defined as the Morley rank of the type of a over A. There are also analogues of Morley rank which are well-defined if and only if a theory is superstable (U-rank) or merely stable (Shelah's $\infty$-rank). Those dimension notions can be used to define notions of independence and of generic extensions.

More recently, stability has been decomposed into simplicity and "not the independence property" (NIP). Simple theories are those theories in which a well-behaved notion of independence can be defined, while NIP theories generalise o-minimal structures. They are related to stability since a theory is stable if and only if it is NIP and simple, and various aspects of stability theory have been generalised to theories in one of these classes.

Non-elementary model theory

Model-theoretic results have been generalised beyond elementary classes, that is, classes axiomatisable by a first-order theory. Model theory in higher-order logics or infinitary logics is hampered by the fact that completeness and compactness do not in general hold for these logics. This is made concrete by Lindström's theorem, stating roughly that first-order logic is essentially the strongest logic in which both the Löwenheim-Skolem theorems and compactness hold. However, model-theoretic techniques have been developed extensively for these logics too. It turns out, however, that much of the model theory of more expressive logical languages is independent of Zermelo-Fraenkel set theory.

More recently, alongside the shift in focus to complete stable and categorical theories, there has been work on classes of models defined semantically rather than axiomatised by a logical theory. One example is homogeneous model theory, which studies the class of substructures of arbitrarily large homogeneous models. Fundamental results of stability theory and geometric stability theory generalise to this setting. As a generalisation of strongly minimal theories, quasiminimally excellent classes are those in which every definable set is either countable or co-countable. They are key to the model theory of the complex exponential function. The most general semantic framework in which stability is studied are abstract elementary classes, which are defined by a strong substructure relation generalising that of an elementary substructure.
Even though its definition is purely semantic, every abstract elementary class can be presented as the models of a first-order theory which omit certain types. Generalising stability-theoretic notions to abstract elementary classes is an ongoing research program.

Selected applications

Among the early successes of model theory are Tarski's proofs of quantifier elimination for various algebraically interesting classes, such as the real closed fields, Boolean algebras and algebraically closed fields of a given characteristic. Quantifier elimination allowed Tarski to show that the first-order theories of real-closed and algebraically closed fields as well as the first-order theory of Boolean algebras are decidable, classify the Boolean algebras up to elementary equivalence and show that the theories of real-closed fields and of algebraically closed fields of a given characteristic are complete. Furthermore, quantifier elimination provided a precise description of definable relations on algebraically closed fields as algebraic varieties and of the definable relations on real-closed fields as semialgebraic sets.

In the 1960s, the introduction of the ultraproduct construction led to new applications in algebra. This includes Ax's work on pseudofinite fields, proving that the theory of finite fields is decidable, and Ax and Kochen's proof of a special case of Artin's conjecture on diophantine equations, the Ax-Kochen theorem. The ultraproduct construction also led to Abraham Robinson's development of nonstandard analysis, which aims to provide a rigorous calculus of infinitesimals.

More recently, the connection between stability and the geometry of definable sets led to several applications from algebraic and diophantine geometry, including Ehud Hrushovski's 1996 proof of the geometric Mordell-Lang conjecture in all characteristics. In 2001, similar methods were used to prove a generalisation of the Manin-Mumford conjecture. In 2011, Jonathan Pila applied techniques around o-minimality to prove the André-Oort conjecture for products of modular curves.

In a separate strand of inquiries that also grew around stable theories, Laskowski showed in 1992 that NIP theories describe exactly those definable classes that are PAC-learnable in machine learning theory. This has led to several interactions between these separate areas. In 2018, the correspondence was extended as Chase and Freitag showed that stable theories correspond to online learnable classes.
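The NIP–learnability correspondence rests on the fact that a formula is NIP exactly when the family of sets it defines has finite VC dimension. The brute-force check below, written for this article, computes the VC dimension of threshold sets (half-lines), which are definable in the o-minimal structure (R, <):

```python
# Brute-force VC dimension of a family of subsets of a finite sample
# space. Half-lines {x : x <= t} have VC dimension 1, reflecting the
# fact that the ordering formula x <= y is NIP.

from itertools import combinations

def shatters(family, points):
    """Does `family` cut out every subset of `points`?"""
    traces = {tuple(p in s for p in points) for s in family}
    return len(traces) == 2 ** len(points)

def vc_dimension(family, space):
    """Largest size of a subset of `space` shattered by `family`."""
    dim = 0
    for k in range(1, len(space) + 1):
        if any(shatters(family, pts) for pts in combinations(space, k)):
            dim = k
    return dim

space = range(10)
half_lines = [frozenset(x for x in space if x <= t) for t in space]
print(vc_dimension(half_lines, space))
# -> 1: no two-point set {p, q} with p < q is shattered, since no
# half-line contains q without also containing p.
```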
History

Model theory as a subject has existed since approximately the middle of the 20th century, and the name was coined by Alfred Tarski, a member of the Lwów–Warsaw school, in 1954. However, some earlier research, especially in mathematical logic, is often regarded as being of a model-theoretical nature in retrospect. The first significant result in what is now model theory was a special case of the downward Löwenheim–Skolem theorem, published by Leopold Löwenheim in 1915. The compactness theorem was implicit in work by Thoralf Skolem, but it was first published in 1930, as a lemma in Kurt Gödel's proof of his completeness theorem. The Löwenheim–Skolem theorem and the compactness theorem received their respective general forms in 1936 and 1941 from Anatoly Maltsev. The development of model theory as an independent discipline was brought on by Alfred Tarski during the interbellum. Tarski's work included logical consequence, deductive systems, the algebra of logic, the theory of definability, and the semantic definition of truth, among other topics. His semantic methods culminated in the model theory he and a number of his Berkeley students developed in the 1950s and '60s.

In the further history of the discipline, different strands began to emerge, and the focus of the subject shifted. In the 1960s, techniques around ultraproducts became a popular tool in model theory. At the same time, researchers such as James Ax were investigating the first-order model theory of various algebraic classes, and others such as H. Jerome Keisler were extending the concepts and results of first-order model theory to other logical systems. Then, inspired by Morley's problem, Shelah developed stability theory. His work around stability changed the complexion of model theory, giving rise to a whole new class of concepts. This is known as the paradigm shift. Over the next decades, it became clear that the resulting stability hierarchy is closely connected to the geometry of sets that are definable in those models; this gave rise to the subdiscipline now known as geometric stability theory. An example of an influential proof from geometric model theory is Hrushovski's proof of the Mordell–Lang conjecture for function fields.

Connections to related branches of mathematical logic

Finite model theory

Finite model theory, which concentrates on finite structures, diverges significantly from the study of infinite structures in both the problems studied and the techniques used. In particular, many central results of classical model theory fail when restricted to finite structures. This includes the compactness theorem, Gödel's completeness theorem, and the method of ultraproducts for first-order logic. At the interface of finite and infinite model theory are algorithmic or computable model theory and the study of 0-1 laws, where the infinite models of a generic theory of a class of structures provide information on the distribution of finite models. Prominent application areas of finite model theory are descriptive complexity theory, database theory and formal language theory.

Set theory

Any set theory (which is expressed in a countable language), if it is consistent, has a countable model; this is known as Skolem's paradox, since there are sentences in set theory which postulate the existence of uncountable sets and yet these sentences are true in our countable model. In particular, the proof of the independence of the continuum hypothesis requires considering sets in models which appear to be uncountable when viewed from within the model, but are countable to someone outside the model.

The model-theoretic viewpoint has been useful in set theory; for example in Kurt Gödel's work on the constructible universe, which, along with the method of forcing developed by Paul Cohen, can be used to prove the (again philosophically interesting) independence of the axiom of choice and the continuum hypothesis from the other axioms of set theory.

In the other direction, model theory is itself formalised within Zermelo-Fraenkel set theory. For instance, the development of the fundamentals of model theory (such as the compactness theorem) relies on the axiom of choice, and is in fact equivalent over Zermelo-Fraenkel set theory without choice to the Boolean prime ideal theorem. Other results in model theory depend on set-theoretic axioms beyond the standard ZFC framework. For example, if the Continuum Hypothesis holds then every countable model has an ultrapower which is saturated (in its own cardinality).
Similarly, if the Generalized Continuum Hypothesis holds then every model has a saturated elementary extension. Neither of these results is provable in ZFC alone. Finally, some questions arising from model theory (such as compactness for infinitary logics) have been shown to be equivalent to large cardinal axioms.

See also

Abstract model theory
Algebraic theory
Axiomatizable class
Compactness theorem
Descriptive complexity
Elementary equivalence
First-order theories
Hyperreal number
Institutional model theory
Kripke semantics
Löwenheim–Skolem theorem
Model-theoretic grammar
Proof theory
Saturated model
Skolem normal form

Notes

References

Canonical textbooks

Other textbooks

Free online texts

Hodges, Wilfrid, Model theory. The Stanford Encyclopedia of Philosophy, E. Zalta (ed.).
Hodges, Wilfrid, First-order Model theory. The Stanford Encyclopedia of Philosophy, E. Zalta (ed.).
Simmons, Harold (2004), An introduction to Good old fashioned model theory. Notes of an introductory course for postgraduates (with exercises).

Model theory Mathematical proofs
Model theory
Mathematics
8,370
39,108,743
https://en.wikipedia.org/wiki/TecTile
TecTiles are a near field communication (NFC) application, developed by Samsung, for use with mobile smartphone devices. Each TecTile is a low-cost self-adhesive sticker with an embedded NFC Tag. They are programmed before use, which can be done simply by the user, using a downloadable Android app. When an NFC-capable phone is placed or 'tapped' on a Tag, the programmed action is undertaken. This could cause a website to be displayed, the phone switched to silent mode, or many other possible actions.

NFC Tags are an application of RFID technology. Unlike most RFID, which makes an effort to give a long reading range, NFC deliberately limits this range to only a few inches, or to almost touching the phone to the Tag, so that Tags have no effect on a phone unless there is a clear user action to 'trigger' the Tag. Although phones are usually touched to Tags, this does not require any 'docking' or galvanic contact with the Tag, so they are still considered to be a non-contact technology. Although NFC Tags can be used with many smartphones, TecTiles gained much prominence in late 2012 with the launch of the Galaxy SIII.

Applications

Some applications are intended for customising the behaviour of a user's own phone according to a location, e.g. a quiet mode when placed on a bedside table; others are intended for public use, e.g. publicising web content about a location. This programming is carried out entirely on the Tag. Subject to security settings, any compatible phone would have the same response when tapped on the Tag. When the Tag's response is a Facebook 'Like' or similar, this is carried out under the phone user's credentials (such as a Facebook identity), rather than the Tag's identity. Samsung groups Tags' functions under four headings: Settings, Phone, Web and Social. A handful of examples:

Settings
Enter 'quiet' or 'in-car' mode
Set an alarm clock
Launch an app
Join a WiFi network. This could be used for giving convenient access to coffee shop networks.
Show a message

Phone
Make a call
Send a text
Share a contact vCard

Web
Open a web page
Show a location on a mapping service, such as Google Maps
Check-in to Foursquare, or other location-based services.

Social
Update Facebook status with a location
'Like' on Facebook
Send a tweet
Follow a Twitter user

Tags may also be pre-programmed and distributed to users. Such a Tag could be set to take the user to a manufacturer's service support page and sent out stuck to washing machines or other domestic whitegoods. Factory-prepared Tags can also be printed with logos, or moulded into forms apart from stickers, such as key fobs or wristbands.

Lifespan

The re-programmability of a Tag is claimed at over 100,000 programming cycles. A Tag placed on a doorway or noticeboard may be re-programmed in situ and could thus have a long life (e.g. many conferences, meetings or events). Tags may be locked after programming, to avoid unauthorized reprogramming. Locked tags may be unlocked only by the same phone that locked them. The duration of a locked Tag's relevance will be the main constraint on Tag lifetime if unlocking is not possible. The lifespan of a Tag is also likely to be limited by physical factors such as the glue adhesion, or the difficulty of peeling them from the glue.

Compatibility

The TecTile app is not installed by default. If a Tag is read before it is installed, the user is directed to the app download site. Using a Samsung TecTile NFC tag requires a device with the MIFARE Classic chipset. This chipset is based on NXP's NFC controller, which is outside the NFC Forum's standard. Using a TecTile thus requires the NXP chipset.
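The "Tag encoding" section below explains that all of a TecTile's functions ride on an ordinary http URL whose query string carries the requested action. The following is a rough sketch of that round trip; the base URL and the parameter names ("action", "payload") are hypothetical stand-ins, not Samsung's actual scheme.

```python
# Sketch of the encode/decode pattern described under "Tag encoding":
# a tag action packed into the query string of an app-download URL.
# Base URL and parameter names are invented for this illustration.

from urllib.parse import urlencode, urlparse, parse_qs

BASE = "https://example.com/tectile-app"  # hypothetical download page

def encode_tag(action, payload):
    """Build the URL that would be written to the NFC tag."""
    return BASE + "?" + urlencode({"action": action, "payload": payload})

def decode_tag(url):
    """What an installed app would do instead of opening the browser."""
    q = parse_qs(urlparse(url).query)
    return q["action"][0], q["payload"][0]

url = encode_tag("wifi_join", "ssid=CafeNet")
print(url)               # tag contents: a plain https URL, browser-safe
print(decode_tag(url))   # -> ('wifi_join', 'ssid=CafeNet')
```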
The NXP chipset is found in many Android phones. Recently, however, some Android phone manufacturers have chosen to drop TecTile support; notably Samsung's latest flagship phone, the Galaxy S4, and Google's Nexus 4. TecTiles also do not work with BlackBerry and Windows NFC phones. The new version of TecTile, called TecTile 2, has improved compatibility, but currently the Samsung Galaxy S4 is the only device that comes with native support for TecTile 2. NFC Tags that do comply with the NFC Forum Type 1 or Type 2 compatibility protocols are much more widely compatible than the MIFARE-dependent Samsung TecTile, and are also widely available. Popular standards-compliant NFC Tags are the NTAG213 (137 bytes of usable memory) and the Topaz 512 (480 bytes of usable memory).

Tag encoding

The need for the installed app is one of the drawbacks to TecTile and to NFC Tags in general. The basic NFC Tag standards support Tags carrying URLs, where the scheme or protocol (e.g. the http:// prefix) may be either http (for web addresses), tel for telephony, or an anonymous data scheme. Although support for the http and tel schemes may be assumed in a basic handset, support for the others will not be available unless an app has been installed and registered to handle them. In general, NFC Tags (in the non-TecTile sense) are only useful for web addresses and telephony.

To provide features beyond this, Samsung offers the TecTile app. This could have used any scheme on a tag, or even invented a whole new scheme. When installed, such an app would register itself to handle these new schemes. However, the app is not part of the default install for a handset, even a Samsung one. To allow users to install the app automatically on first encountering a TecTile, all the TecTile's sophisticated and phone-specific features are still provided through the http scheme. The basic URL is that for initially downloading the app; the details of the TecTile operation are encoded as URL parameters within the query string, in addition to this. When reading a Tag, one of two things happens:

On first reading a Tag, without the app installed, the Tag's http scheme takes the handset user to the app download site.
On reading a Tag with the app installed, the app recognises the download URL and suppresses the handset's usual web browsing behaviour. It then uses the query string embedded within the URL to instead activate the TecTile function requested.

This convoluted behaviour was chosen to make the app effectively self-installing for naive users. Why the app was not supplied as default is unknown. The downsides of this design choice, though, are that the URLs required to activate TecTile functions are relatively long, meaning that non-TecTile NFC Tags with limited memory size (137 bytes) cannot generally be used for functions other than web addresses. Additionally, the lack of a non-proprietary approach to these more capable functions limits the development of NFC Tags as a general technique across all such handsets, rather than just Samsung TecTiles.

Similar tag technologies

iButton, an early single-contact based system
Touchatag, an RFID-based system
QR code, optically-read codes

Notes

References

Near-field communication Mobile telecommunications Wireless
TecTile
Technology,Engineering
1,518
43,341,230
https://en.wikipedia.org/wiki/Bullet%20Group
The Bullet Group (SL2S J08544-0121) is a newly merging group of galaxies: two galaxy groups that recently underwent a high-speed collision and are combining into a single larger group. The group exhibits separation between its dark matter and baryonic matter components: the galaxies occur in two clumps, while the gas has expanded into a billowing cloud encompassing both clumps. As of 2014, it is one of the few galaxy clusters known to show separation between the dark matter and baryonic matter components. The group is named after the Bullet Cluster, a similar merging system; the Bullet Group is a smaller-scale analogue, being a merger of groups instead of clusters. The bimodal distribution of galaxies was found at discovery in 2008. The galaxy group is a gravitational lens and strongly lenses a more distant galaxy behind it, at z ≈ 1.2.

Characteristics

As of 2014, the group is the smallest-mass object known to exhibit separation between its dark matter and baryonic matter components. The galaxy group is dominated by one elliptical galaxy, situated in one of the two concentrations, while the other node has two large bright galaxies, which do not dominate the group. The group has an apparent radius of 200 arcseconds, and a virial radius of 1 megaparsec.

See also

Musket Ball Cluster

References

Galaxy clusters Dark matter Hydra (constellation)
Bullet Group
Physics,Astronomy
287
30,788,710
https://en.wikipedia.org/wiki/BARON
BARON is a computational system for solving non-convex optimization problems to global optimality. Purely continuous, purely integer, and mixed-integer nonlinear problems can be solved by the solver. Linear programming (LP), nonlinear programming (NLP), mixed integer programming (MIP), and mixed integer nonlinear programming (MINLP) are supported. In a comparison of different solvers, BARON solved the most benchmark problems and required the least amount of time per problem. BARON is available under the AIMMS, AMPL, GAMS, JuMP, MATLAB, Pyomo, and YALMIP modeling environments on a variety of platforms. The GAMS/BARON solver is also available on the NEOS Server. The development of the BARON algorithms and software has been recognized by the 2004 INFORMS Computing Society Prize and the 2006 Beale-Orchard-Hays Prize for excellence in computational mathematical programming from the Mathematical Optimization Society. BARON's inventor, Nick Sahinidis, was inducted into the National Academy of Engineering in October 2022 for his contributions to science and engineering. References External links The Optimization Firm Homepage Numerical software Mathematical optimization software
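As a concrete illustration of the modelling-environment support listed above, here is a minimal sketch of calling BARON through Pyomo. It assumes a licensed BARON binary is installed where Pyomo's "baron" plugin can find it; the small nonconvex model is invented for the example and is not from any BARON documentation.

```python
# Minimal sketch: solving a small nonconvex NLP to global optimality
# with BARON via Pyomo. Assumes a local BARON installation on the PATH.

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(-4, 4))
m.y = pyo.Var(bounds=(-4, 4))
# A nonconvex objective with multiple local minima:
m.obj = pyo.Objective(expr=(m.x**2 - 4)**2 + (m.y + 1)**2,
                      sense=pyo.minimize)
m.c = pyo.Constraint(expr=m.x + m.y >= -2)

results = pyo.SolverFactory("baron").solve(m, tee=False)
print(pyo.value(m.x), pyo.value(m.y), pyo.value(m.obj))
# A global solver should report an optimum with objective value 0,
# e.g. x = 2, y = -1 (x = -2, y = -1 violates the constraint).
```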
BARON
Mathematics
230
1,789,812
https://en.wikipedia.org/wiki/Bounded%20set%20%28topological%20vector%20space%29
In functional analysis and related areas of mathematics, a set in a topological vector space is called bounded or von Neumann bounded, if every neighborhood of the zero vector can be inflated to include the set. A set that is not bounded is called unbounded. Bounded sets are a natural way to define locally convex polar topologies on the vector spaces in a dual pair, as the polar set of a bounded set is an absolutely convex and absorbing set. The concept was first introduced by John von Neumann and Andrey Kolmogorov in 1935.

Definition

Suppose $X$ is a topological vector space (TVS) over a field $\mathbb{K}$. A subset $B$ of $X$ is called von Neumann bounded or just bounded in $X$ if any of the following equivalent conditions are satisfied:

1. Definition: For every neighborhood $V$ of the origin there exists a real $r > 0$ such that $B \subseteq sV$ for all scalars $s$ satisfying $|s| \geq r$. This was the definition introduced by John von Neumann in 1935.
2. $B$ is absorbed by every neighborhood of the origin.
3. For every neighborhood $V$ of the origin there exists a scalar $s$ such that $B \subseteq sV$.
4. For every neighborhood $V$ of the origin there exists a real $r > 0$ such that $sB \subseteq V$ for all scalars $s$ satisfying $|s| \leq r$.
5. For every neighborhood $V$ of the origin there exists a real $r > 0$ such that $tB \subseteq V$ for all real $0 < t \leq r$.
6. Any one of statements (1) through (5) above but with the word "neighborhood" replaced by any of the following: "balanced neighborhood," "open balanced neighborhood," "closed balanced neighborhood," "open neighborhood," "closed neighborhood". e.g. Statement (2) may become: $B$ is bounded if and only if $B$ is absorbed by every balanced neighborhood of the origin. If $X$ is locally convex then the adjective "convex" may also be added to any of these 5 replacements.
7. For every sequence of scalars $(s_i)_{i=1}^{\infty}$ that converges to $0$ and every sequence $(b_i)_{i=1}^{\infty}$ in $B$, the sequence $(s_i b_i)_{i=1}^{\infty}$ converges to $0$ in $X$. This was the definition of "bounded" that Andrey Kolmogorov used in 1934, which is the same as the definition introduced by Stanisław Mazur and Władysław Orlicz in 1933 for metrizable TVS. Kolmogorov used this definition to prove that a TVS is seminormable if and only if it has a bounded convex neighborhood of the origin.
8. For every sequence $(b_i)_{i=1}^{\infty}$ in $B$, the sequence $\bigl(\tfrac{1}{i} b_i\bigr)_{i=1}^{\infty}$ converges to $0$ in $X$.
9. Every countable subset of $B$ is bounded (according to any defining condition other than this one).

If $\mathcal{B}$ is a neighborhood basis for $X$ at the origin then this list may be extended to include:

10. Any one of statements (1) through (5) above but with the neighborhoods limited to those belonging to $\mathcal{B}$. e.g. Statement (3) may become: For every $V \in \mathcal{B}$ there exists a scalar $s$ such that $B \subseteq sV$.

If $X$ is a locally convex space whose topology is defined by a family $\mathcal{P}$ of continuous seminorms, then this list may be extended to include:

11. $p(B)$ is bounded for all $p \in \mathcal{P}$.
12. There exists a sequence of non-zero scalars $(s_i)_{i=1}^{\infty}$ such that for every sequence $(b_i)_{i=1}^{\infty}$ in $B$, the sequence $(s_i b_i)_{i=1}^{\infty}$ is bounded in $X$ (according to any defining condition other than this one).
13. For all $p \in \mathcal{P}$, $B$ is bounded (according to any defining condition other than this one) in the seminormed space $(X, p)$.
14. $B$ is weakly bounded, i.e. every continuous linear functional is bounded on $B$.

If $X$ is a normed space with norm $\|\cdot\|$ (or more generally, if it is a seminormed space and $\|\cdot\|$ is merely a seminorm), then this list may be extended to include:

15. $B$ is a norm bounded subset of $X$. By definition, this means that there exists a real number $r > 0$ such that $\|b\| \leq r$ for all $b \in B$. Thus, if $L : X \to Y$ is a linear map between two normed (or seminormed) spaces and if $B$ is the closed (alternatively, open) unit ball in $X$ centered at the origin, then $L$ is a bounded linear operator (which recall means that its operator norm is finite) if and only if the image $L(B)$ of this ball under $L$ is a norm bounded subset of $Y$.
16. $B$ is a subset of some (open or closed) ball.
This ball need not be centered at the origin, but its radius must (as usual) be positive and finite.

If $B$ is a vector subspace of the TVS $X$ then this list may be extended to include:

17. $B$ is contained in the closure of $\{0\}$. In other words, a vector subspace of $X$ is bounded if and only if it is a subset of (the vector space) $\operatorname{cl}_X \{0\}$. Recall that $X$ is a Hausdorff space if and only if $\{0\}$ is closed in $X$. So the only bounded vector subspace of a Hausdorff TVS is $\{0\}$.

A subset that is not bounded is called unbounded.

Bornology and fundamental systems of bounded sets

The collection of all bounded sets on a topological vector space $X$ is called the von Neumann bornology or the (canonical) bornology of $X$. A base or fundamental system of bounded sets of $X$ is a set $\mathcal{B}$ of bounded subsets of $X$ such that every bounded subset of $X$ is a subset of some $B \in \mathcal{B}$. The set of all bounded subsets of $X$ trivially forms a fundamental system of bounded sets of $X$.

Examples

In any locally convex TVS, the set of closed and bounded disks is a base of bounded sets.

Examples and sufficient conditions

Unless indicated otherwise, a topological vector space (TVS) need not be Hausdorff nor locally convex. Finite sets are bounded. Every totally bounded subset of a TVS is bounded. Every relatively compact set in a topological vector space is bounded. If the space is equipped with the weak topology the converse is also true. The set of points of a Cauchy sequence is bounded; the set of points of a Cauchy net need not be bounded. The closure of the origin (referring to the closure of the set $\{0\}$) is always a bounded closed vector subspace. This set is the unique largest (with respect to set inclusion) bounded vector subspace of $X$. In particular, if $B$ is a bounded subset of $X$ then so is $B + \operatorname{cl}_X \{0\}$.

Unbounded sets

A set that is not bounded is said to be unbounded. Any vector subspace of a TVS that is not contained in the closure of $\{0\}$ is unbounded. There exists a Fréchet space $X$ having a bounded subset $B$ and also a dense vector subspace $M$ such that $B$ is not contained in the closure (in $X$) of any bounded subset of $M$.

Stability properties

In any TVS, finite unions, finite Minkowski sums, scalar multiples, translations, subsets, closures, interiors, and balanced hulls of bounded sets are again bounded. In any locally convex TVS, the convex hull (also called the convex envelope) of a bounded set is again bounded. However, this may be false if the space is not locally convex, as the (non-locally convex) Lp spaces for $0 < p < 1$ have no nontrivial open convex subsets. The image of a bounded set under a continuous linear map is a bounded subset of the codomain. A subset of an arbitrary (Cartesian) product of TVSs is bounded if and only if its image under every coordinate projection is bounded. If $S \subseteq M \subseteq X$ and $M$ is a topological vector subspace of $X$, then $S$ is bounded in $M$ if and only if $S$ is bounded in $X$. In other words, a subset $S$ is bounded in $X$ if and only if it is bounded in every (or equivalently, in some) topological vector superspace of $S$.

Properties

A locally convex topological vector space has a bounded neighborhood of zero if and only if its topology can be defined by a seminorm. The polar of a bounded set is an absolutely convex and absorbing set. Using the definition of uniformly bounded sets given below, Mackey's countability condition can be restated as: If $B_1, B_2, B_3, \ldots$ are bounded subsets of a metrizable locally convex space then there exists a sequence $t_1, t_2, t_3, \ldots$ of positive real numbers such that $t_1 B_1, t_2 B_2, t_3 B_3, \ldots$ are uniformly bounded. In words, given any countable family of bounded sets in a metrizable locally convex space, it is possible to scale each set by its own positive real so that they become uniformly bounded.
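Before turning to generalisations, a concrete check of Definition (1) in a normed space may help; this is a standard computation added for illustration, not part of the original text.

```latex
% In a normed space X, the closed unit ball B = {x : ||x|| <= 1} is
% von Neumann bounded: given a basic neighbourhood of the origin
% V_eps = {x : ||x|| < eps}, take r = 2/eps; then for every scalar s
% with |s| >= r and every x in B,
\[
\Bigl\| \tfrac{1}{s} x \Bigr\| \;\le\; \frac{1}{|s|} \;\le\; \frac{\varepsilon}{2} \;<\; \varepsilon,
\qquad\text{so}\qquad B \subseteq s V_{\varepsilon}.
\]
% Conversely, a norm-unbounded set fails this already for
% V_1 = {x : ||x|| < 1}, since it is contained in no dilate s V_1.
```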
Generalizations

Uniformly bounded sets

A family $\mathcal{B}$ of subsets of a topological vector space $Y$ is said to be uniformly bounded in $Y$ if there exists some bounded subset $D$ of $Y$ such that $B \subseteq D$ for every $B \in \mathcal{B}$, which happens if and only if its union $\bigcup \mathcal{B}$ is a bounded subset of $Y$. In the case of a normed (or seminormed) space, a family $\mathcal{B}$ is uniformly bounded if and only if its union $\bigcup \mathcal{B}$ is norm bounded, meaning that there exists some real $M \geq 0$ such that $\|b\| \leq M$ for every $b \in \bigcup \mathcal{B}$, or equivalently, if and only if $\sup_{B \in \mathcal{B}} \sup_{b \in B} \|b\| < \infty$.

A set $H$ of maps from $X$ to $Y$ is said to be uniformly bounded on a given set $C \subseteq X$ if the family $H(C) := \{h(C) : h \in H\}$ is uniformly bounded in $Y$, which by definition means that there exists some bounded subset $D$ of $Y$ such that $h(C) \subseteq D$ for all $h \in H$, or equivalently, if and only if $\bigcup_{h \in H} h(C)$ is a bounded subset of $Y$. A set $H$ of linear maps between two normed (or seminormed) spaces $X$ and $Y$ is uniformly bounded on some (or equivalently, every) open ball (and/or non-degenerate closed ball) in $X$ if and only if their operator norms are uniformly bounded; that is, if and only if $\sup_{h \in H} \|h\| < \infty$.

Proposition: Let $H$ be a set of continuous linear operators between two topological vector spaces $X$ and $Y$ and let $C$ be a bounded subset of $X$. Then $H(C)$ is uniformly bounded in $Y$ if $H$ is equicontinuous, or if $C$ is a convex compact Hausdorff subspace of $X$ and the orbit $H(c) := \{h(c) : h \in H\}$ is a bounded subset of $Y$ for every $c \in C$.

Proof in the equicontinuous case: Assume $H$ is equicontinuous and let $W$ be a neighborhood of the origin in $Y$. Since $H$ is equicontinuous, there exists a neighborhood $U$ of the origin in $X$ such that $h(U) \subseteq W$ for every $h \in H$. Because $C$ is bounded in $X$, there exists some real $r > 0$ such that if $t \geq r$ then $C \subseteq tU$. So for every $h \in H$ and every $t \geq r$, $h(C) \subseteq h(tU) = t\,h(U) \subseteq tW$, which implies that $\bigcup_{h \in H} h(C)$ is bounded in $Y$. Q.E.D.

Proof in the compact convex case: Let $W$ be a balanced neighborhood of the origin in $Y$ and let $V$ be a closed balanced neighborhood of the origin in $Y$ such that $V + V \subseteq W$. Define $E := \bigcap_{h \in H} h^{-1}(V)$, which is a closed subset of $X$ (since $V$ is closed while every $h$ is continuous) that satisfies $h(E) \subseteq V$ for every $h \in H$. Note that for every non-zero scalar $n$ the set $nE$ is closed in $X$ (since scalar multiplication by $n$ is a homeomorphism) and so every $C \cap nE$ is closed in $C$.

It will now be shown that $C \subseteq \bigcup_{n=1}^{\infty} nE$, from which $C = \bigcup_{n=1}^{\infty} (C \cap nE)$ follows. If $c \in C$ then the orbit $H(c)$ being bounded guarantees the existence of some positive integer $n$ such that $H(c) \subseteq nV$, where the linearity of every $h \in H$ now implies $\tfrac{1}{n} c \in h^{-1}(V)$; thus $\tfrac{1}{n} c \in E$ and hence $c \in nE$, as desired. Thus $C = \bigcup_{n=1}^{\infty} (C \cap nE)$ expresses $C$ as a countable union of closed (in $C$) sets. Since $C$ is a nonmeager subset of itself (as it is a Baire space by the Baire category theorem), this is only possible if there is some integer $n$ such that $C \cap nE$ has non-empty interior in $C$. Let $k \in C \cap nE$ be any point belonging to this open subset of $C$. Let $U$ be any balanced open neighborhood of the origin in $X$ such that $C \cap (k + U) \subseteq C \cap nE$. The sets $\{k + pU : p = 1, 2, \ldots\}$ form an increasing (meaning $p \leq q$ implies $k + pU \subseteq k + qU$) cover of the compact space $C$, so there exists some $p$ such that $C \subseteq k + pU$ (and thus $\tfrac{1}{p}(C - k) \subseteq U$). It will be shown that $h(C) \subseteq pnW$ for every $h \in H$, thus demonstrating that $\{h(C) : h \in H\}$ is uniformly bounded in $Y$ and completing the proof. So fix $h \in H$ and $c \in C$. Let $z := \tfrac{p-1}{p} k + \tfrac{1}{p} c$. The convexity of $C$ guarantees $z \in C$, and moreover $z \in k + U$, since $z - k = \tfrac{1}{p}(c - k) \in \tfrac{1}{p}(C - k) \subseteq U$. Thus $z \in C \cap (k + U)$, which is a subset of $nE$. Since $k$ and $z$ belong to $nE$, we have $h(k), h(z) \in nV$; and since $nV$ is balanced, $p\,h(z) \in pnV$ and $-(p-1)\,h(k) \in (p-1)nV \subseteq pnV$. Finally, $c = pz - (p-1)k$ and the linearity of $h$ imply $h(c) = p\,h(z) - (p-1)\,h(k) \in pnV + pnV = pn(V + V) \subseteq pnW$, as desired. Q.E.D.

Since every singleton subset of $X$ is also a bounded subset, it follows that if $H$ is an equicontinuous set of continuous linear operators between two topological vector spaces $X$ and $Y$ (not necessarily Hausdorff or locally convex), then the orbit $H(x) := \{h(x) : h \in H\}$ of every $x \in X$ is a bounded subset of $Y$.

Bounded subsets of topological modules

The definition of bounded sets can be generalized to topological modules. A subset $A$ of a topological module $M$ over a topological ring $R$ is bounded if for any neighborhood $N$ of $0_M$ there exists a neighborhood $w$ of $0_R$ such that $wA \subseteq N$.

See also

References

Notes

Bibliography

Topological vector spaces
Bounded set (topological vector space)
Mathematics
2,159
350,238
https://en.wikipedia.org/wiki/Trichome
Trichomes are fine outgrowths or appendages on plants, algae, lichens, and certain protists. They are of diverse structure and function. Examples are hairs, glandular hairs, scales, and papillae. A covering of any kind of hair on a plant is an indumentum, and the surface bearing them is said to be pubescent.

Algal trichomes

Certain, usually filamentous, algae have the terminal cell produced into an elongate hair-like structure called a trichome. The same term is applied to such structures in some cyanobacteria, such as Spirulina and Oscillatoria. The trichomes of cyanobacteria may be unsheathed, as in Oscillatoria, or sheathed, as in Calothrix. These structures play an important role in preventing soil erosion, particularly in cold desert climates. The filamentous sheaths form a persistent sticky network that helps maintain soil structure.

Plant trichomes

Plant trichomes have many different features that vary between both species of plants and organs of an individual plant. These features affect the subcategories that trichomes are placed into. Some defining features include the following:

Unicellular or multicellular
Straight (upright with little to no branching), spiral (corkscrew-shaped) or hooked (curved apex)
Presence of cytoplasm
Glandular (secretory) vs. eglandular
Tortuous, simple (unbranched and unicellular), peltate (scale-like), stellate (star-shaped)
Adaxial vs. abaxial, referring to whether trichomes are present, respectively, on the upper surface (adaxial) or lower surface (abaxial) of a leaf or other lateral organ.

In a model organism, Cistus salviifolius, there are more adaxial trichomes present because this surface suffers more ultraviolet (UV) light stress from solar irradiance than the abaxial surface. Trichomes can protect the plant from a wide range of detriments, such as UV light, insects, transpiration, and freeze intolerance.

Aerial surface hairs

Trichomes on plants are epidermal outgrowths of various kinds. The terms emergences or prickles refer to outgrowths that involve more than the epidermis. This distinction is not always easily applied (see Wait-a-minute tree). Also, there are nontrichomatous epidermal cells that protrude from the surface, such as root hairs.

A common type of trichome is a hair. Plant hairs may be unicellular or multicellular, and branched or unbranched. Multicellular hairs may have one or several layers of cells. Branched hairs can be dendritic (tree-like) as in kangaroo paw (Anigozanthos), tufted, or stellate (star-shaped), as in Arabidopsis thaliana. Another common type of trichome is the scale or peltate hair, which has a plate or shield-shaped cluster of cells attached directly to the surface or borne on a stalk of some kind. Common examples are the leaf scales of bromeliads such as the pineapple, Rhododendron and sea buckthorn (Hippophae rhamnoides). Any of the various types of hairs may be glandular, producing some kind of secretion, such as the essential oils produced by mints and many other members of the family Lamiaceae.

Many terms are used to describe the surface appearance of plant organs, such as stems and leaves, referring to the presence, form and appearance of trichomes.
Examples include:

glabrous, glabrate – lacking hairs or trichomes; surface smooth
hirsute – coarsely hairy
hispid – having bristly hairs
articulate – simple pluricellular-uniseriate hairs
downy – having an almost wool-like covering of long hairs
pilose – pubescent with long, straight, soft, spreading or erect hairs
puberulent – minutely pubescent; having fine, short, usually erect, hairs
puberulous – slightly covered with minute soft and erect hairs
pubescent – bearing hairs or trichomes of any type
strigillose – minutely strigose
strigose – having straight hairs all pointing in more or less the same direction, as along a margin or midrib
tomentellous – minutely tomentose
tomentose – covered with dense, matted, woolly hairs
villosulous – minutely villous
villous – having long, soft hairs, often curved, but not matted

The size, form, density and location of hairs on plants are extremely variable in their presence across species and even within a species on different plant organs. Several basic functions or advantages of having surface hairs can be listed. It is likely that in many cases, hairs interfere with the feeding of at least some small herbivores and, depending upon stiffness and irritability to the palate, large herbivores as well. Hairs on plants growing in areas subject to frost keep the frost away from the living surface cells. In windy locations, hairs break up the flow of air across the plant surface, reducing transpiration. Dense coatings of hairs reflect sunlight, protecting the more delicate tissues underneath in hot, dry, open habitats. In addition, in locations where much of the available moisture comes from fog drip, hairs appear to enhance this process by increasing the surface area on which water droplets can accumulate.

Glandular trichomes

Glandular trichomes have been vastly studied, even though they are only found on about 30% of plants. Their function is to secrete metabolites for the plant. Some of these metabolites include:

terpenoids, which have many functions related to defense, growth, and development
phenylpropanoids, which have a role in many plant pathways, such as secondary metabolites and stress response, and which act as mediators of plant interactions in the environment
flavonoids
methyl ketones
acylsugars

Non-glandular trichomes

Non-glandular trichomes serve as structural protection against a variety of abiotic stressors, including water losses, extreme temperatures and UV radiation, and biotic threats, such as pathogen or herbivore attack. For example, the model plant C. salviifolius is found in areas of high-light stress and poor soil conditions, along the Mediterranean coasts. It contains non-glandular, stellate and dendritic trichomes that have the ability to synthesize and store polyphenols that affect both absorbance of radiation and plant desiccation. These trichomes also contain acetylated flavonoids, which can absorb UV-B, and non-acetylated flavonoids, which absorb the longer wavelengths of UV-A. In non-glandular trichomes, the only known role of flavonoids is to block out the shortest wavelengths to protect the plant; this differs from their role in glandular trichomes. In the genera Salix and Gossypium, modified trichomes create the cottony fibers that allow anemochory, or wind-aided dispersal. These seed trichomes are among the longest plant cells.

Polyphenols

Non-glandular trichomes in the genus Cistus were found to contain ellagitannins, glycosides, and kaempferol derivatives.
The main role of the ellagitannins is to help the plant adapt in times of nutrient-limiting stress.

Trichome and root hair development

Both trichomes and root hairs, the rhizoids of many vascular plants, are lateral outgrowths of a single cell of the epidermal layer. Root hairs form from trichoblasts, the hair-forming cells on the epidermis of a plant root. Root hairs vary between 5 and 17 micrometers in diameter, and 80 to 1,500 micrometers in length (Dittmar, cited in Esau, 1965). Root hairs can survive for two to three weeks and then die off. At the same time new root hairs are continually being formed at the top of the root. This way, the root hair coverage stays the same. Repotting must therefore be done with care, because the root hairs are largely pulled off in the process. This is why planting out may cause plants to wilt.

The genetic control of patterning of trichomes and root hairs shares similar control mechanisms. Both processes involve a core of related transcription factors that control the initiation and development of the epidermal outgrowth. Specific protein transcription factors — named GLABRA1 (GL1), GLABRA3 (GL3) and TRANSPARENT TESTA GLABRA1 (TTG1) — are the major regulators of the cell fate decision to produce trichomes or root hairs. When these genes are activated in a leaf epidermal cell, the formation of a trichome is initiated within that cell. GL1, GL3 and TTG1 also activate negative regulators, which serve to inhibit trichome formation in neighboring cells. This system controls the spacing of trichomes on the leaf surface. Once trichomes are developed they may divide or branch. In contrast, root hairs only rarely branch. During the formation of trichomes and root hairs, many enzymes are regulated. For example, just prior to root hair development, there is a point of elevated phosphorylase activity.

Much of what scientists know about trichome development comes from the model organism Arabidopsis thaliana, because its trichomes are simple, unicellular, and non-glandular. The development pathway is regulated by three groups of transcription factors: R2R3 MYB, basic helix-loop-helix, and WD40 repeat. The three groups of TFs form a trimer complex (MBW) and activate the expression of products downstream, which activates trichome formation. MYB factors on their own, however, can act as inhibitors by forming a repressive complex.

Phytohormones

Plant phytohormones have an effect on the growth and response of plants to environmental stimuli. Some of these phytohormones are involved in trichome formation, including gibberellic acid (GA), cytokinins (CK), and jasmonic acids (JA). GA stimulates growth of trichomes by stimulating GLABROUS1 (GL1); however, both SPINDLY and DELLA proteins repress the effects of GA, so lower levels of these proteins lead to more trichomes. Some other phytohormones that promote growth of trichomes include brassinosteroids, ethylene, and salicylic acid. This was established by conducting experiments with mutants that have little to no amounts of each of these substances. In every case, there was less trichome formation on both plant surfaces, as well as incorrect formation of the trichomes present.

Significance for taxonomy

The type, presence and absence and location of trichomes are important diagnostic characters in plant identification and plant taxonomy. In forensic examination, plants such as Cannabis sativa can be identified by microscopic examination of the trichomes.
Although trichomes are rarely found preserved in fossils, trichome bases are regularly found and, in some cases, their cellular structure is important for identification.
Arabidopsis thaliana trichome classification
Arabidopsis thaliana trichomes are classified as aerial, epidermal, unicellular, tubular structures.
Significance for plant molecular biology
In the model plant Arabidopsis thaliana, trichome formation is initiated by the GLABROUS1 protein. Knockouts of the corresponding gene lead to glabrous plants. This phenotype has already been used in genome editing experiments and might be of interest as a visual marker for plant research to improve gene editing methods such as CRISPR/Cas9. Trichomes also serve as models for cell differentiation as well as pattern formation in plants.
Uses
Bean leaves have been used historically to trap bedbugs in houses in Eastern Europe. The trichomes on the bean leaves capture the insects by impaling their feet (tarsi). The leaves would then be destroyed.
Trichomes are an essential part of nest building for the European wool carder bee (Anthidium manicatum). This bee species incorporates trichomes into its nests by scraping them off of plants and using them as a lining for its nest cavities.
Defense
Plants may use trichomes to deter herbivore attacks via physical and/or chemical means, e.g. the specialized, stinging hairs of Urtica (nettle) species that deliver inflammatory chemicals such as histamine. Studies of trichomes have focused on crop protection through the deterrence of herbivores (Brookes et al. 2016). However, some organisms have developed mechanisms to resist the effects of trichomes. The larvae of Heliconius charithonia, for example, are able to physically free themselves from trichomes, to bite off trichomes, and to form silk blankets in order to navigate the leaves better.
Stinging trichomes
Stinging trichomes vary in their morphology and distribution between species; however, their similar effects on large herbivores imply that they serve similar functions. In areas susceptible to herbivory, higher densities of stinging trichomes have been observed. In Urtica, the stinging trichomes induce a painful sensation lasting for hours upon human contact. This sensation serves as a defense mechanism against large animals and small invertebrates, and plays a role in supplementing defense via the secretion of metabolites. Studies suggest that this sensation involves a rapid release of toxin (such as histamine) upon contact and penetration via the globular tips of the trichomes.
See also
Thorns, spines, and prickles
Colleter (botany)
Seta
Urticating hair
References
Bibliography
Esau, K. 1965. Plant Anatomy, 2nd Edition. John Wiley & Sons. 767 pp.
Plant morphology
Trichome
Biology
2,986
55,803,461
https://en.wikipedia.org/wiki/Agrocybe%20sororia
Agrocybe sororia is a species of Basidiomycota mushroom in the genus Agrocybe. The cap is convex to plane, tawny fading to pale yellow-buff, and is sometimes cracked or wrinkled. It is 5-10 cm in diameter and non-hygrophanous. The gills have an adnate attachment to the stipe. They are 2-5 mm thick and white when young, turning yellowish brown to dull brown with age. The spores are cinnamon-brown and subovoid to ellipsoid, with 1 μm truncated germ pores. The basidia have 2-4 sterigmata and inconspicuous hilar appendages. The stipe is cylindrical, concolorous with the cap, and lacks a ring or partial veil. The base of the stipe is club-shaped, fibrillose and 3.4-5(1.2) x 0.4-0.9 cm in size. It has white mycelium and rhizomorphs. The odour and taste are mealy (not bitter). Agrocybe sororia is distributed in eastern North America. It is found in wood mulch, composts and grassy areas and is saprotrophic.
Similar species
A. firma is similar, but it has a dark-brown pileus and lacks the mealy odour. A. putaminum has a mealy odour, a bitter taste and pileocystidia.
References
Strophariaceae
Fungus species
Agrocybe sororia
Biology
324
30,876,641
https://en.wikipedia.org/wiki/Human%20skull%20symbolism
Skull symbolism is the attachment of symbolic meaning to the human skull. The most common symbolic use of the skull is as a representation of death. Humans can often recognize the buried fragments of an only partially revealed cranium even when other bones may look like shards of stone. The human brain has a specific region for recognizing faces, and is so attuned to finding them that it can see faces in a few dots and lines or punctuation marks; the human brain cannot separate the image of the human skull from the familiar human face. Because of this, the skull symbolizes both death and the now-past life it once carried. Hindu temples and depictions of some Hindu deities display associations with skulls. Moreover, a human skull with its large eye sockets displays a degree of neoteny, which humans often find visually appealing—yet a skull is also obviously dead, and to some can even seem to look sad due to the downward facing slope on the ends of the eye sockets. A skull with the lower jaw intact may also appear to be grinning or laughing due to the exposed teeth. As such, human skulls often have a greater visual appeal than the other bones of the human skeleton, and can fascinate even as they repel. Societies predominantly associate skulls with death and evil. Unicode reserves character U+1F480 (💀) for a human skull pictogram.
Examples
Throughout the centuries skulls have served either as warnings of various threats or as reminders of the vanity of earthly pleasures in contrast with our own mortality. Nevertheless, the skull seemed omnipresent in the first decade of the twenty-first century, appearing on jewelry, bags, clothing and in the shape of various decorative items. However, the increasing use of the skull as a visual symbol in popular culture dilutes its original meaning as well as its traditional connotations.
Literature
One of the best-known examples of skull symbolism occurs in Shakespeare's Hamlet, where the title character recognizes the skull of an old friend: "Alas, poor Yorick! I knew him, Horatio; a fellow of infinite jest..." Hamlet is inspired to utter a bitter soliloquy of despair and rough ironic humor. Compare Hamlet's words "Here hung those lips that I have kissed I know not how oft" to Talmudic sources: "...Rabi Ishmael [the High Priest]... put [the severed head of a martyr] in his lap... and cried: oh sacred mouth!...who buried you in ashes...!". The skull was a symbol of melancholy for Shakespeare's contemporaries.
An old Yoruba folktale tells of a man who encountered a skull mounted on a post by the wayside. To his astonishment, the skull spoke. The man asked the skull why it was mounted there. The skull said that it was mounted there for talking. The man then went to the king, and told the king of the marvel he had found, a talking skull. The king and the man returned to the place where the skull was mounted; the skull remained silent. The king then commanded that the man be beheaded, and ordered that his head be mounted in place of the skull.
The skull speaks in the catacombs of the Capuchin brothers beneath the church of Santa Maria della Concezione in Rome, where disassembled bones and teeth and skulls of the departed Capuchins have been rearranged to form a rich Baroque architecture of the human condition, in a series of anterooms and subterranean chapels with the inscription, set in bones: Noi eravamo quello che voi siete, e quello che noi siamo voi sarete. "We were what you are; and what we are, you will be."
Art
The serpent crawling through the eye sockets of a skull is a familiar image that survives in contemporary Goth subculture. The serpent is a chthonic god of knowledge and of immortality, because he sloughs off his skin. The serpent guards the Tree in the Greek Garden of the Hesperides and, later, a Tree in the Garden of Eden. The serpent in the skull is always making its way through the socket that was the eye: knowledge persists beyond death, the emblem says, and the serpent has the secret.
The late medieval and Early Renaissance Northern and Italian painters place the skull where it lies at the foot of the Cross at Golgotha (Aramaic for "the place of the skull"). But for them it has become quite specifically the skull of Adam.
In Elizabethan England, the Death's-Head Skull, usually a depiction without the lower jawbone, was emblematic of bawds, rakes, sexual adventurers and prostitutes; the term Death's-Head was actually parlance for these rakes, and most of them wore half-skull rings to advertise their station, either professionally or otherwise. The original rings were wide silver objects, with a half-skull decoration not much wider than the rest of the band; this allowed the ring to be rotated around the finger to hide the skull in polite company, and to reposition it in the presence of likely conquests.
Venetian painters of the 16th century elaborated moral allegories for their patrons, and memento mori was a common theme. The theme carried by an inscription on a rustic tomb, "Et in Arcadia ego"—"I too [am] in Arcadia", if it is Death that is speaking—is made famous by two paintings by Nicolas Poussin, but the motto made its pictorial debut in Guercino's version, 1618–22 (in the Galleria Barberini, Rome): in it, two awestruck young shepherds come upon an inscribed plinth, in which the inscription ET IN ARCADIA EGO gains force from the prominent presence of a wormy skull in the foreground.
In C. Allan Gilbert's much-reproduced lithograph of a lovely Gibson Girl seated at her fashionable vanity table, the scene, viewed from a distance, resolves into the image of a skull. A ghostly echo of the worldly Magdalene's repentance motif lurks behind this turn-of-the-20th-century icon. The skull becomes an icon itself when its painted representation becomes a substitute for the real thing.
Simon Schama chronicled the ambivalence of the Dutch to their own worldly success during the Dutch Golden Age of the first half of the 17th century in The Embarrassment of Riches. The possibly frivolous and merely decorative nature of the still life genre was avoided by Pieter Claesz in his Vanitas: skull, opened case-watch, overturned emptied wineglasses, snuffed candle, book: "Lo, the wine of life runs out, the spirit is snuffed, oh Man, for all your learning, time yet runs on: Vanity!" The visual cues of the hurry and violence of life are contrasted with eternity in this somber, still and utterly silent painting. The skull speaks. It says "Et in Arcadia ego" or simply "Vanitas."
In a first-century mosaic tabletop from a Pompeiian triclinium (now in Naples), the skull is crowned with a carpenter's square and plumb-bob, which dangles before its empty eye sockets (Death as the great leveler), while below is an image of the ephemeral and changeable nature of life: a butterfly atop a wheel—a table for a philosopher's symposium. Similarly, a skull might be seen crowned by a chaplet of dried roses, a carpe diem, though rarely as bedecked as Mexican printmaker José Guadalupe Posada's Catrina.
In Mesoamerican architecture, stacks of skulls (real or sculpted) represented the result of human sacrifices.
Pirates
The pirate death's-head epitomizes the pirates' ruthlessness and despair; their usage of death imagery might be paralleled with their occupation challenging the natural order of things. "Pirates also affirmed their unity symbolically", Marcus Rediker asserts, remarking on the skeleton or skull symbol with bleeding heart and hourglass on the black pirate ensign: "its triad of interlocking symbols—death, violence, limited time—simultaneously pointed to meaningful parts of the seaman's experience, and eloquently bespoke the pirates' own consciousness of themselves as preyed upon in turn. Pirates seized the symbol of mortality from ship captains who used the skull 'as a marginal sign in their logs to indicate the record of a death'"
Religion
Skull art is found in depictions of some Hindu gods. Shiva has been depicted as carrying a skull. The goddess Chamunda is described as wearing a garland of severed heads or skulls (Mundamala). Kedareshwara Temple, Hoysaleswara Temple, Chennakeshava Temple and Lakshminarayana Temple are some of the Hindu temples that include sculptures of skulls and of the goddess Chamunda. The temple of Kali is veneered with skulls, but the goddess Kali offers life through the welter of blood.
In Vajrayana Buddhist iconography, skull symbolism is often used in depictions of wrathful deities and of dakinis.
In some Korean life replacement narratives, a person discovers an abandoned skull and worships it. The skull later gives advice on how to cheat the gods of death and prevent an early death.
Political symbol
A skull was worn as a trophy on the belt of the Lombard king Alboin; it was a constant grim token of triumph over his old enemy, and he drank from it. In the same way a skull is a warning when it decorates the palisade of a city, or deteriorates on a pike at a Traitor's Gate. The Skull Tower, with the embedded skulls of Serbian rebels, was built in 1809 on the highway near Niš, Serbia, as a stark political warning from the Ottoman government. In this case the skulls are the statement: that the current owner had the power to kill the former. "Drinking out of a skull the blood of slain (sacrificial) enemies is mentioned by Ammianus and Livy, and Solinus describes the Irish custom of bathing the face in the blood of the slain and drinking it." Skulls similarly adorn the rafters of a traditional Jívaro medicine house in Peru, or in New Guinea. When the skull appears in Nazi SS insignia, the death's-head (Totenkopf) represents loyalty unto death.
Holidays
Skulls and skeletons are the main symbol of the Day of the Dead, a Mexican holiday. Skull-shaped decorations called calaveras are a common sight during the festivities.
Other uses
When tattooed on the forearm, the skull's apotropaic power is thought to help an outlaw biker cheat death. The skull and crossbones signify "Poison" when they appear on a glass bottle containing a white powder, or on any container in general. The skull that is often engraved or carved on the head of early New England tombstones might be merely a symbol of mortality, but the skull is also often backed by an angelic pair of wings, lofting mortality beyond its own death.
In pop culture
Skulls and memento mori, such as the diamond-studded skull For the Love of God by Damien Hirst, have become a popular trend in pop culture.
In fashion
Skulls have also been found on clothing items for men, women and children.
Some sources credit Alexander McQueen with introducing skulls as a fashion trend with stylized skulls, starting with skull-decorated bags and scarves. The trend was still extant by the early 2010s.
See also
Danse macabre
Death (personification)
Death's head (disambiguation)
Jolly Roger
Life replacement narratives, Korean myths in which a skull saves a person who worships it
Memento mori
Punisher's skull emblem
Skull and Bones
Skull and crossbones (disambiguation)
Skull art
Skull cup, the use of a defeated enemy's skull as a drinking cup
Symbols of death
The Ambassadors (Holbein)
Totenkopf, the German word for death's head
Tzompantli, a type of wooden rack or palisade documented in several Mesoamerican civilizations, which was used for the public display of human skulls
Vanitas
References
Footnotes
General
Ariès, Philippe. L'Homme devant la mort (Seuil, 1985), 2 vols.
Veyne, Paul (1987). A History of Private Life: 1. From Pagan Rome to Byzantium (p. 208 illustrates the skull mosaic from Pompeii)
External links
Skull representations and symbolism in art and optical illusions
An Image Gallery of Crystal Skulls
The Skull in Tattoo Art
Examining the symbolism of the anamorphic skull in Hans Holbein's double portrait The Ambassadors
Skulls Everywhere - slideshow by Life magazine
"Skulls and the Motorcycle"
Symbolism
Cultural aspects of death
Skulls in art
Visual motifs
Human skull symbolism
Mathematics
2,653
66,577
https://en.wikipedia.org/wiki/Tropics
The tropics are the regions of Earth surrounding the Equator, where the Sun may shine directly overhead. This contrasts with the temperate or polar regions of Earth, where the Sun can never be directly overhead. This is because of Earth's axial tilt; the width of the tropics (in latitude) is twice the tilt. The tropics are also referred to as the tropical zone and the torrid zone (see geographical zone). Due to the overhead Sun, the tropics receive the most solar energy over the course of the year, and consequently have the highest temperatures on the planet. Even when not directly overhead, the Sun is still close to overhead throughout the year, so the tropics also have the lowest seasonal variation on the planet; "winter" and "summer" lose their contrast. Instead, seasons are more commonly divided by precipitation variations than by temperature variations. The tropics maintain a wide diversity of local climates, such as rain forests, monsoons, savannahs, deserts, and high-altitude snow-capped mountains. The word "tropical" can specifically refer to certain kinds of weather, rather than to the geographic region; these usages ought not to be confused.
The Earth's axial tilt is currently around 23.4°, and therefore so are the latitudes of the tropical circles marking the boundary of the tropics: specifically, about ±23.4°. The northern one is called the Tropic of Cancer, and the southern is the Tropic of Capricorn. As the Earth's axial tilt changes, so too do the tropical and polar circles. The tropics constitute 39.8% of Earth's surface area (a short calculation below shows how this figure follows from the tilt) and contain 36% of Earth's landmass. The region was also home to 40% of the world's population, a figure projected to reach 50% by 2050. Because of global warming, the weather conditions of the tropics are expanding into the subtropics, bringing more extreme weather events such as heatwaves and more intense storms. These changes in weather conditions may make certain parts of the tropics uninhabitable.
Etymology
The word "tropic" comes via Latin from Ancient Greek τροπή (tropē), meaning "turn" or "change of direction".
Astronomical definition
The tropics are defined as the region between the Tropic of Cancer in the Northern Hemisphere, at about 23.4° N, and the Tropic of Capricorn in the Southern Hemisphere, at about 23.4° S; these latitudes correspond to the axial tilt of the Earth. The Tropic of Cancer is the northernmost latitude from which the Sun can ever be seen directly overhead, and the Tropic of Capricorn is the southernmost. This means that the tropical zone includes everywhere on Earth which is a subsolar point at least once during the solar year. Thus the maximum latitudes of the tropics have equal distances from the Equator on either side. Likewise, they approximate the angle of the Earth's axial tilt. This angle is not perfectly fixed, mainly due to the influence of the Moon, but the limits of the tropics are a geographic convention, and their variance from the true latitudes is very small.
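As a worked check (a calculation added here for illustration, using standard spherical geometry rather than anything stated in the source), the 39.8% surface-area figure quoted above follows directly from the tilt: the area of the zone of a sphere of radius R between latitudes ±φ is 2πR²[sin φ − sin(−φ)], so the fraction of the surface it covers is independent of R:
\frac{A_{\text{tropics}}}{A_{\text{Earth}}} = \frac{2\pi R^{2}\,[\sin\varphi - \sin(-\varphi)]}{4\pi R^{2}} = \sin\varphi, \qquad \sin 23.44^{\circ} \approx 0.398
that is, about 39.8% of Earth's surface area.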
Seasons and climate
Many tropical areas have both a dry and a wet season. The wet season, rainy season or green season is the time of year, lasting one or more months, when most of the average annual rainfall in a region falls. Areas with wet seasons are spread across portions of the tropics and subtropics, some even in temperate regions. Under the Köppen climate classification, for tropical climates, a wet-season month is defined as one or more months where average precipitation is 60 mm (2.4 in) or more. Some areas with pronounced rainy seasons see a break in rainfall during mid-season, when the Intertropical Convergence Zone or monsoon trough moves poleward of their location during the middle of the warm season; typical vegetation in these areas ranges from moist seasonal tropical forests to savannahs.
When the wet season occurs during the warm season, or summer, precipitation falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves and vegetation grows significantly, leading to crop yields late in the season. Floods and rains cause rivers to overflow their banks, and some animals to retreat to higher ground. Soil nutrients are washed away and erosion increases. The incidence of malaria increases in areas where the rainy season coincides with high temperatures. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature.
However, regions within the tropics may well not have a tropical climate. Under the Köppen climate classification, much of the area within the geographical tropics is classed not as "tropical" but as "dry" (arid or semi-arid), including the Sahara Desert, the Atacama Desert and the Australian Outback. Also, there are alpine tundra and snow-capped peaks, including Mauna Kea, Mount Kilimanjaro, Puncak Jaya and the Andes as far south as the northernmost parts of Chile and Peru.
Climate change
The climate is changing in the tropics, as it is in the rest of the world. The effects of steadily rising concentrations of greenhouse gases on the climate may be less obvious to tropical residents, however, because they are overlain by considerable natural variability. Much of this variability is driven by the El Niño-Southern Oscillation (ENSO). The tropics have warmed by 0.7–0.8 °C over the last century—only slightly less than the global average—but a strong El Niño made 1998 the warmest year in most areas, with no significant warming since. Climate models predict a further 1–2 °C warming by 2050 and 1–4 °C by 2100.
Ecosystems
Tropical plants and animals are those species native to the tropics. Tropical ecosystems may consist of tropical rainforests, seasonal tropical forests, dry (often deciduous) forests, spiny forests, deserts, savannahs, grasslands and other habitat types. There are often wide areas of biodiversity, and species endemism present, particularly in rainforests and seasonal forests. Some examples of important biodiversity and high-endemism ecosystems are El Yunque National Forest in Puerto Rico, Costa Rican and Nicaraguan rainforests, Amazon Rainforest territories of several South American countries, Madagascar dry deciduous forests, the Waterberg Biosphere of South Africa, and eastern Madagascar rainforests. Often the soils of tropical forests are low in nutrient content, making them quite vulnerable to slash-and-burn deforestation techniques, which are sometimes an element of shifting cultivation agricultural systems.
In biogeography, the tropics are divided into Paleotropics (Africa, Asia and Australia) and Neotropics (Caribbean, Central America, and South America). Together, they are sometimes referred to as the Pantropic.
The system of biogeographic realms differs somewhat; the Neotropical realm includes both the Neotropics and temperate South America, and the Paleotropics correspond to the Afrotropical, Indomalayan, Oceanian, and tropical Australasian realms.
Flora
Flora are plants found in a specific region at a specific time. Some well-known plants that are exclusively found in, originate from, or are often associated with the tropics include:
Bamboo
Banana trees
Citrus fruits such as oranges, lemons, mandarins, etc.
Coconut trees
Coffee
Dragon fruit
Ferns
Jackfruit
Orchids
Palm trees
Papaya trees
Rubber tree
Stone fruits such as mangos, avocado, sapote etc.
Bird of paradise flower
Cacao
Giant water lily
Tropicality
Tropicality refers to the image of the tropics that people from outside the tropics have of the region, ranging from critical to verging on fetishism. Tropicality gained renewed interest in geographical discourse when French geographer Pierre Gourou published Les pays tropicaux (The Tropical World in English) in the late 1940s. Tropicality encompasses two major images. One is that the tropics represent a 'Garden of Eden', a heaven on Earth, a land of rich biodiversity or a tropical paradise. The alternative is that the tropics consist of wild, unconquerable nature. The latter view was often discussed in old Western literature, more so than the former. Evidence suggests that, over time, the view of the tropics as such in popular literature has been supplanted by more well-rounded and sophisticated interpretations.
Western scholars tried to theorise why tropical areas were relatively more inhospitable to human civilisations than colder regions of the Northern Hemisphere. A popular explanation focused on the differences in climate. Tropical jungles and rainforests have much more humid and hotter weather than the colder and drier temperaments of the Northern Hemisphere, giving rise to a more diverse biosphere. This theme led some scholars to suggest that humid hot climates correlate to human populations lacking control over nature, e.g. 'the wild Amazonian rainforests'.
See also
Hardiness zone
Tropical ecology
Tropical marine climate
Tropical year
Polar circle
Notes
References
External links
Seasons
Tropics
Physics
1,884
4,197,478
https://en.wikipedia.org/wiki/Greenville%20Memorial%20Auditorium
Greenville Memorial Auditorium was a 7,500-seat multi-purpose arena built in 1958 that was located in Greenville, South Carolina. It hosted local sporting events, concerts and the Ringling Brothers Circus until the Bon Secours Wellness Arena opened in 1998. It hosted professional wrestling throughout its history, especially in the 1970s and 1980s, with NWA Jim Crockett Promotions cards held every Monday night. It hosted the Southern Conference men's basketball tournaments in 1972, 1975, and 1976. Lynyrd Skynyrd performed there on October 19, 1977, the last concert played by the original band prior to its fatal plane crash that took three of its members the next day en route to Baton Rouge, Louisiana. The arena was imploded on September 20, 1997.
References
Defunct college basketball venues in the United States
Indoor arenas in South Carolina
Furman Paladins basketball
Monuments and memorials in South Carolina
Sports venues completed in 1958
Sports venues demolished in 1997
Sports venues in Greenville, South Carolina
Demolished sports venues in the United States
1958 establishments in South Carolina
1997 disestablishments in South Carolina
Buildings and structures demolished by controlled implosion
Greenville Memorial Auditorium
Engineering
230
177,206
https://en.wikipedia.org/wiki/VoiceXML
VoiceXML (VXML) is a digital document standard for specifying interactive media and voice dialogs between humans and computers. It is used for developing audio and voice response applications, such as banking systems and automated customer service portals. VoiceXML applications are developed and deployed in a manner analogous to how a web browser interprets and visually renders the Hypertext Markup Language (HTML) it receives from a web server. VoiceXML documents are interpreted by a voice browser and in common deployment architectures, users interact with voice browsers via the public switched telephone network (PSTN).
The VoiceXML document format is based on Extensible Markup Language (XML). It is a standard developed by the World Wide Web Consortium (W3C).
Usage
VoiceXML applications are commonly used in many industries and segments of commerce. These applications include order inquiry, package tracking, driving directions, emergency notification, wake-up, flight tracking, voice access to email, customer relationship management, prescription refilling, audio news magazines, voice dialing, real-estate information and national directory assistance applications.
VoiceXML has tags that instruct the voice browser to provide speech synthesis, automatic speech recognition, dialog management, and audio playback. The following is an example of a VoiceXML document:
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
 <form>
  <block>
   <prompt>
    Hello world!
   </prompt>
  </block>
 </form>
</vxml>
When interpreted by a VoiceXML interpreter this will output "Hello world" with synthesized speech.
Typically, HTTP is used as the transport protocol for fetching VoiceXML pages. Some applications may use static VoiceXML pages, while others rely on dynamic VoiceXML page generation using an application server like Tomcat, Weblogic, IIS, or WebSphere.
Historically, VoiceXML platform vendors have implemented the standard in different ways, and added proprietary features. But the VoiceXML 2.0 standard, adopted as a W3C Recommendation on 16 March 2004, clarified most areas of difference. The VoiceXML Forum, an industry group promoting the use of the standard, provides a conformance testing process that certifies vendors' implementations as conformant.
History
AT&T Corporation, IBM, Lucent, and Motorola formed the VoiceXML Forum in March 1999, in order to develop a standard markup language for specifying voice dialogs. By September 1999 the Forum released VoiceXML 0.9 for member comment, and in March 2000 they published VoiceXML 1.0. Soon afterwards, the Forum turned over the control of the standard to the W3C. The W3C produced several intermediate versions of VoiceXML 2.0, which reached the final "Recommendation" stage in March 2004.
VoiceXML 2.1 added a relatively small set of additional features to VoiceXML 2.0, based on feedback from implementations of the 2.0 standard. It is backward compatible with VoiceXML 2.0 and reached W3C Recommendation status in June 2007.
Future versions of the standard
VoiceXML 3.0 was slated to be the next major release of VoiceXML, with new major features. However, with the disbanding of the VoiceXML Forum in May 2022, the development of the new standard was scrapped.
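Most real services go beyond the single prompt of the hello-world example above: they collect input in a field, constrain recognition with a grammar, and submit the result to a server, mirroring HTML form submission. The following fragment is a hypothetical sketch assembled from standard VoiceXML 2.0 elements (field, prompt, grammar, block, submit); the grammar file drink.grxml and the target URL are illustrative placeholders, not part of any real deployment:
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
 <form id="order_drink">
  <field name="drink">
   <prompt>
    Would you like coffee, tea, or milk?
   </prompt>
   <grammar src="drink.grxml" type="application/srgs+xml"/>
  </field>
  <block>
   <submit next="http://www.example.com/drink.cgi" namelist="drink"/>
  </block>
 </form>
</vxml>
Here the voice browser plays the prompt, matches the caller's speech against the SRGS grammar (see the related standards below), binds the result to the drink variable, and posts it over HTTP.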
Implementations
As of December 2022, there are few VoiceXML 2.0/2.1 platform implementations being offered.
Hewlett-Packard (OCMP)
OnMobile (Ozone Speech Platform)
Alvaria
Avaya (Avaya Experience Portal)
OpenVXI
Cisco
Genesys
Nuance Communications
Phonologies
Plum Voice
Telesoft Technologies
Related standards
The W3C's Speech Interface Framework also defines these other standards closely associated with VoiceXML.
SRGS and SISR
The Speech Recognition Grammar Specification (SRGS) is used to tell the speech recognizer what sentence patterns it should expect to hear: these patterns are called grammars. Once the speech recognizer determines the most likely sentence it heard, it needs to extract the semantic meaning from that sentence and return it to the VoiceXML interpreter. This semantic interpretation is specified via the Semantic Interpretation for Speech Recognition (SISR) standard. SISR is used inside SRGS to specify the semantic results associated with the grammars, i.e., the set of ECMAScript assignments that create the semantic structure returned by the speech recognizer.
SSML
The Speech Synthesis Markup Language (SSML) is used to decorate textual prompts with information on how best to render them in synthetic speech, for example which speech synthesizer voice to use or when to speak louder or softer.
PLS
The Pronunciation Lexicon Specification (PLS) is used to define how words are pronounced. The generated pronunciation information is meant to be used by both speech recognizers and speech synthesizers in voice browsing applications.
CCXML
The Call Control eXtensible Markup Language (CCXML) is a complementary W3C standard. A CCXML interpreter is used on some VoiceXML platforms to handle the initial call setup between the caller and the voice browser, and to provide telephony services like call transfer and disconnect to the voice browser. CCXML can also be used in non-VoiceXML contexts.
MSML, MSCML, MediaCTRL
In media server applications, it is often necessary for several call legs to interact with each other, for example in a multi-party conference. Some deficiencies were identified in VoiceXML for this application, and so companies designed specific scripting languages to deal with this environment. The Media Server Markup Language (MSML) was Convedia's solution, and the Media Server Control Markup Language (MSCML) was Snowshore's solution. Snowshore is now owned by Dialogic and Convedia is now owned by Radisys. These languages also contain 'hooks' so that external scripts (like VoiceXML) can run on call legs where IVR functionality is required.
There was an IETF working group called mediactrl ("media control") that was working on a successor for these scripting systems, one that was hoped to progress to an open and widely adopted standard. The mediactrl working group concluded in 2013.
See also
ECMAScript – the scripting language used in VoiceXML
OpenVXI – an open source VoiceXML interpreter library
SCXML – State Chart XML
References
External links
W3C's Voice Browser Working Group, Official VoiceXML Standards
VoiceXML Forum, VoiceXML Trademark Holder
VoiceXML tutorials
World Wide Web Consortium standards
XML-based standards
Markup languages
Speech synthesis
XML-based programming languages
VoIP protocols
2000 software
VoiceXML
Technology
1,414
33,938,182
https://en.wikipedia.org/wiki/Phylloporus%20scabripes
Phylloporus scabripes is a species of bolete fungus in the family Boletaceae. Found in Belize on sandy soil under Quercus spp. and Pinus caribaea, it was described as new to science in 2007.
References
External links
scabripes
Fungi described in 2007
Fungi of Central America
Fungus species
Phylloporus scabripes
Biology
70
58,474,050
https://en.wikipedia.org/wiki/List%20of%20fireworks%20accidents%20and%20incidents%20in%20Sivakasi
Sivakasi is a town in Virudhunagar District in the Indian state of Tamil Nadu. The town is known for its firecracker, matchbox and printing industries. The industries in Sivakasi employ over 250,000 people, with an estimated turnover of . The major issues in the fireworks industry in Sivakasi are child labour and frequent accidents.
Background
The economy of Sivakasi is dependent on three major industries: firecrackers, matchbox manufacturing, and printing. The town has 520 registered printing industries, 53 match factories, 32 chemical factories, seven soda factories, four flour mills and two rice and oil mills. The town is the nodal center for firecracker manufacturing at the national level. In 2011, the industry employed over 25,000 people and some of the private enterprises had an annual turnover of . In 2011, the combined estimated turnover of the firecracker, matchbox making and printing industries in the town was around . Approximately 70% of the firecrackers and matches produced in India are from Sivakasi. The hot and dry climate of the town is conducive to the firecracker and matchbox making industries. The raw materials for these industries were earlier procured from Sattur, but this was discontinued due to the high power and production costs; the raw materials are now sourced from Kerala and the Andaman Islands. The paper for the printing industry is procured from various states. The town is a major producer of diaries, contributing 30% of the total diaries produced in India. The printing industry in the town was initially used for printing labels for the firecrackers and later evolved, with modern machinery, into a printing hub. In 2012, all the industries suffered 15–20% production losses due to power shortages and escalating labor costs.
The major issues in the fireworks industry in Sivakasi are child labour and frequent accidents. In a blast in a factory in 1991, 39 people were killed and 65 others were injured. In July 2009, more than 40 people were killed in a fire accident in a firecracker unit. The police traced unregistered units and irregularities that led to the accident. In a fire accident in August 2011, seven people were killed and five were seriously injured. A similar fire accident and blast in a private unit in September 2012 killed 40 people and injured 38 others. The common reasons cited for the accidents are inadequate training of workers and supervisors involved in different stages of production and marketing of firecracker items. Other reasons are found to be overstocking of explosives, raw material and finished goods, and employment of workers in excess of the permitted strength.
Fireworks accidents and explosions
Sri Krishna Fireworks – Namaskarithanpatti – 20.07.2009
Anil Fireworks – Keezha Tiruthangal – 28.07.2009
Classic Fireworks – Meenampatti – 03.08.2009
Om Sakthi Fireworks – Mudhalaipatti – 05.09.2012
Meenakshi Fireworks – Kichanayakkanpatti – 15.05.2013
Chidambaram Fireworks – Vilampatty – 22.08.2013
Jonal Fireworks – Chokkalingapuram – 25.02.2016
Krishnasamy Fireworks – Maraneri – 09.06.2016
Whole Sale Shop, Raghavendra Agency – Sivakasi – 20.10.2016
Nagamalli Fireworks – Vetrilai Oorani – 11.03.2017
ARV Fireworks – Ramuthevanpatti – 06.04.2018
SKS Fireworks – Kakkivadanpatti – 06.04.2018
Factory belonging to Krishnasamy Industries – Kakkivadanpatti – 08.09.2018
References
Virudhunagar district
Sivakasi
Lists of fires
Lists of explosions
Lists of disasters in India
Tamil Nadu-related lists
List of fireworks accidents and incidents in Sivakasi
Chemistry
790
30,007,269
https://en.wikipedia.org/wiki/Pencil-beam%20scanning
Pencil beam scanning is the practice of steering a beam of radiation or charged particles across an object. It is often used in proton therapy to reduce unnecessary radiation exposure to surrounding non-cancerous cells.
Ionizing radiation
Photon (x-ray) treatments such as intensity-modulated radiation therapy (IMRT) use pencil beam scanning to precisely target a tumor. Photon pencil beam scans are defined as the crossing of two beams at a fine point.
Charged particles
Several charged-particle devices used at proton therapy cancer centers use pencil beam scanning. The newer proton therapy machines use pencil beam scanning technology. This technique is also called spot scanning. The Paul Scherrer Institute was the developer of spot scanning.
Intensity Modulated Proton Therapy
Varian's IMPT system uses protons delivered entirely under pencil-beam control, so that the beam intensity can also be controlled at this small scale. This can be done by going back and forth over a previously irradiated area during the same radiation session.
See also
Pencil (mathematics)
Pencil (optics)
Radiation treatment planning
mean free path
Monte Carlo method for photon transport
Hybrid theory for photon transport in tissue
Diffusion theory
Monte Carlo method
Varian Medical Systems
References
Medical physics
Radiobiology
Radiation therapy
Pencil-beam scanning
Physics,Chemistry,Biology
232
5,497,456
https://en.wikipedia.org/wiki/Kinetic%20resolution
In organic chemistry, kinetic resolution is a means of differentiating two enantiomers in a racemic mixture. In kinetic resolution, two enantiomers react with different reaction rates in a chemical reaction with a chiral catalyst or reagent, resulting in an enantioenriched sample of the less reactive enantiomer. As opposed to chiral resolution, kinetic resolution does not rely on different physical properties of diastereomeric products, but rather on the different chemical properties of the racemic starting materials. The enantiomeric excess (ee) of the unreacted starting material continually rises as more product is formed, approaching 100% just before full completion of the reaction. Kinetic resolution relies upon differences in reactivity between enantiomers or enantiomeric complexes.
Kinetic resolution can be used for the preparation of chiral molecules in organic synthesis. Kinetic resolution reactions utilizing purely synthetic reagents and catalysts are much less common than the use of enzymatic kinetic resolution in application towards organic synthesis, although a number of useful synthetic techniques have been developed in the past 30 years.
History
The first reported kinetic resolution was achieved by Louis Pasteur. After reacting aqueous racemic ammonium tartrate with a mold from Penicillium glaucum, he reisolated the remaining tartrate and found it was levorotatory. The chiral microorganisms present in the mold catalyzed the metabolization of (R,R)-tartrate selectively, leaving an excess of (S,S)-tartrate.
Kinetic resolution by synthetic means was first reported by Marckwald and McKenzie in 1899 in the esterification of racemic mandelic acid with optically active (−)-menthol. With an excess of the racemic acid present, they observed the formation of the ester derived from (+)-mandelic acid to be quicker than the formation of the ester from (−)-mandelic acid. The unreacted acid was observed to have a slight excess of (−)-mandelic acid, and the ester was later shown to yield (+)-mandelic acid upon saponification. The importance of this observation was that, in theory, if a half equivalent of (−)-menthol had been used, a highly enantioenriched sample of (−)-mandelic acid could have been prepared. This observation led to the successful kinetic resolution of other chiral acids, the beginning of the use of kinetic resolution in organic chemistry.
Theory
Kinetic resolution is a possible method for irreversibly differentiating a pair of enantiomers due to (potentially) different activation energies. While both enantiomers are at the same Gibbs free energy level by definition, and the products of the reaction with both enantiomers are also at equal levels, the activation energy ΔG‡, or transition state energy, can differ. In the image below, the R enantiomer has a lower ΔG‡ and would thus react faster than the S enantiomer. The ideal kinetic resolution is that in which only one enantiomer reacts, i.e. kR >> kS. The selectivity (s) of a kinetic resolution is related to the rate constants of the reaction of the R and S enantiomers, kR and kS respectively, by s = kR/kS, for kR > kS. This selectivity can also be referred to as the relative rates of reaction. It can be written in terms of ΔΔG‡, the free energy difference between the high- and low-energy transition states: s = kR/kS = e^(ΔΔG‡/RT). The selectivity can also be expressed in terms of the ee of the recovered starting material and the conversion (c), if first-order kinetics (in substrate) are assumed.
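The explicit form of this relationship (Kagan's equation) is derived below. As a quick numerical sketch of how it is used in practice (a Python illustration added here, not part of the original text), the selectivity factor can be back-calculated from a measured conversion and the ee of the recovered starting material; the sample values are the 99% ee at 55% conversion reported for 1-phenylethanol in the Fu acylation work described later in this article:
import math

def selectivity_factor(c, ee_sm):
    # Kagan's equation for a simple kinetic resolution, assuming
    # first-order kinetics in substrate: s = kR/kS computed from the
    # fractional conversion c and the ee (as a fraction) of the
    # recovered, slower-reacting starting material.
    return math.log((1 - c) * (1 - ee_sm)) / math.log((1 - c) * (1 + ee_sm))

# 99% ee of unreacted alcohol at 55% conversion corresponds to s of about 49
print(selectivity_factor(0.55, 0.99))  # ~49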
If it is assumed that the S enantiomer of the starting material racemate will be recovered in excess, it is possible to express the concentrations (mole fractions) of the S and R enantiomers as
[S] = (1 - c)(1 + ee)/2 and [R] = (1 - c)(1 - ee)/2,
where ee is the ee of the starting material. Note that for c = 0, ee = 0 and [S] = [S]0 = 1/2 and [R] = [R]0 = 1/2, where these signify the initial concentrations of the enantiomers. Then, for stoichiometric chiral resolving agent B*,
-d[S]/dt = kS[S][B*] and -d[R]/dt = kR[R][B*].
Note that, if the resolving agent is stoichiometric and achiral, with a chiral catalyst, the [B*] term does not appear. Regardless, integrating the expression for S and the similar expression for R, we can express s as
s = kR/kS = ln([R]/[R]0) / ln([S]/[S]0) = ln[(1 - c)(1 - ee)] / ln[(1 - c)(1 + ee)].
If we wish to express this in terms of the enantiomeric excess of the product, ee″, we must make use of the fact that, for products R′ and S′ from R and S, respectively,
[R] + [R′] = [R]0 = 1/2 and [S] + [S′] = [S]0 = 1/2, with [R′] + [S′] = c and ee″ = ([R′] - [S′])/([R′] + [S′]).
From here, we see that
(1 - c)·ee = c·ee″,
which gives us
c = ee/(ee + ee″),
which, when we plug into our expression for s derived above, yields
s = ln[ee″(1 - ee)/(ee + ee″)] / ln[ee″(1 + ee)/(ee + ee″)].
The conversion (c) and selectivity factor (s) can thus be expressed in terms of the starting material and product enantiomeric excesses (ee and ee″, respectively) only:
c = ee/(ee + ee″) and s = ln[ee″(1 - ee)/(ee + ee″)] / ln[ee″(1 + ee)/(ee + ee″)].
Additionally, the expressions for c and ee can be parametrized to give explicit expressions for c and ee in terms of t (absorbing any constant [B*] into the rate constants). First, solving explicitly for [S] and [R] as functions of t yields
[S] = [S]0·e^(-kS·t) and [R] = [R]0·e^(-kR·t),
which, plugged into the expressions for ee and c, gives
ee = (e^(-kS·t) - e^(-kR·t)) / (e^(-kS·t) + e^(-kR·t)) and c = 1 - (e^(-kS·t) + e^(-kR·t))/2.
Without loss of generality, we can allow kS = 1, which gives kR = s, simplifying the above expressions. Similarly, an expression for ee″ as a function of t can be derived:
ee″ = (e^(-kS·t) - e^(-kR·t)) / (2 - e^(-kS·t) - e^(-kR·t)).
Thus, plots of ee and ee″ vs. c can be generated with t as the parameter and different values of s generating different curves, as shown below.
As can be seen, high enantiomeric excesses are much more readily attainable for the unreacted starting material. There is however a tradeoff between ee and conversion, with higher ee (of the recovered substrate) obtained at higher conversion, and therefore lower isolated yield. For example, with a selectivity factor of just 10, 99% ee is possible with approximately 70% conversion, resulting in a yield of about 30%. In contrast, in order to get good ee's and yield of the product, very high selectivity factors are necessary. For example, with a selectivity factor of 10, ee″ above approximately 80% is unattainable, and significantly lower ee″ values are obtained for more realistic conversions. A selectivity in excess of 50 is required for highly enantioenriched product in reasonable yield.
This is a simplified version of the true kinetics of kinetic resolution. The assumption that the reaction is first order in substrate is limiting, and it is possible that the dependence on substrate may depend on conversion, resulting in a much more complicated picture. As a result, a common approach is to measure and report only yields and ee's, as the formula for krel only applies to an idealized kinetic resolution. It is simple to consider an initial substrate-catalyst complex forming, which could negate the first-order kinetics. However, the general conclusions drawn are still helpful to understand the effect of selectivity and conversion on ee.
Practicality
With the advent of asymmetric catalysis, it is necessary to consider the practicality of utilizing kinetic resolution for the preparation of enantiopure products. Even for a product which can be attained through an asymmetric catalytic or auxiliary-based route, the racemate may be significantly less expensive than the enantiopure material, resulting in heightened cost-effectiveness even with the inherent "loss" of 50% of the material.
The following have been proposed as necessary conditions for a practical kinetic resolution:
inexpensive racemate and catalyst
no appropriate enantioselective, chiral pool, or classical resolution route is possible
resolution proceeds selectively at low catalyst loadings
separation of starting material and product is easy
To date, a number of catalysts for kinetic resolution have been developed that satisfy most, if not all, of the above criteria, making them highly practical for use in organic synthesis. The following sections will discuss a number of key examples.
Reactions utilizing synthetic reagents
Acylation reactions
Gregory Fu and colleagues have developed a methodology utilizing a chiral DMAP analogue to achieve excellent kinetic resolution of secondary alcohols. Initial studies utilizing ether as a solvent, low catalyst loadings (2 mol %), acetic anhydride as the acylating agent, and triethylamine at room temperature gave selectivities ranging from 14-52, corresponding to ee's of the recovered alcohol product as high as 99.2%. However, solvent screening showed that the use of tert-amyl alcohol increased both the reactivity and the selectivity. With the benchmark substrate 1-phenylethanol, this corresponded to 99% ee of the unreacted alcohol at 55% conversion when run at 0 °C. This system proved to be adept at the resolution of a number of arylalkylcarbinols, with selectivities as high as 95 and catalyst loadings as low as 1%, as shown below utilizing the (-)-enantiomer of the catalyst. This resulted in highly enantioenriched alcohols at very low conversions, giving excellent yields as well. In addition, the high selectivities result in highly enantioenriched acylated products, with a 90% ee sample of acylated alcohol for o-tolylmethylcarbinol, with s=71. In addition, Fu reported the first highly selective acylation of racemic diols (as well as desymmetrization of meso diols). With a low catalyst loading of 1%, enantioenriched diol was recovered in 98% ee and 43% yield, with the diacetate in 39% yield and 99% ee. The remainder of the material was recovered as a mixture of monoacetate.
The planar-chiral DMAP catalyst was also shown to be effective at kinetically resolving propargylic alcohols. In this case, though, selectivities were found to be highest without any base present. When run with 1 mol% of the catalyst at 0 °C, selectivities as high as 20 could be attained. The limitations of this method include the requirement of an unsaturated functionality, such as a carbonyl or alkene, at the remote alkynyl position. Alcohols resolved using the (+)-enantiomer of the DMAP catalyst are shown below.
Fu also showed his chiral DMAP catalyst's ability to resolve allylic alcohols. Effective selectivity was dependent upon the presence of either a geminal or cis substituent relative to the alcohol-bearing group, with the notable exception of a trans-phenyl alcohol, which exhibited the highest selectivity. Using 1-2.5 mol% of the (+)-enantiomer of the DMAP catalyst, the alcohols shown below were resolved in the presence of triethylamine.
While Fu's DMAP analogue catalyst worked exceptionally well for the kinetic resolution of racemic alcohols, it was not successful for the kinetic resolution of amines. A similar catalyst, PPY*, was developed which, used with a novel acylating agent, allowed for the successful kinetic resolution of amines by acylation. With 10 mol% (-)-PPY* in chloroform at –50 °C, good to very good selectivities were observed in the acylation of amines, shown below.
A similar protocol was developed for the kinetic resolution of indolines.
Epoxidations and dihydroxylations
The Sharpless epoxidation, developed by K. Barry Sharpless in 1980, has been utilized for the kinetic resolution of a racemic mixture of allylic alcohols. While extremely effective at resolving a number of allylic alcohols, this method has a number of drawbacks. Reaction times can run as long as 6 days, and the catalyst is not recyclable. However, the Sharpless asymmetric epoxidation kinetic resolution remains one of the most effective synthetic kinetic resolutions to date. A number of different tartrates can be used for the catalyst; a representative scheme is shown below utilizing diisopropyl tartrate. This method has seen general use on a number of secondary allylic alcohols.
Sharpless asymmetric dihydroxylation has also seen use as a method for kinetic resolution. This method is not widely used, however, since the same resolution can be accomplished in different manners that are more economical. Additionally, the Shi epoxidation has been shown to effect kinetic resolution of a limited selection of olefins. This method is also not widely used, but is of mechanistic interest.
Epoxide openings
While enantioselective epoxidations have been successfully achieved utilizing the Sharpless epoxidation, Shi epoxidation, and Jacobsen epoxidation, none of these methods allows for the efficient asymmetric synthesis of terminal epoxides, which are key chiral building blocks. Due to the inexpensiveness of most racemic terminal epoxides and their general inability to be subjected to classical resolution, an effective kinetic resolution of terminal epoxides would serve as a highly important synthetic methodology. In 1996, Jacobsen and coworkers developed a methodology for the kinetic resolution of epoxides via nucleophilic ring-opening with attack by an azide anion. The (R,R) catalyst is shown. The catalyst could effectively, with loadings as low as 0.5 mol%, open the epoxide at the terminal position enantioselectively, yielding enantioenriched epoxide starting material and 1,2-azido alcohols. Yields are nearly quantitative and ee's were excellent (≥95% in nearly all cases). The 1,2-azido alcohols can be hydrogenated to give 1,2-amino alcohols, as shown below.
In 1997, Jacobsen's group published a methodology which improved upon their earlier work, allowing for the use of water as the nucleophile in the epoxide opening. Utilizing a nearly identical catalyst, ee's in excess of 98% for both the recovered starting material epoxide and the 1,2-diol product were observed. In the example below, hydrolytic kinetic resolution (HKR) was carried out on a 58 gram scale, resulting in 26 g (44%) of the enantioenriched epoxide in >99% ee and 38 g (50%) of the diol in 98% ee. A multitude of other substrates were examined, with yields of the recovered epoxide ranging from 36-48% for >99% ee.
Jacobsen hydrolytic kinetic resolution can be used in tandem with the Jacobsen epoxidation to yield enantiopure epoxides from certain olefins, as shown below. The first epoxidation yields a slightly enantioenriched epoxide, and subsequent kinetic resolution yields essentially a single enantiomer. The advantage of this approach is the ability to reduce the amount of hydrolytic cleavage necessary to achieve high enantioselectivity, allowing for overall yields up to approximately 90%, based on the olefin.
Ultimately, the Jacobsen epoxide opening kinetic resolutions produce high enantiomeric purity in the epoxide and the product, in solvent-free or low-solvent conditions, and have been applicable on a large scale. The Jacobsen methodology for HKR in particular is extremely attractive since it can be carried out on a multiton scale and utilizes water as the nucleophile, resulting in extremely cost-effective industrial processes. Despite impressive achievements, HKR has generally been applied to the resolution of simple terminal epoxides with one stereocentre. Quite recently, D. A. Devalankar et al. reported an elegant protocol involving a two-stereocentered Co-catalyzed HKR of racemic terminal epoxides bearing adjacent C–C binding substituents.
Oxidations
Ryōji Noyori and colleagues have developed a methodology for the kinetic resolution of benzylic and allylic secondary alcohols via transfer hydrogenation. The ruthenium complex catalyzes oxidation of the more reactive enantiomer by acetone, yielding an unreacted enantiopure alcohol, an oxidized ketone, and isopropanol. In the example illustrated below, exposure of 1-phenylethanol to the (S,S) enantiomer of the catalyst in the presence of acetone results in a 51% yield of 94% ee (R)-1-phenylethanol, along with 49% acetophenone and isopropanol as a byproduct.
This methodology is essentially the reverse of Noyori's asymmetric transfer hydrogenation of ketones, which yields enantioenriched alcohols via reduction. This limits the attractiveness of the kinetic resolution method, since there is a similar method to achieve the same products without the loss of half the material. Thus, the kinetic resolution would only be carried out in an instance for which the racemic alcohol was at most half the price of the ketone or significantly easier to access.
In addition, Uemura and Hidai have developed a ruthenium catalyst for the kinetic resolution of benzylic alcohols by oxidation, yielding highly enantioenriched alcohols in good yields. The complex can, like Noyori's catalyst, effect transfer hydrogenation between a ketone and isopropanol to give an enantioenriched alcohol, as well as effect kinetic resolution of a racemic alcohol, giving enantiopure alcohol (>99% ee) and oxidized ketone, with acetone as the byproduct. It is highly effective at reducing ketones enantioselectively, giving most benzylic alcohols in >99% ee, and can resolve a number of racemic benzylic alcohols to give high yields (up to 49%) of single enantiomers, as shown below. This method has the same disadvantages as the Noyori kinetic resolution, namely that the alcohols can also be accessed via enantioselective reduction of the ketones. Additionally, only one enantiomer of the catalyst has been reported.
Hydrogenation
Noyori has also demonstrated the kinetic resolution of allylic alcohols by asymmetric hydrogenation of the olefin. Utilizing the Ru[BINAP] complex, selective hydrogenation can give high ee's of the unsaturated alcohol in addition to the hydrogenated alcohol, as shown below. Thus, a second hydrogenation of the enantioenriched allylic alcohol remaining will give enantiomerically pure samples of both enantiomers of the saturated alcohol. Noyori has resolved a number of allylic alcohols with good to excellent yields and good to excellent ee's (up to >99%).
Ring closing metathesis
Hoveyda and Schrock have developed a catalyst for ring-closing metathesis kinetic resolution of dienyl allylic alcohols.
The molybdenum alkylidene catalyst selectively promotes ring-closing metathesis of one enantiomer, resulting in an enantiopure alcohol and an enantiopure closed ring, as shown below. The catalyst is most effective at resolving 1,6-dienes. However, slight structural changes to the substrate, such as increasing the inter-alkene distance to 1,7, can sometimes necessitate the use of a different catalyst, reducing the efficacy of this method.
Enzymatic reactions
Acylations
As with synthetic kinetic resolution procedures, enzymatic acylation kinetic resolutions have seen the broadest application in a synthetic context. Especially important has been the use of enzymatic kinetic resolution to efficiently and cheaply prepare amino acids. On a commercial scale, Degussa's methodology employing acylases is capable of resolving numerous natural and unnatural amino acids. The racemic mixtures can be prepared via Strecker synthesis, and the use of either porcine kidney acylase (for straight-chain substrates) or an enzyme from the mold Aspergillus oryzae (for branched side chain substrates) can effectively yield enantioenriched amino acids in high (85-90%) yields. The unreacted starting material can be racemized in situ, thus making this a dynamic kinetic resolution.
In addition, lipases are used extensively for kinetic resolution in both academic and industrial settings. Lipases have been used to resolve primary alcohols, secondary alcohols, a limited number of tertiary alcohols, carboxylic acids, diols, and even chiral allenes. Lipase from Pseudomonas cepacia (PSL) is the most widely used in the resolution of primary alcohols and has been used with vinyl acetate as an acylating agent to kinetically resolve the primary alcohols shown below.
For the resolution of secondary alcohols, Pseudomonas cepacia lipase (PSL-C) has been employed effectively to generate excellent ee's of the (R)-enantiomer of the alcohol. The use of isopropenyl acetate as the acylating agent results in acetone as the byproduct, which is effectively removed from the reaction using molecular sieves.
Oxidations and reductions
Baker's yeast (BY) has been utilized for the kinetic resolution of α-stereogenic carbonyl compounds. The yeast selectively reduces one enantiomer, yielding a highly enantioenriched alcohol and ketone, as shown below. Baker's yeast has also been used in the kinetic resolution of secondary benzylic alcohols by oxidation. While excellent ee's of the recovered alcohol have been reported, they typically require >60% conversion, resulting in diminished yields. Baker's yeast has also been used in kinetic resolution via the reduction of β-ketoesters. However, given the success of Noyori's resolution of the same substrates, detailed later in this article, this has not seen much use.
Dynamic kinetic resolution
Dynamic kinetic resolution (DKR) occurs when the starting material racemate is able to epimerize easily, resulting in an essentially racemic starting material mix at all points during the reaction. Then, the product of the enantiomer with the lower barrier to activation can form in, theoretically, up to 100% yield. This is in contrast to standard kinetic resolution, which necessarily has a maximum yield of 50%. For this reason, dynamic kinetic resolution has extremely practical applications to organic synthesis. The observed dynamics are based on the Curtin-Hammett principle.
The barrier to reaction of either enantiomer is necessarily higher than the barrier to epimerization, resulting in a kinetic well containing the racemate. This is equivalent to writing, for kR > kS, that krac ≫ kR: racemization must be fast relative to the consumption of even the faster-reacting enantiomer. A number of excellent reviews have been published, most recently in 2008, detailing the theory and practical applications of DKR. Noyori asymmetric hydrogenation The Noyori asymmetric hydrogenation of ketones is an excellent example of dynamic kinetic resolution at work. The enantiomers of the β-ketoester can undergo epimerization, and the choice of chiral catalyst, typically of the form Ru[(R)-BINAP]X2, where X is a halogen, leads to one of the enantiomers reacting preferentially faster. The relative free energy for a representative reaction is shown below. As can be seen, the epimerization intermediate is lower in free energy than the transition states for hydrogenation, resulting in rapid racemization and high yields of a single enantiomer of the product. The enantiomers interconvert through their common enol, which is the energetic minimum located between the enantiomers. The reaction shown yields a 93% ee sample of the anti product. Solvent choice appears to have a major influence on the diastereoselectivity, as dichloromethane and methanol both show effectiveness for certain substrates. Noyori and others have also developed newer catalysts which have improved on both ee and diastereomeric ratio (dr). Genêt and coworkers developed SYNPHOS, a BINAP analogue whose ruthenium complexes perform highly selective asymmetric hydrogenations. Enantiopure Ru[SYNPHOS]Br2 was shown to selectively hydrogenate racemic α-amino-β-ketoesters to enantiopure amino alcohols, as shown below utilizing (R)-SYNPHOS. 1,2-syn amino alcohols were prepared from benzoyl-protected amino compounds, whereas anti products were prepared from hydrochloride salts of the amine. Fu acylation modification Recently, Gregory Fu and colleagues reported a modification of their earlier kinetic resolution work to produce an effective dynamic kinetic resolution. Using the ruthenium racemization catalyst shown to the right, and his planar chiral DMAP catalyst, Fu has demonstrated the dynamic kinetic resolution of secondary alcohols in up to 99% yield and 93% ee, as shown below. Work is ongoing to further develop the applications of the widely used DMAP catalyst to dynamic kinetic resolution. Enzymatic dynamic kinetic resolutions A number of enzymatic dynamic kinetic resolutions have been reported. A prime example using PSL effectively resolves racemic acyloins in the presence of triethylamine and vinyl acetate as the acylating agent. As shown below, the product was isolated in 75% yield and 97% ee. Without the presence of the base, regular kinetic resolution occurred, resulting in 45% yield of >99% ee acylated product and 53% of the starting material in 92% ee. Another excellent, though not high-yielding, example is the kinetic resolution of (±)-8-amino-5,6,7,8-tetrahydroquinoline. When exposed to Candida antarctica lipase B (CALB) in toluene and ethyl acetate for 3–24 hours, normal kinetic resolution occurs, resulting in 45% yield of 97% ee starting material and 45% yield of >97% ee acylated amine product. However, when the reaction is allowed to stir for 40–48 hours, racemic starting material and >60% of >95% ee acylated product are recovered. Here, the unreacted starting material racemizes in situ via a dimeric enamine, resulting in a recovery of greater than 50% yield of the enantiopure acylated amine product.
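The kinetic contrast between simple and dynamic kinetic resolution can be made concrete with a small numerical sketch. The following Python script is an illustrative toy model only (the rate constants are arbitrary, and it is not drawn from any of the studies cited above): it integrates first-order rate laws for the two enantiomers R and S, with krac controlling their interconversion, and shows that fast racemization lifts the product ee at full conversion toward the Curtin-Hammett limit of (kR − kS)/(kR + kS), whereas without racemization high ee is only available at partial conversion.

```python
def resolve(k_R, k_S, k_rac, dt=1e-3, t_end=400.0):
    """Integrate a toy (dynamic) kinetic resolution by forward Euler.

    R, S   : concentrations of the substrate enantiomers (racemate at t=0)
    PR, PS : products formed from R and S
    k_rac  : R <-> S interconversion rate; 0 gives simple kinetic resolution
    """
    R = S = 0.5
    PR = PS = 0.0
    ee_at_half = None
    for _ in range(int(t_end / dt)):
        exch = k_rac * (S - R)          # net racemization flux toward R
        PR += k_R * R * dt
        PS += k_S * S * dt
        R += (-k_R * R + exch) * dt
        S += (-k_S * S - exch) * dt
        if ee_at_half is None and PR + PS >= 0.5:
            ee_at_half = (PR - PS) / (PR + PS)
    ee_final = (PR - PS) / (PR + PS)
    return ee_at_half, PR + PS, ee_final

# selectivity factor s = k_R / k_S = 50; vary the racemization rate
for k_rac in (0.0, 0.1, 10.0):
    ee_half, conv, ee_end = resolve(k_R=1.0, k_S=0.02, k_rac=k_rac)
    print(f"k_rac={k_rac:5.1f}: ee at 50% conv = {ee_half:.1%}, "
          f"final conversion = {conv:.1%}, final product ee = {ee_end:.1%}")
```

With krac = 0 the script reproduces the familiar tradeoff of simple kinetic resolution: the product ee at 50% conversion is high but collapses as conversion approaches 100%, while with fast racemization the ee stays near 96% (for kR/kS = 50) even at full conversion.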
Chemoenzymatic dynamic kinetic resolutions There have been a number of reported procedures which take advantage of a chemical reagent or catalyst to racemize the starting material and an enzyme to react selectively with one enantiomer; these are called chemoenzymatic dynamic kinetic resolutions. PSL-C was utilized along with a ruthenium catalyst (for racemization) to produce enantiopure (>95% ee) δ-hydroxylactones. More recently, secondary alcohols have been resolved by Bäckvall with yields up to 99% and ee's up to >99% utilizing CALB and a ruthenium racemization complex. A second type of chemoenzymatic dynamic kinetic resolution involves formation of a π-allyl complex from an allylic acetate with palladium. Here, racemization occurs with loss of the acetate, forming a cationic complex with the transition metal center, as shown below. Palladium has been shown to facilitate this reaction, while ruthenium has been shown to effect a similar reaction, also shown below. Parallel kinetic resolution In parallel kinetic resolution (PKR), a racemic mixture reacts to form two non-enantiomeric products, often through completely different reaction pathways. With PKR, there is no tradeoff between conversion and ee, as the formed products are not enantiomers. One strategy for PKR is to remove the less reactive enantiomer (towards the desired chiral catalyst) from the reaction mixture by subjecting it to a second set of reaction conditions that preferentially react with it, ideally with an approximately equal reaction rate. Thus, both enantiomers are consumed in different pathways at equal rates. PKR experiments can be stereodivergent, regiodivergent, or structurally divergent. One of the most highly efficient PKRs reported to date was accomplished by Yoshito Kishi in 1998; CBS reduction of a racemic steroidal ketone resulted in stereoselective reduction, producing two diastereomers of >99% ee, as shown below. PKR has also been accomplished with the use of enzyme catalysts. Using the fungus Mortierella isabellina NRRL 1757, reduction of racemic β-ketonitriles affords two diastereomers, which can be separated and re-oxidized to give highly enantiopure β-ketonitriles. Truly general, synthetically useful parallel kinetic resolutions have yet to be discovered, however. A number of procedures have been discovered that give acceptable ee's and yields, but there are very few examples which give highly selective parallel kinetic resolution and not simply somewhat selective reactions. For example, Fu's parallel kinetic resolution of 4-alkynals yields highly enantioenriched cyclobutanone in low yield and slightly enantioenriched cyclopentenone, as shown below. In theory, parallel kinetic resolution can give the highest ee's of products, since only one enantiomer gives each desired product. For example, for two complementary reactions both with s = 49, 100% conversion would give products in 50% yield and 96% ee. These same values would require s = 200 for a simple kinetic resolution. As such, the promise of PKR continues to attract much attention. The Kishi CBS reduction remains one of the few examples to fulfill this promise. See also Chiral auxiliaries Chiral pool synthesis Chiral resolution Enantioselective synthesis References Further reading Dynamic Kinetic Resolutions. A MacMillan Group Meeting. Jake Wiener. Dynamic Kinetic Resolution: A Powerful Approach to Asymmetric Synthesis. Erik Alexanian, Supergroup Meeting, March 30, 2005. Dynamic Kinetic Resolution: Practical Applications in Synthesis.
Valerie Keller, 3rd-Year Seminar, November 1, 2001. Kinetic Resolution. David Ebner, Stoltz Group Literature Seminar, June 4, 2003. Kinetic Resolutions. UT Southwestern Presentation. Stereochemistry
Kinetic resolution
Physics,Chemistry
6,390
24,253,774
https://en.wikipedia.org/wiki/Rossmo%27s%20formula
Rossmo's formula is a geographic profiling formula to predict where a serial criminal lives. It relies upon the tendency of criminals to not commit crimes near places where they might be recognized, but also to not travel excessively long distances. The formula was developed and patented in 1996 by criminologist Kim Rossmo and integrated into a specialized crime analysis software product called Rigel. The Rigel product is developed by the software company Environmental Criminology Research Inc. (ECRI), which Rossmo co-founded. Formula Imagine a map with an overlaying grid of little squares named sectors. If this map is a raster image file on a computer, these sectors are pixels. A sector is the square on row i and column j, located at coordinates $(X_i, Y_j)$. The following function gives the probability of the serial criminal residing within a specific sector (or point) $(X_i, Y_j)$: $p_{i,j} = k \sum_{n=1}^{T} \left[ \frac{\phi_{ij}}{\left(|X_i - x_n| + |Y_j - y_n|\right)^{f}} + \frac{(1 - \phi_{ij})\, B^{g-f}}{\left(2B - |X_i - x_n| - |Y_j - y_n|\right)^{g}} \right]$, where the summation is over past crimes located at coordinates $(x_n, y_n)$, $n = 1, \ldots, T$, with $T$ the number of past crimes. Furthermore, $\phi_{ij}$ is an indicator function that returns 0 when a point $(X_i, Y_j)$ is an element of the buffer zone B (the neighborhood of a criminal residence that is swept out by a radius of B from its center), and 1 otherwise. The indicator allows the computation to switch between the two terms. If a crime occurs within the buffer zone, then $\phi_{ij} = 0$ and, thus, the first term does not contribute to the overall result. This is a prerequisite for defining the first term in the case when the distance between a point (or pixel) and a crime site becomes equal to zero. When $d_n > B$, the first term is used to calculate $p_{i,j}$. Here $d_n = |X_i - x_n| + |Y_j - y_n|$ is the Manhattan distance between a point $(X_i, Y_j)$ and the n-th crime site $(x_n, y_n)$. Finally, $k$ is an appropriately selected normalization constant to ensure that the probabilities sum to 1. Alternative Implementation The formula above is not well suited for image processing because of its asymptotic behavior near the coordinates of a crime site. Alternatively, Rossmo's function may use other distance decay functions in place of the power-law terms. One method would be to use a probability distribution similar to the Gaussian distribution as a distance decay function, for instance a term proportional to $\exp\left(-(d_n - B)^2 / (2\sigma^2)\right)$ (one plausible choice), which peaks at the buffer radius and falls off smoothly on either side. If implementing on a computer, the maximum value of $p$ is matched to the maximum value of the set of colors being used to create the n-by-m "jeopardy surface" matrix J. The elements of the matrix J may represent the pixel values of an image. Explanation The summation in the formula consists of two terms. The first term describes the idea of decreasing probability with increasing distance. The second term deals with the concept of a buffer zone. The indicator $\phi_{ij}$ is used to put more weight on one of the two ideas. The variable $B$ describes the radius of the buffer zone. The exponents $f$ and $g$ are empirically determined. The main idea of the formula is that the probability of crimes first increases as one moves through the buffer zone away from the hotzone, but decreases afterwards. The variables $B$, $f$, and $g$ can be chosen so that the formula works best on data of past crimes. The distance is calculated with the Manhattan distance formula. Applications The formula has been applied to fields other than forensics. Because of the buffer zone idea, the formula works well for studies concerning predatory animals such as white sharks. This formula and the math behind it were used in crime detection in the pilot episode of the TV series Numb3rs and in the 100th episode of the same show, called "Disturbed". References Offender profiling Criminology Crime mapping Spatial analysis Forensic techniques
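A direct implementation of the formula is straightforward. The following Python sketch computes the jeopardy surface over a small grid; the buffer radius, the exponents, and the crime coordinates are arbitrary illustrative values, not parameters from Rossmo's patent or the Rigel product.

```python
import numpy as np

def rossmo(rows, cols, crimes, B=5, f=1.2, g=1.2):
    """Compute Rossmo's probability surface on a rows x cols grid.

    crimes  : list of (row, col) grid coordinates of known crime sites
    B, f, g : buffer radius and decay exponents (placeholder values)
    """
    p = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            for (cx, cy) in crimes:
                d = abs(i - cx) + abs(j - cy)   # Manhattan distance
                if d > B:                       # outside the buffer zone
                    p[i, j] += 1.0 / d ** f
                else:                           # inside the buffer zone
                    p[i, j] += B ** (g - f) / (2 * B - d) ** g
    return p / p.sum()                          # normalize so the sum is 1

surface = rossmo(50, 50, crimes=[(10, 12), (14, 30), (25, 22), (31, 15)])
print("most probable sector:", np.unravel_index(surface.argmax(), surface.shape))
```

The sector with the highest value is the model's best guess for the offender's anchor point; note how the buffer-zone branch keeps the expression finite even when a grid cell coincides with a crime site (d = 0).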
Rossmo's formula
Physics
689
34,198,843
https://en.wikipedia.org/wiki/SnapTag
SnapTag, invented by SpyderLynk, is a 2D mobile barcode alternative similar to a QR code, but one that uses an icon or company logo and a code ring rather than a square pattern of black dots. Similar to a QR code, SnapTags can be used to take consumers to a brand's website, but they can also facilitate mobile purchases, coupon downloads, free sample requests, video views, promotional entries, Facebook Likes, Pinterest Pins, Twitter Follows, Posts, and Tweets. SnapTags offer back-end data mining capabilities. Use in mobile operating systems SnapTags can be used in Google's mobile Android operating system and on iOS devices (iPhone/iPod/iPad) using the SnapTag Reader app or third-party apps that have integrated the SnapTag Reader SDK. SnapTags can also be used by standard camera phones by taking a picture of the SnapTag and texting it to the designated short code or email address. References Barcodes Encodings Automatic identification and data capture
SnapTag
Technology
209
76,390,321
https://en.wikipedia.org/wiki/Erbium%20iodate
Erbium iodate is an inorganic compound with the chemical formula Er(IO3)3. Preparation Erbium iodate can be obtained by reacting erbium periodate and periodic acid in water at 160 °C. The reaction produces both anhydrous and dihydrate crystals. Properties Erbium iodate dihydrate is stable below 266 °C, loses its two molecules of water at 289 °C, and decomposes at 589 °C, generating iodine and releasing oxygen. References Erbium compounds Iodates
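Assuming the solid residue of the final decomposition step is erbium(III) oxide (the text above specifies only the gaseous products, so this is an inference), a balanced overall equation for the decomposition would be 4 Er(IO3)3 → 2 Er2O3 + 6 I2 + 15 O2, preceded by the dehydration step Er(IO3)3·2H2O → Er(IO3)3 + 2 H2O.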
Erbium iodate
Chemistry
110
9,728,495
https://en.wikipedia.org/wiki/Popcorn%20ceiling
A popcorn ceiling, also known as a stipple ceiling or acoustic ceiling, is a ceiling with one of a variety of spray-on or paint-on treatments. The bumpy surface is created by tiny particles of vermiculite or polystyrene, which gives the ceiling sound-deadening properties. Mixtures are available in fine, medium, and coarse grades. In many parts of the world, it was the standard for bedroom and residential hallway ceilings for its bright, white appearance, ability to hide imperfections, and acoustic characteristics. In comparison, kitchen and living room ceilings would normally be finished in smoother skip-trowel or orange peel texture for their higher durability and ease of cleaning. Popcorn ceilings, in pre-1970s and early formulations, often contained white asbestos fibers. When asbestos was banned in ceiling treatments by the Clean Air Act in the United States, popcorn ceilings fell out of favor in much of the country. However, in order to minimize economic hardship to suppliers and installers, existing inventories of asbestos-bearing texturing materials were exempt from the ban, so it is possible to find asbestos in popcorn ceilings that were applied through the 1980s. After the ban, popcorn ceiling materials were created using a paper-based or Styrofoam product to create the texture, rather than asbestos. Textured ceilings remain common in residential construction in the United States. Since the mid-2000s, the popularity of textured popcorn ceilings has diminished significantly across North America. A trend toward more modern, clean-lined design features has influenced home improvement professionals to provide popcorn ceiling removal services. In comparison to smooth ceilings, textured ceilings are generally less reflective of natural light, may harbor more dust and allergens, and may be more difficult to patch and touch up after drywall repair. See also Artex References Ceilings
Popcorn ceiling
Engineering
375
48,640,009
https://en.wikipedia.org/wiki/Complete%20active%20space%20perturbation%20theory
Complete active space perturbation theory (CASPTn) is a multireference electron correlation method for computational investigation of molecular systems, especially for those with heavy atoms such as transition metals, lanthanides, and actinides. It can be used, for instance, to describe electronic states of a system when single-reference methods and density functional theory cannot be used, and for heavy-atom systems for which quasi-relativistic approaches are not appropriate. Although perturbation methods such as CASPTn are successful in describing molecular systems, they still need a Hartree-Fock wavefunction to provide a valid starting point. The perturbation theories cannot reach convergence if the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) are degenerate. Therefore, the CASPTn method is usually used in conjunction with the multi-configurational self-consistent field method (MCSCF) to avoid near-degeneracy correlation effects. History Perturbation theory was introduced into quantum chemical applications in the early 1960s. Since then, the theory has seen widespread use through software such as Gaussian. The perturbation theory correlation method is used routinely by non-specialists because it achieves the property of size extensivity more easily than other correlation methods. In the early days of its use, applications of perturbation theory were based on nondegenerate many-body perturbation theory (MBPT). MBPT is a reasonable method for atomic and molecular systems whose zeroth-order electronic description can be represented by a single non-degenerate Slater determinant. The MBPT method would therefore exclude atomic and molecular states, especially excited states, which cannot be represented in zeroth order as single Slater determinants. Moreover, the perturbation expansion converges very slowly or not at all if the state is degenerate or nearly degenerate. Such degeneracy is often the case for atomic and molecular valence states. To counter these restrictions, there was an attempt to implement second-order perturbation theory in conjunction with complete active space self-consistent field (CASSCF) wave functions. At the time, it was rather difficult to compute the three- and four-particle density matrices which are needed for matrix elements involving internal and semi-internal excitations. The results were rather disappointing, with little or no improvement over the usual CASSCF results. Another attempt was made in 1990, in which the full interacting space was included in the first-order wave function while the zeroth-order Hamiltonian was constructed from a Fock-type one-electron operator. For cases which have no active orbitals, this Fock-type one-electron operator reduces to the Møller–Plesset Hartree-Fock (HF) operator. A diagonal Fock operator was also used to make a computer implementation simple and effective. References Electronic structure methods
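The two-step workflow described above, a CASSCF reference followed by a second-order perturbative correction, can be sketched with open-source code. CASPT2 itself is implemented in packages such as OpenMolcas rather than PySCF, so the sketch below uses PySCF's strongly contracted NEVPT2, a closely related multireference second-order perturbation method, purely to illustrate the structure of such a calculation; the molecule, basis set, and active space are arbitrary illustrative choices.

```python
from pyscf import gto, scf, mcscf, mrpt

# N2 near its equilibrium bond length: a standard multireference test case
mol = gto.M(atom="N 0 0 0; N 0 0 1.12", basis="cc-pvdz")

# Single-reference Hartree-Fock starting point
mf = scf.RHF(mol).run()

# CASSCF(6e, 6o): active space spanning the 2p-derived bonding and
# antibonding orbitals, removing the near-degeneracy problem at zeroth order
mc = mcscf.CASSCF(mf, ncas=6, nelecas=6).run()

# Second-order multireference perturbation correction on the CASSCF reference
e_corr = mrpt.NEVPT(mc).kernel()
print("CASSCF total energy (Hartree):", mc.e_tot)
print("NEVPT2 correlation correction (Hartree):", e_corr)
```

The same structure (define the molecule, converge a CASSCF wavefunction with a chosen active space, then apply the perturbative correction) carries over to an actual CASPT2 calculation in a package that implements it.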
Complete active space perturbation theory
Physics,Chemistry
620
1,236,765
https://en.wikipedia.org/wiki/Blutfahne
The Blutfahne (), or Blood Flag, is or was a Nazi Party swastika flag that was carried during the attempted coup d'état Beer Hall Putsch in Munich, Germany on 9 November 1923, during which it became soaked in the blood of one of the SA men who died. It subsequently became one of the most revered objects of the Nazi Party. It was used in ceremonies in which new flags for party organisations were "consecrated" by the Blood Flag when touched by it. Beer Hall Putsch The flag was that of the 5th SA Sturm, which was carried in the march towards the Feldherrnhalle. When the Munich police fired on the National Socialists (Nazis), the flagbearer Heinrich Trambauer was hit and dropped the flag. Andreas Bauriedl, an SA man marching alongside the flag, was killed and fell onto it, staining the flag with his blood. There were two stories about what happened to the flag in the aftermath of the Putsch: one was that the wounded Trambauer took the flag to a friend where he removed it from its staff before leaving with it hidden inside his jacket and later giving it to a man named Karl Eggers for safekeeping. The other story was that the flag was confiscated by the Munich authorities and was later returned to the Nazis via Eggers. In the mid-1930s, after a myth emerged that Bauriedl had been carrying the flag, an investigation by Nazi archivists concluded that Trambauer was the standard-bearer and that the flag had been concealed by an SA man, not taken by the police, though they had confiscated other flags which they later returned. Regardless of which story was the correct one, after Adolf Hitler was released from Landsberg Prison (having served nine months of a five-year prison sentence for his part in the putsch), Eggers gave the flag to him. Sacred Nazi symbol After Hitler received the flag, he had it fitted to a new staff and finial; just below the finial was a silver dedication sleeve which bore the names of the 16 dead participants of the putsch. Bauriedl was one of the 16 honorees. In addition, the flag was no longer attached to the staff by its original sewn-in sleeve, but by a red-white-black intertwined cord which ran through the sleeve instead. In 1926, at the second Nazi Party congress at Weimar, Hitler ceremonially bestowed the flag on Joseph Berchtold, then head of the SS. The flag was thereafter treated as a sacred object by the Nazi Party and carried by SS–Sturmbannführer Jakob Grimminger at various Nazi Party ceremonies. One of the most visible uses of the flag was when Hitler, at the Party's annual Nuremberg rallies, touched other Nazi banners with the Blutfahne, thereby "sanctifying" them. This was done in a special ceremony called the "flag consecration" (Fahnenweihe). When not in use, the Blutfahne was kept at the headquarters of the Nazi Party, the Brown House in Munich with an SS guard of honour. The flag had a small tear in it, believed to have been caused during the Putsch, that went unrepaired for a number of years. Disappearance The Blutfahne was last seen in public at the Volkssturm induction ceremony on 18 October 1944 (not, as frequently reported, at Gauleiter Adolf Wagner's funeral six months previously). This ceremony was conducted by Heinrich Himmler and attended by Wilhelm Keitel, Heinz Guderian, Hans Lammers, Martin Bormann, Karl Fiehler, Wilhelm Schepmann, and Erwin Kraus. After this last public display, the Blutfahne vanished. Its current whereabouts are unknown, but it is generally assumed to have been destroyed amidst the Allied bombing of Munich in 1945. 
See also Glossary of Nazi Germany List of German flags Nazism Personal standard of Adolf Hitler References Further reading Orth, R: „Von einem verantwortungslosen Kameraden zum geistigen Krüppel geschlagen.“ Der Fall des Hitler-Putschisten Heinrich Trambauer. in: Historische Mitteilungen der Ranke-Gesellschaft 25 (2012), p. 208–236. External links Blutfahne at Flags of the World. 1923 establishments in Germany Special events flags Lost objects Beer Hall Putsch Flags of Nazi Germany Swastika
Blutfahne
Physics
928
20,445,622
https://en.wikipedia.org/wiki/Geography%20of%20Halloween
Halloween is a celebration observed on October 31, the day before the feast of All Hallows, also known as Hallowmas or All Saints' Day. The celebrations and observances of this day occur primarily in regions of the Western world, albeit with some traditions varying significantly between geographical areas. Origins Halloween is the eve, or vigil, before the Western Christian feast of All Hallows (or All Saints), which is observed on November 1. This day begins the triduum of Hallowtide, which culminates with All Souls' Day. In the Middle Ages, many Christians held a folk belief that All Hallows' Eve was the "night where the veil between the material world and the afterlife was at its most transparent". Americas Canada Scottish emigration, primarily to Canada before 1870 and to the United States thereafter, brought the Scottish version of the holiday to each country. The earliest known reference to ritual begging on Halloween in English-speaking North America occurs in 1911, when a newspaper in Kingston, Ontario, reported that it was normal for the smaller children to go street "guising" on Halloween between 6 and 7 p.m., visiting shops and neighbours to be rewarded with nuts and candies for their rhymes and songs. Canadians spend more on candy at Halloween than at any time apart from Christmas. Halloween is also a time for charitable contributions. Until 2006, when UNICEF moved to an online donation system, collecting small change was very much a part of Canadian trick-or-treating. Quebec offers themed tours of parts of the old city and historic cemeteries in the area. In 2014, the hamlet of Arviat, Nunavut, moved its Halloween festivities to the community hall, cancelling the practice of door-to-door "trick or treating" due to the risk of roaming polar bears. In British Columbia it is a tradition to set off fireworks at Halloween. United States In the United States, Halloween did not become a holiday until the 19th century. The transatlantic migration of nearly two million Irish following the Great Irish Famine (1845–1849) brought the holiday to the United States. American librarian and author Ruth Edna Kelley wrote the first book-length history of the holiday in the U.S., The Book of Hallowe'en (1919), and references souling in the chapter "Hallowe'en in America": "All Hallowe'en customs in the United States are borrowed directly or adapted from those of other countries. The taste in Hallowe'en festivities now is to study old traditions, and hold a Scotch party, using Robert Burns's poem Halloween as a guide; or to go a-souling as the English used. In short, no custom that was once honored at Hallowe'en is out of fashion now." The main event for children of modern Halloween in the United States and Canada is trick-or-treating, in which children, teenagers, (sometimes) young adults, and parents (accompanying their children) disguise themselves in costumes and go door-to-door in their neighborhoods, ringing each doorbell and yelling "Trick or treat!" to solicit a gift of candy or similar items. Teenagers and adults will more frequently attend Halloween-themed costume parties typically hosted by friends or themed events at nightclubs either on Halloween itself or a weekend close to the holiday. At the turn of the 20th century, Halloween had turned into a night of vandalism, with destruction of property and cruelty to animals and people.
Around 1912, the Boy Scouts, Boys Clubs, and other neighborhood organizations came together to encourage a safe celebration that would end the destruction that had become so common on this night. The commercialization of Halloween in the United States did not start until the 20th century, beginning perhaps with Halloween postcards (featuring hundreds of designs), which were most popular between 1905 and 1915. Dennison Manufacturing Company (which published its first Halloween catalog in 1909) and the Beistle Company were pioneers in commercially made Halloween decorations, particularly die-cut paper items. German manufacturers specialised in Halloween figurines that were exported to the United States in the period between the two World Wars. Halloween is now the United States' second most popular holiday (after Christmas) for decorating; the sale of candy and costumes is also extremely common during the holiday, which is marketed to children and adults alike. The National Confectioners Association (NCA) reported in 2005 that 80% of American adults planned to give out candy to trick-or-treaters. The NCA reported in 2005 that 93% of children planned to go trick-or-treating. According to the National Retail Federation, the most popular Halloween costume themes for adults are, in order: witch, pirate, vampire, cat, and clown. Each year, popular costumes are dictated by various current events and pop culture icons. On many college campuses, Halloween is a major celebration, with the Friday and Saturday nearest 31 October hosting many costume parties. Other popular activities are watching horror movies and visiting haunted houses. Total spending on Halloween is estimated to be $8.4 billion. An Associated Press survey found that 66% of American parents planned to take their children trick or treating. Within the survey, 46% identified as Protestant and 24% as Catholic. Events Many theme parks stage Halloween events annually, such as Halloween Horror Nights at Universal Studios Hollywood and Universal Orlando, Mickey's Halloween Party and Mickey's Not-So-Scary Halloween Party at Disneyland Resort and Magic Kingdom respectively, and Knott's Scary Farm at Knott's Berry Farm. One of the more notable parades is New York's Village Halloween Parade. Each year approximately 50,000 costumed marchers parade up Sixth Avenue. Salem, Massachusetts, site of the Salem witch trials, celebrates Halloween throughout the month of October with tours, plays, concerts, and other activities. A number of venues in New York's lower Hudson Valley host various events to showcase a connection with Washington Irving's Legend of Sleepy Hollow. Van Cortlandt Manor stages the "Great Jack o' Lantern Blaze" featuring thousands of lighted carved pumpkins. Some locales have had to modify their celebrations due to disruptive behavior on the part of young adults. Madison, Wisconsin hosts an annual Halloween celebration. In 2002, due to the large crowds in the State Street area, a riot broke out, necessitating the use of mounted police and tear gas to disperse the crowds. Likewise, Chapel Hill, site of the University of North Carolina, has a downtown street party which in 2007 drew a crowd estimated at 80,000 on downtown Franklin Street, in a town with a population of just 54,000. In 2008, in an effort to curb the influx of out-of-towners, mayor Kevin Foy put measures in place to make commuting downtown more difficult on Halloween. 
In 2014, large crowds of college students rioted at the Keene, New Hampshire, Pumpkin Fest, whereupon the City Council voted not to grant a permit for the following year's festival, and organizers moved the event to Laconia for 2015. Brazil The Brazilian non-governmental organization named Amigos do Saci created Saci Day as a Brazilian parallel in opposition to the "American-influenced" holiday of Halloween, which saw minor celebration in Brazil. The Saci is a mischievous evil character in Brazilian folklore. Saci Day is commemorated on October 31, the same day as Halloween, and is an official holiday in the state of São Paulo. Despite official recognition in São Paulo and several other municipalities throughout the country, few Brazilians celebrate it. Dominican Republic In the Dominican Republic it has been gaining popularity, largely due to many Dominicans living in the United States and then bringing the custom to the island. In the larger cities of Santiago or Santo Domingo it has become more common to see children trick-or-treating, but in smaller towns and villages it is almost entirely absent, partly due to religious opposition. Tourist areas such as Sosua and Punta Cana feature many venues with Halloween celebrations, predominantly geared towards adults. Mexico Observed in Mexico and Mexican communities abroad, Day of the Dead (Día de Muertos) celebrations arose from the syncretism of indigenous Aztec traditions with the Christian Hallowtide of the Spanish colonizers. Flower decorations, altars, and candies are part of this holiday season. The holiday is distinct from Halloween in its origins and observances, but the two have become associated because of cross-border connections between Mexico and the United States through popular culture and migration, as the two celebrations occur at the same time of year and may involve similar imagery, such as skeletons. Halloween and Día de Muertos have influenced each other in some areas of the United States and Mexico, with Halloween traditions such as costumes and face-painting becoming increasingly common features of the Mexican festival. Asia China The Chinese celebrate the "Hungry Ghost Festival" in mid-July, when it is customary to float river lanterns to remember those who have died. By contrast, Halloween is often called "All Saints' Festival" (Wànshèngjié, 萬聖節), or (less commonly) "All Saints' Eve" (Wànshèngyè, 萬聖夜) or "Eve of All Saints' Day" (Wànshèngjié Qiányè, 萬聖節前夕), stemming from the term "All Hallows Eve" (hallow referring to the souls of holy saints). Chinese Christian churches hold religious celebrations. Non-religious celebrations are dominated by expatriate Americans or Canadians, but costume parties are also popular for Chinese young adults, especially in large cities. Hong Kong Disneyland and Ocean Park (Halloween Bash) host annual Halloween shows. Mainland China has been less influenced by Anglo traditions than Hong Kong, and Halloween is generally considered "foreign". As Halloween has become more popular globally, it has also become more popular in China, particularly amongst children attending private or international schools with many foreign teachers from North America. Hong Kong Traditional "door-to-door" trick or treating is not commonly practiced in Hong Kong due to the vast majority of Hong Kong residents living in high-rise apartment blocks. However, in many buildings catering to expatriates, Halloween parties and limited trick or treating are arranged by the management.
Instances of street-level trick or treating in Hong Kong occur in ultra-exclusive gated housing communities such as The Beverly Hills, populated by Hong Kong's super-rich, and in expatriate areas like Discovery Bay and the Red Hill Peninsula. For the general public, there are events at Tsim Sha Tsui's Avenue of the Stars that try to mimic the celebration. In the Lan Kwai Fong area of Hong Kong, known as a major entertainment district for the international community, a Halloween celebration and parade has taken place for over 20 years, with many people dressing in costume and making their way around the streets to various drinking establishments. Many international schools also celebrate Halloween with costumes, and some put an academic twist on the celebrations, such as the "Book-o-ween" celebrations at Hong Kong International School, where students dress as favorite literary characters. Japan Halloween arrived in Japan mainly as a result of American pop culture. As late as 2009, it was celebrated mainly by expats. The wearing of elaborate costumes by young adults at night has since become popular in areas such as Amerikamura in Osaka and Shibuya in Tokyo, where, in October 2012, about 1700 people dressed in costumes to take part in the Halloween Festival. Celebrations have become popular with young adults as a costume party and club event. Trick-or-treating for Japanese children has taken hold in some areas. By the mid-2010s, Yakuza were giving snacks and sweets to children. However, in recent years, authorities in Tokyo have tried to discourage street drinking on Halloween. Philippines The period from 31 October through 2 November is a time for remembering dead family members and friends. Many Filipinos travel back to their hometowns for family gatherings of festive remembrance. Trick-or-treating is gradually replacing the dying tradition of Pangangaluluwâ, a local analogue of the old English custom of souling. People in the provinces still observe Pangangaluluwâ by going in groups to every house and offering a song in exchange for money or food. The participants, usually children, would sing carols about the souls in Purgatory, with the abúloy (alms for the dead) used to pay for Masses for these souls. Along with the requested alms, householders sometimes gave the children suman (rice cakes). During the night, various small items, such as clothing, plants, etc., would "mysteriously" disappear, only to be discovered the next morning in the yard or in the middle of the street. In older times, it was believed that the spirits of ancestors and loved ones visited the living on this night, manifesting their presence by taking an item. As the observation of Christmas traditions in the Philippines begins as early as September, it is a common sight to see Halloween decorations next to Christmas decorations in urban settings. Saudi Arabia Starting in 2022, Saudi Arabia began to celebrate Halloween publicly in Riyadh under its Vision 2030. Singapore Around mid-July, Singaporean Chinese celebrate "Zhong Yuan Jie / Yu Lan Jie" (Hungry Ghosts Festival), a time when it is believed that the spirits of the dead come back to visit their families. In recent years, Halloween celebrations have become more popular, with influence from the West. In 2012, there were over 19 major Halloween celebration events around Singapore. SCAPE's Museum of Horrors held its fourth scare fest in 2014. Universal Studios Singapore hosts "Halloween Horror Nights".
South Korea The popularity of the holiday among young people in South Korea comes from English academies and corporate marketing strategies, and was influenced by Halloween celebrations in Japan and America. Despite not being a public holiday, it is celebrated in different areas around Seoul, especially Itaewon and Hongdae. Taiwan Traditionally, Taiwanese people celebrate the "Zhong Yuan Pudu Festival", in which spirits that do not have any surviving family members to pay respects to them are able to roam the Earth during the seventh lunar month. It is known as Ghost Month. While some have compared it to Halloween, the two are unrelated and the overall meaning is different. In recent years, mainly as a result of American pop culture, Halloween is becoming more widespread amongst young Taiwanese people. Halloween events are held in many areas across Taipei, such as Xinyi Special District and Shilin District, where there are many international schools and expats. Halloween parties are celebrated differently by different age groups. One of the most popular Halloween events is the Tianmu Halloween Festival, which started in 2009 and is organised by the Taipei City Office of Commerce. The two-day annual festival attracted more than 240,000 visitors in 2019. During this festival, stores and businesses in Tianmu place pumpkin lanterns outside their stores to identify themselves as trick-or-treat destinations for children. Oceania Australia Non-religious celebrations of Halloween modelled on North American festivities are growing increasingly popular in Australia despite not being traditionally part of the culture. Some Australians criticise this intrusion into their culture. Many dislike the commercialisation and American pop-culture influence. Some supporters of the event place it alongside other cultural traditions such as Saint Patrick's Day. Halloween historian and author of Halloween: Pagan Festival to Trick or Treat, Mark Oxbrow, says that while Halloween may have been popularised by depictions of it in US movies and TV shows, it is not a new entry into Australian culture. His research shows Halloween was first celebrated in Australia in Castlemaine, Victoria, in 1858, which was 43 years before Federation. It also shows that Halloween traditions were brought to the country by Scottish miners who settled in Victoria during the Gold Rush. Because of the polarised opinions about Halloween, growing numbers of people are decorating their letter boxes to indicate that children are welcome to come knocking. In the past decade, the popularity of Halloween in Australia has grown. In 2020, the first magazine dedicated solely to celebrating Halloween in Australia, called Hallozween, was launched, and in 2021, sales of costumes, decorations, and carving pumpkins soared to an all-time high despite the effect of the global COVID-19 pandemic limiting celebrations. New Zealand Halloween first gained traction in New Zealand in the 1990s, and every year New Zealand is one of the first countries in the world to celebrate Halloween due to its proximity to the International Date Line. Although Halloween is not celebrated to the same extent as in North America, it is still a significant event, mainly celebrated in urban areas. Trick-or-treating has become increasingly popular with minors in New Zealand, despite not being a "British or Kiwi event", under the influence of American globalisation. One criticism of Halloween in New Zealand is that it is overly commercialised, by The Warehouse, for example.
Europe Over the years, Halloween has become more popular in Europe and has been partially ousting some older customs, such as Martinisingen and others. France Halloween was introduced to most of France in the 1990s. In Brittany, Halloween had been celebrated for centuries and is known as Kalan Goañv (Night of Spirits). During this time, it is believed that the spirits of the dead return to the world of the living, led by the Ankou, the collector of souls. Germany Halloween was not generally observed in Germany prior to the 1990s, but has been increasing in popularity. It has been associated with the influence of United States culture, and "Trick or Treating" has been occurring in various German cities, especially in areas such as the Dahlem neighborhood in Berlin, which was part of the American zone during the Cold War. Today, Halloween in Germany brings in 200 million euros a year, through multiple industries. Halloween is celebrated by both children and adults. Adults celebrate at themed costume parties and clubs, while children go trick or treating. Complaints of vandalism associated with Halloween "Tricks" are increasing, particularly from many elderly Germans unfamiliar with "Trick or Treating". Greece In Greece, Halloween is not celebrated widely; it is a working day and has attracted little public interest since the early 2000s. Recently, it has somewhat increased in popularity as a secular celebration, although Carnival is vastly more popular among Greeks. For a small minority, Halloween is considered the fourth most popular festival in the country after Christmas, Easter, and Carnival. Retail businesses, bars, nightclubs, and certain theme parks might organize Halloween parties. This boost in popularity has been attributed to the influence of western consumerism. Since it is a working day, Halloween is not celebrated on 31 October unless the date falls on a weekend, in which case it is celebrated by some during the last weekend before All Hallows' Eve, usually in the form of themed house parties and retail business decorations. Trick-or-treating is not widely popular because similar activities are already undertaken during Carnival. The slight rise in popularity of Halloween in Greece has led to some increase in its popularity throughout nearby countries in the Balkans and Cyprus. In the latter, there has been an increase in Greek-Cypriot retailers selling Halloween merchandise every year. Ireland On Halloween night, adults and children dress up as various monsters and creatures, light bonfires, and enjoy fireworks displays; Derry in Northern Ireland is home to the largest organized Halloween celebration on the island, in the form of a street carnival and fireworks display. Games are often played, such as bobbing for apples, in which apples, peanuts, other nuts and fruits, and some small coins are placed in a basin of water. Everyone takes turns catching as many items as possible using only their mouths. Another common game involves the hands-free eating of an apple hung on a string attached to the ceiling. Games of divination are also played at Halloween. Colcannon is traditionally served on Halloween. 31 October is the busiest day of the year for the Emergency Services. Bangers and fireworks are illegal in the Republic of Ireland; however, they are commonly smuggled in from Northern Ireland, where they are legal. Bonfires are frequently built around Halloween. Trick-or-treating is popular amongst children on 31 October, and Halloween parties and events are commonplace.
October Holiday occurs on the last Monday of October and may fall on Halloween. It has two Irish names, the latter of which translates literally as 'Halloween holiday'. Italy In Italy, All Saints' Day is a public holiday. On 2 November, Tutti i Morti or All Souls' Day, families remember loved ones who have died. These are still the main holidays. In some Italian traditions, children would awake on the morning of All Saints or All Souls to find small gifts from their deceased ancestors. In Sardinia, Concas de Mortu (heads of the dead), carved pumpkins that look like skulls with candles inside, are displayed. Halloween is, however, gaining in popularity, and involves costume parties for young adults. The traditions of carving pumpkins into skull figures, lighting candles inside them, and begging for small gifts for the dead, e.g. sweets or nuts, also belong to northern Italy. In Veneto these carved pumpkins were called lumère (lanterns) or suche dei morti (the dead's pumpkins). Poland Since the fall of Communism in 1989, Halloween has become increasingly popular in Poland. It is celebrated particularly among younger people. The influx of Western tourists and expats throughout the 1990s introduced the costume party aspect of Hallowe'en celebrations, particularly in clubs and at private house parties. Door-to-door trick or treating is not common. Pumpkin carving is becoming more evident, following a strong North American version of the tradition. Poland is the biggest pumpkin producer in the European Union. Romania Romanians observe the Feast of St. Andrew, patron saint of Romania, on 30 November. On St. Andrew's Eve, ghosts are said to be about. A number of customs related to divination, in other places connected to Halloween, are associated with this night. However, with the popularity of Dracula and vampires in western Europe, around Halloween the Romanian tourist industry promotes trips to locations connected to the historical Vlad Tepeș and the more fanciful Dracula of Bram Stoker. One of the most successful Halloween parties in Transylvania takes place in Sighișoara, the citadel where Vlad the Impaler was born. This party includes magician shows, a ballet show, and "The Ritual Killing of a Living Dead". The biggest Halloween party in Transylvania takes place at Bran Castle, also known as Dracula's Castle. Both the Catholic and Orthodox Churches in Romania discourage Halloween celebrations, advising their parishioners to focus rather on the "Day of the Dead" on 1 November, when special religious observances are held for the souls of the deceased. Opposition by religious and nationalist groups, including calls to ban costumes and decorations in schools in 2015, has been met with criticism. Halloween parties are popular in bars and nightclubs. Russia In Russia, most Christians are Orthodox, and in the Orthodox Church, Halloween is on the Saturday after Pentecost, and therefore 4 to 5 months before western Halloween. Celebration of western Halloween began in the 1990s around the downfall of the Soviet regime, when costume and ghoulish parties spread in night clubs throughout Russia. Halloween is generally celebrated by younger generations and is not widely celebrated in civic society (e.g. theaters or libraries). In fact, Halloween is among the Western celebrations that the Russian government and politicians, who have grown increasingly anti-Western in the early 2010s, are trying to eliminate from public celebration.
Spain In Spain, celebrations involve eating castanyes (roasted chestnuts), panellets (special almond balls covered in pine nuts), moniatos (roast or baked sweet potato), Ossos de Sant cake, and preserved fruit (candied or glazed fruit). Moscatell (Muscat) is drunk from porrons. Around the time of this celebration, it is common for street vendors to sell hot toasted chestnuts wrapped in newspaper. In many places, confectioners often organise raffles of chestnuts and preserved fruit. The tradition of eating these foods comes from the fact that during All Saints' night, on the eve of All Souls' Day in the Christian tradition, bell ringers would ring bells in commemoration of the dead into the early morning. Friends and relatives would help with this task, and everyone would eat these foods for sustenance. Other versions of the story state that the Castanyada originates at the end of the 18th century and comes from the old funeral meals, where other foods, such as vegetables and dried fruit, were not served. The meal had the symbolic significance of a communion with the souls of the departed: while the chestnuts were roasting, prayers would be said for the person who had just died. The festival is usually depicted with the figure of a castanyera: an old lady, dressed in peasant's clothing and wearing a headscarf, sitting behind a table, roasting chestnuts for street sale. In recent years, the Castanyada has become a revetlla of All Saints and is celebrated in the home and community. It is the first of the four main school festivals, alongside Christmas, Carnestoltes and St George's Day, without reference to ritual or commemoration of the dead. Galicia is known to have the second largest Halloween or Samain festivals in Europe, and during this time, a drink called Queimada is often served. Sweden On All Hallows' Eve, a Requiem Mass is widely attended every year at Uppsala Cathedral, part of the Lutheran Church of Sweden. Throughout the period of Allhallowtide, starting with All Hallows' Eve, Swedish families visit churchyards and adorn the graves of their family members with lit candles and wreaths fashioned from pine branches. Among children, the practice of dressing in costume and collecting candy gained popularity beginning around 2005. The American traditions of Halloween have, however, been met with skepticism among the older generations, in part because they conflict with the Swedish traditions of All Hallows' Eve and in part because of their commercialism. In Sweden, All Saints' Day/All Hallows' Eve is observed on the Saturday occurring between October 31 and November 6, whereas Halloween is observed on October 31 every year. Switzerland In Switzerland, Halloween, after first becoming popular in 1999, is on the wane, and is most popular with young adults who attend parties. Switzerland already has a "festival overload" and even though Swiss people like to dress up for any occasion, they do prefer a traditional element, such as in the Fasnacht tradition of chasing away winter using noise and masks. United Kingdom and Crown dependencies England In the past, on All Souls' Eve, families would stay up late, and little "soul cakes" were eaten. At the stroke of midnight, there was solemn silence among households, which had candles burning in every room to guide the souls back to visit their earthly homes and a glass of wine on the table to refresh them.
The tradition of giving soul cakes that originated in Great Britain and Ireland was known as souling, often seen as the origin of modern trick or treating in North America, and souling continued in parts of England as late as the 1930s, with children going from door to door singing songs and saying prayers for the dead in return for cakes or money. Trick or treating and other Halloween celebrations are extremely popular, with shops decorated with witches and pumpkins, and young people attending costume parties. Scotland The name Halloween is first attested in the 16th century as a Scottish shortening of the fuller All-Hallow-Even, that is, the night before All Hallows' Day. Dumfries poet John Mayne's 1780 poem made note of pranks at Halloween: "What fearfu' pranks ensue!" Scottish poet Robert Burns was influenced by Mayne's composition, and portrayed some of the customs in his poem Halloween (1785). According to Burns, Halloween is "thought to be a night when witches, devils, and other mischief-making beings are all abroad on their baneful midnight errands". Among the earliest records of guising at Halloween in Scotland is one from 1895, in which masqueraders in disguise, carrying lanterns made out of scooped-out turnips, visited homes to be rewarded with cakes, fruit, and money. If children approached the door of a house, they were given offerings of food. The children's practice of "guising", going from door to door in costumes for food or coins, is a traditional Halloween custom in Scotland. These days children who knock on their neighbours' doors have to sing a song or tell stories for a gift of sweets or money. Traditional Halloween games include apple "dooking" or "dunking" (i.e., retrieving an apple from a bucket of water using only one's mouth), and attempting to eat, while blindfolded, a treacle- or jam-coated scone hanging on a piece of string. Traditional customs and lore include divination practices, ways of trying to predict the future. A traditional Scottish form of divining one's future spouse is to carve an apple in one long strip, then toss the peel over one's shoulder. The peel is believed to land in the shape of the first letter of the future spouse's name. In Kilmarnock, Halloween is also celebrated on the last Friday of October, and is known colloquially as "Killieween". Isle of Man Halloween is a popular traditional occasion on the Isle of Man, where it is known as Hop-tu-Naa. Elsewhere Saint Helena In Saint Helena, Halloween is actively celebrated much along the American model, featuring ghosts, devils, witches and the like. Imitation pumpkins are used instead of real ones, as the pumpkin harvesting season in Saint Helena's hemisphere is not near Halloween. Trick-or-treating is widespread, and party venues provide entertainment for adults. See also Festival of the Dead Games and other activities during Halloween References Further reading Brock, Michelle. "What Halloween Can Learn from History". Halloween Human geography Allhallowtide
Geography of Halloween
Environmental_science
6,151
29,405,962
https://en.wikipedia.org/wiki/Clathrus%20chrysomycelinus
Clathrus chrysomycelinus is a species of fungus in the stinkhorn family. It is found in South America and reported from New Zealand, although the equivalence of the species is yet to be determined. References Phallales Fungi of South America Fungi described in 1898 Fungus species
Clathrus chrysomycelinus
Biology
59
63,470,073
https://en.wikipedia.org/wiki/NGC%20542
NGC 542 is a spiral galaxy in the constellation Andromeda, which is approximately 215 million light years from the Milky Way. Together with the galaxies NGC 529, NGC 531, and NGC 536, it forms the Hickson Compact Group 10, abbreviated HCG 10. It was discovered by Irish astronomer R.J. Mitchell in 1885. See also List of NGC objects (1–1000) References External links Spiral galaxies 0542 Andromeda (constellation) 005360
NGC 542
Astronomy
100
62,072,493
https://en.wikipedia.org/wiki/Chloroflexus%20islandicus
Chloroflexus islandicus is a photosynthetic bacterium isolated from the Strokkur Geyser in Iceland. This organism is thermophilic, showing optimal growth at 55 °C (131 °F) with a pH range of 7.5–7.7. C. islandicus grows best photoheterotrophically under anaerobic conditions with light, but is capable of chemoheterotrophic growth under aerobic conditions in the dark. C. islandicus has a yellowish-green color. The individual cells form unbranched multicellular filaments about 0.6 μm in diameter and 4–7 μm in length. Phenotypic characteristics As a genus, Chloroflexus spp. are filamentous anoxygenic phototrophic (FAP) organisms that utilize type II photosynthetic reaction centers containing bacteriochlorophyll a, and light-harvesting chlorosomes containing bacteriochlorophyll. Beta- and gamma-carotenes are present. C. islandicus is gram-negative. Cell morphology shows the presence of chlorosomes, pili, and gliding motility. Pili are unique to C. islandicus, which is the only organism in the genus Chloroflexus known to possess them. Genetic characteristics The whole genome sequence of Chloroflexus islandicus has been determined (5.14 Mb). Using 16S rRNA gene analysis, ANI (average nucleotide identity), and DDH (DNA–DNA hybridization), it was confirmed as a new species of Chloroflexus. The 16S rRNA analysis showed it is closely related to Chloroflexus aggregans (97.0%). The genomic data revealed 84.1% ANI and 22.8% DDH for the Chloroflexus islandicus strain versus other known Chloroflexus strains. Species are considered separate when ANI is 95.0% or less and DDH is 70.0% or less. The G/C content of Chloroflexus islandicus was found to be 59.6 mol%. See also Chloroflexota Endosymbiotic theory References Phototrophic bacteria Bacteria described in 2017 Chloroflexota
Chloroflexus islandicus
Chemistry,Biology
488
54,551,486
https://en.wikipedia.org/wiki/Super-resolution%20dipole%20orientation%20mapping
Super-resolution dipole orientation mapping (SDOM) is a form of fluorescence polarization microscopy (FPM) that achieves super-resolution through polarization demodulation. It was first described by Karl Zhanghao and others in 2016. Fluorescence polarization (FP) is related to the dipole orientation of chromophores, making it possible for fluorescence polarization microscopy to reveal the structures and functions of tagged cellular organelles and biological macromolecules. In addition to fluorescence intensity, wavelength, and lifetime, the fourth dimension of fluorescence, polarization, can also provide intensity modulation without the restriction to specific fluorophores; its investigation in super-resolution microscopy is still in its infancy. History In 2013, Hafi et al. developed a novel super-resolution technique through sparse deconvolution of polarization-modulated fluorescent images (SPoD). Because the fluorescent dipole is an inherent feature of fluorescence, and its polarization intensity can be easily modulated with rotating linearly polarized excitation, the polarization-based super-resolution technique holds great promise for a wide range of biological applications due to its compatibility with conventional fluorescent specimen labeling. The SPoD data, consisting of sequences of diffraction-limited images illuminated with varying linearly polarized light, were reconstructed with a deconvolution algorithm termed SPEED (sparsity penalty-enhanced estimation by demodulation). Although super-resolution can be achieved, the dipole orientation information is lost during SPoD reconstruction. In 2016, Keller et al. argued that the improvement in resolution observed with the SPoD method is a deconvolution effect; that is, the super-resolution in the images that Hafi et al. showed is achieved by the SPEED algorithm, not the SPoD method, so the polarization information does not contribute substantially to the final image. They concluded that polarization cannot add further super-resolution information. At the same time, Waller et al. replied to the debate, acknowledging the question raised by Keller. They performed new experiments to show that SPoD could provide further information, demonstrating that the raw modulation information in SPoD separated sub-diffraction details even without SPEED. However, whether this works for heterogeneously and densely labeled samples is uncertain and still needs further study. Afterwards, Karl Zhanghao et al. proposed a new approach, called SDOM, that resolves the effective dipole orientation from a much smaller number of fluorescent molecules within a sub-diffraction focal area. They also applied this method to resolve structural details in both fixed and live cells. Their results showed that polarization does provide further structural information on top of the super-resolution image, thereby providing a timely answer to the key question raised by the debate mentioned above. Fluorescence polarization microscopy As a fundamental physical dimension of fluorescence, polarization has been applied extensively in biological research. Through fluorescence polarization microscopy (FPM), the dipole orientation as well as the intensity of fluorescent probes can be measured. Compared with X-ray crystallography or electron microscopy, which can elucidate individual proteins or macromolecular assemblies at ultra-high resolution, FPM does not require complex sample preparation, which makes it suitable for live cell imaging.
Near-field imaging techniques, such as atomic force microscopy (AFM), can also provide structural information, but they are limited to samples on the surface. FPM is capable of imaging orientations in dynamic samples at the time scale of seconds or milliseconds, so it can serve as a complementary method for the measurement of subcellular organelle structures. FPM has evolved over the past decades, from manual or mechanical switching of polarization detection or excitation to simultaneous detection and fast polarization modulation via electro-optic devices. With faster imaging speed and higher imaging quality, FPM has been incorporated into various imaging modalities, such as wide-field microscopy, confocal microscopy, two-photon confocal microscopy, total internal reflection fluorescence microscopy, FRAP, etc. However, as with any optical imaging technique, the development of fluorescence polarization microscopy (FPM) has been constrained by the diffraction limit. Compared to the abundant super-resolution techniques for fluorescence intensity imaging, super-resolution techniques in FPM are still in their infancy. Recently, three forms of FPM have emerged that have been shown to achieve super-resolution: SPoD, SDOM and polar-dSTORM (polarization-resolved direct stochastic optical reconstruction microscopy). Polar-dSTORM uses on–off modulation of the fluorescent probes and acquires enough frames to reconstruct a super-resolution image. The imaging resolution of polar-dSTORM is high, with localization precision in the tens of nanometers. The average orientation of each single dipole is measured directly, and the wobbling angle is calculated statistically from neighboring emitters. The drawback of polar-dSTORM is a long imaging time of 2–40 min, which requires a stationary sample during the imaging period. The sample preparation required for dSTORM also makes it difficult to image live-cell samples. SDOM has achieved super-resolution dipole orientation mapping with a spatial resolution of 150 nm and sub-second temporal resolution. It has been applied to both fixed-cell and live-cell imaging, showing great advantages over diffraction-limited FPM techniques in both revealing sub-diffractional structures and measuring local dipole orientations. In comparison with polar-dSTORM, SDOM still measures average dipoles and cannot separate the signal of the wobbling of single fluorophores from the variation of the orientation distribution of fluorophores within the resolvable area. As with SPoD, the power of SDOM would be weakened if the fluorescent probes are distributed too homogeneously or too densely. Thanks to the intrinsic polarization of chromophores, fluorescence polarization reveals the structures and functions of biological macromolecules. Incorporated with various optical imaging modalities, FPM has played an irreplaceable role in solving many questions. Fast and non-invasive imaging of samples makes it a complementary tool to X-ray crystallography, which typically applies to individual proteins or sub-complexes; to EM, which requires invasive sample preparation; and to AFM, which can measure only the surface of the sample. Compared to these methods, the specific labeling of fluorescent probes provides better focus on the structure of interest. As FPM techniques have developed, their power has extended from uniformly oriented fluorophores to fluorescent dipoles with organized orientations on complex biological structures. 
The detection accuracy has improved from measuring bulk volume polarization to sub-diffraction area measurement and single dipole measurement. The imaging resolution of FPM matters not only for the intensity image but also for the accuracy of dipole orientation detection. Recently developed super-resolution FPM techniques still have their limitations despite demonstrating great success in their imaging results. Spatial 3D super-resolution FPM techniques and 3D orientation measurement of fluorescent dipoles are still missing. In the future, more inventions are anticipated that could achieve both high-resolution measurement and fast temporal resolution, allowing the imaging of live-cell samples. This may be done by introducing existing super-resolution principles into FPM, by better exploiting the intensity fluctuation with polarization modulation, or by other alternative means. Principle of SDOM Unlike other super-resolution methods, such as STED, SIM, PALM and STORM, SDOM can achieve super-resolution based on a wide-field epi-fluorescence illumination microscope. The key point of SDOM is polarized excitation. The SDOM imaging system can be seen in figure A. The rotating linearly polarized excitation is realized by continuously rotating a half-wave plate in front of a laser. Then, the illumination beam is focused onto the back focal plane of the objective to generate uniform illumination with rotating polarized light. The series of fluorescence images excited at different angles of polarized excitation are collected by an EMCCD camera. All organic fluorescent dyes and fluorescent proteins are dipoles, whose orientations are closely related to the structure of their labeled target proteins. Because both the excitation absorption and fluorescence emission of dipoles have polarization features, FPM has been widely used to study dipole orientation. As illustrated in the inset schematic figure B, the fluorophores (such as GFP) are linked to the target protein via the C terminus (connected to GFP's N terminus), and the dipole angle of the fluorophore will reflect the orientation of the target protein. Therefore, SDOM can be used to study the structure of the protein. Figure C illustrates the principle of the SDOM super-resolution technique. Two neighboring fluorophores separated by 100 nm and with different dipole orientations (pseudocolor in red and green) emit periodic signals excited by rotating polarized light. By rotating the polarization of excitation, the emission ratio between the two molecules is modulated accordingly, resulting in their separation in the polarization domain. The sparsity deconvolution can achieve a super-resolution image of effective dipole intensities under polarization modulation; with least-squares fitting, the dipole orientation can be determined. Arrows indicate the directions of dipole orientations. Super-resolution is thus achieved in the polarization domain. The SDOM result of two intersecting lines is shown in figure D, with arrows on top of the super-resolution image illustrating the dipole orientation. Figure E shows the corresponding data represented in (X, Y, θ) coordinates, in which the XY plane is the super-resolved intensity image. From both D and E, it can be seen that, because SDOM introduces a new dimension, molecules that cannot be resolved in the super-resolution intensity image can be completely separated in the dipole orientation domain. References Microscopy
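The least-squares orientation fit described above can be illustrated with a short sketch. The following Python example is illustrative only (simulated data and parameter values chosen here, not the authors' code): it fits a Malus-type modulation I(θ) = A + B·cos²(θ − φ) to polarization-modulated intensities and recovers the effective dipole orientation φ.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation(theta, amp_bg, amp_mod, phi):
    # Malus-type polarization response: background + polarization-modulated term
    return amp_bg + amp_mod * np.cos(theta - phi) ** 2

rng = np.random.default_rng(0)
angles = np.deg2rad(np.arange(0, 180, 15))     # 12 excitation polarization angles
true = (50.0, 200.0, np.deg2rad(60))           # background, modulation depth, dipole angle
counts = modulation(angles, *true) + rng.normal(0, 5, angles.size)  # camera noise

# Least-squares fit recovers the effective dipole orientation phi
popt, _ = curve_fit(modulation, angles, counts, p0=(40.0, 150.0, 0.5))
print(f"fitted dipole orientation: {np.rad2deg(popt[2]) % 180:.1f} deg (true: 60.0 deg)")
```

Because cos² has a period of 180°, the fitted angle is only defined modulo 180°, which is why the result is reduced modulo 180 before printing.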
Super-resolution dipole orientation mapping
Chemistry
2,049
246,043
https://en.wikipedia.org/wiki/Kindness
Kindness is a type of behavior marked by acts of generosity, consideration, rendering assistance, or concern for others, without expecting praise or reward in return. It is a subject of interest in philosophy, religion, and psychology. In Book II of Rhetoric, Aristotle defines kindness as "helpfulness towards someone in need, not in return for anything, nor for the advantage of the helper himself, but for that of the person helped". Friedrich Nietzsche considered kindness and love to be the "most curative herbs and agents in human intercourse". Kindness is one of the Knightly Virtues. In Meher Baba's teachings, God is synonymous with kindness: "God is so kind that it is impossible to imagine His unbounded kindness!" History In English, the word kindness dates from approximately 1300, though the word's sense evolved to its current meanings in the late 1300s. In society Human mate choice studies suggest that both men and women value kindness in their prospective mates, along with intelligence, physical attractiveness, and age. In psychology Studies at Yale University used games with babies to conclude that kindness is inherent to human beings. There are similar studies about the root of empathy in infancy – with motor mirroring developing in the early months of life, and leading (optimally) to the concern shown by children for their peers in distress. Barbara Taylor and Adam Phillips stressed the element of necessary realism in adult kindness, as well as the way "real kindness changes people in the doing of it, often in unpredictable ways". Behaving kindly may improve a person's measurable well-being. Many studies have tried to test the hypothesis that doing something kind makes a person better off. A meta-analysis of 27 such studies found that the interventions studied (usually measuring short-term effects after brief acts of kindness, in WEIRD research subjects) supported the hypothesis that acting more kindly improves one's well-being. Weaponized kindness Some thinkers have suggested that kindness can be weaponized to discourage enemies. Teaching kindness Kindness is most often taught from parents to children and is learned through observation and some direct teaching. Studies have shown that, through programs and interventions, kindness can be taught and encouraged during the first 20 years of life. Further studies show that kindness interventions can help improve well-being, with results comparable to those of teaching gratitude. Similar findings have shown that organizational-level teaching of kindness can improve the well-being of adults in college. See also References Further reading Brownlie, Julie (2024). "How kindness took a hold: A sociology of emotions, attachment and everyday enchantment". The British Journal of Sociology. External links A UK independent, not-for-profit organisation Random Acts of Kindness Foundation Video with quotes about Kindness, from Wikiquote Giving Virtue Concepts in ethics Seven virtues Fruit of the Holy Spirit Emotions Moral psychology Empathy
Kindness
Biology
573
40,518,464
https://en.wikipedia.org/wiki/Grouted%20roof
A grouted roof is a form of slate roof. It has developed as a form of vernacular architecture associated with the west coast of the British Isles. A grouted roof is distinguished by having an overall coat of a cementitious render. Conventional slate roofs A conventional slate roof consists of thin slates, hung over wooden laths. Slates are hung in place by wooden pegs through a hole at the top of each slate. The peg stops the slate slipping downwards; the weight of the slates above it holds the slate down. Later roofs replaced the peg with an iron nail driven into the lath, but the nail is always primarily a hook and it is the weight of the slates above that holds the roof covering down onto the frame. Lead nails were also used, long enough to be bent over the laths from the inside. In time these developed into strips cut from lead sheet. Such roofs are common and are particularly well known in Wales, where the high-quality slate from North Wales was exported worldwide. As the slate was of high quality it could be split very thinly by skilled artisans and so gave a lightweight roof covering that was still strong and long-lasting against harsh weather. Where local slate was used instead, this was of lower quality than Snowdonia slate and so the roof design may have been less successful. In particular, lower-quality slate must be cut into thicker, heavier slates to retain adequate strength, giving a roof of much greater weight. Failure in slate roofs The primary means of failure in a slate roof is when individual slates lose their peg attachment and begin to slide out of place. This can open up small gaps above each slate. A secondary mode of failure is when the slates themselves begin to break up. The lower parts of a slate may break loose, giving a gap below a slate. Commonly the small and stressed area above the nail hole may fail, allowing the slate to slip as before. In the worst cases, a slate may simply break in half and be lost altogether. A common repair to slate roofs is to apply 'torching', a mortar fillet underneath the slates, attaching them to the battens. This may be applied either as a repair, to hold slipping slates, or pre-emptively on construction. Where slates are particularly heavy, the roof may begin to split apart along the roof line. This usually follows rot developing and weakening the internal timbers, often as a result of poor ventilation within the roofspace. Grouted roofs Where a roof has begun to fail overall, it may be 'grouted' to extend its working life. This is not a spot repair, but an overall treatment applied to the entire roof. A thin wash of mortar or render is applied across the roof. This fills the gaps between slates and also covers the slates' surface. Grouting has two obvious visual effects: the distinct edges of the slates blur into a monolithic roof panel and, as the grout is a pale white, paler than the other visible building materials, the roof becomes much more prominent. Grouting is seen predominantly along the western seaboard of the UK, particularly in Pembrokeshire in South West Wales and to a lesser extent the Isle of Anglesey, Cornwall and Devon. It has also been used, primarily as a sealant, on porous stone slates such as the Permian sandstones of Dumfriesshire. Grouting developed as a repair technique. A roof might have been grouted several times in its lifetime, each time the coating becoming thicker and more rounded. 
Eventually the weight of these extra render coats becomes too much for the structure of the roof and the roof may fail irreparably by splitting apart at the ridge. A partial form of grouting was also practised, where an external lime mortar grouting was applied, but only on the spaces between slates. This was used for larger slates and may have been applied to cottages that were re-roofed from thatch to slate. Pembrokeshire grouted roofs Grouted roofs are most distinctive in Pembrokeshire and are considered to form part of the vernacular architecture of the county. Pembrokeshire slate is often of poor quality compared to North Wales slate, and thus requires thicker, heavier slates. These heavy slates also needed to be set on a mortar bed beneath, and grouted above. A feature of the Pembrokeshire style of roof is a number of prominent vertical ridges on the outside of the roof. These are caused by barbed wire being stretched over the roof before grouting. The wire can be firmly attached to the structure and the barbs on the wire act as a mechanical key to retain the grout in place. Over time and repeated re-grouting, the wires are lost beneath the ridges that build up. Barbed wire was used because it was conveniently available to remote farming communities and performed well. It was not manufactured until the 1870s and so this style of roof is newer than the first use of grouting, even with cement. William describes the use of barbed wire as a twentieth-century development, derived from the earlier 'roped thatch' technique, where a roof of thatched straw or reed would be retained by straw ropes over it. Lime mortar vs. Portland cement The use of lime mortars began as limewashing. This was applied regularly to walls, and even to thatched roofs. Although the cottage architecture of pre-industrial Wales, particularly in the South and South West, was based on earth walls more than mortared stone masonry, this was still coated externally with a limewash. By the later part of the eighteenth century, this fondness for limewashing and its frequency were noted by many travellers to the area. Turf houses, though, whilst common, were not limewashed. There is some debate as to the materials used to construct an 'original' grouted roof. Clearly Portland and modern cement-based renders have been used most commonly in the last century. Portland cement first appeared in the 1820s. Before this, lime plasters would have been used, particularly in rural areas where lime burning was a local industry through much of the country. Many sources claim that lime plasters were used originally, with a move to cement renders as they became available in modern times. Others claim that the technique of grouting a roof only developed with the advent of the cheap and easily applied modern cements. Certainly the 'classic' picturesque grouted roof of today, with its bright white finish and prominent wire ridges, is the product of Victorian materials that did not exist locally until the late 19th century. Griff Rhys Jones and Trehilyn Grouted roofs came to some prominence in 2007, when comedian and architectural restorer Griff Rhys Jones began work on Trehilyn farmhouse at Llanwnda, Pembrokeshire. The building was in a poor condition throughout and the grouted roof was particularly bad. Restoration involved the construction of a new roof which, in keeping with the previous roof, was grouted during construction. 
As has been a theme of building restoration in the UK for some years, a 'historically appropriate' lime plaster was used rather than a Portland cement. The results of the restoration generated much adverse comment. On one hand, the brilliant whiteness of the new roof was criticised, even though such whiteness had always been a notable feature of these roofs and would inevitably tone down with natural weathering. A more incisive comment questioned whether a lime plaster specifically was appropriate for such a roof, raising again the question of when these roofs first appeared. The need for grouting at all on a brand-new roof was also questioned, it being likened to "taking a healthy child to a Victorian reenactment and breaking its legs to give the appearance of rickets." Problems were also experienced during the restoration, and some of the blame for these was laid variously, by those involved and by outside observers, on either the quality of the materials used or the plasterers' technique. See also Bermudian roof, grouted roofs in Bermuda, used for rainwater collection. References Roofs Vernacular architecture Pembrokeshire Roof tiles Roofing materials
Grouted roof
Technology,Engineering
1,676
17,141,740
https://en.wikipedia.org/wiki/Radio%20art
Radio art is an aural art form that uses radio technology (i.e. radio transmission, airwaves) to transmit sound. Scope Radio art contributes to new media art – a digitally driven art movement that has grown in response to the information-technology revolution. "From the artist's point of view radio is an environment to be entered into and acted upon, a site for various cultural voices to meet, converse, and merge in. These artists cross disciplines, raid all genres and recontextualize them into hybrids." The radio medium can be used in ways which are different from what it was intended for. In that sense, the way the message is transmitted and received by an audience is as important as the message itself. "As an aural art form it reaffirms that it's not just what we say, but the way we say it." In Victoria Fenner's words, "Radio art is art which is specifically composed for the medium of radio and is uniquely suited to be transmitted via the airwaves." Themes Radio art projects can be collaborative, drawing on various professional sources and unifying an audio broadcast with science, experimentation, geography, entertainment, etc. "Some have approached radio as an architectural space to be constructed sonically and linguistically; or as the site of an event, an arena, or stage. Some used it as a gathering place, or a conduit, a means to create community. Other artists have employed the media landscape itself as the narrative, while others looked into the body as the site and the source; the voicebox, the larynx become medium and metaphor." Styles Styles include radio documentary, radio drama, soundscape, sound art, electroacoustic music, sound poetry, performance, open source, translation, interviews, audio galleries, sound poetry intended for the radio, spoken word, concerts, experimental narratives, sonic geographies, pseudo-documentaries, radio cinema, and conceptual and multimedia performances intended for the radio. Art radio and webradio An art radio is a radio station that dedicates all of its transmission time to radio art. Although this kind of project can seem utopian given the traditional state of radio, there are a few lasting examples on the underground or community side, such as London's ResonanceFM and Upstate New York's WGXC, which intend to make radio with art and promote the art of listening. Radio has also renewed itself through the Internet. Audio streaming has replaced the analogue transmission system, and artists can experiment with radio outside the legal constraints of an FM license, for example. Among the web radios dedicated to radio art, some broadcast pieces as traditional stations do, while others experiment directly with the medium in a more concrete sense. Radio Astronomy broadcast sounds taken from outer space in real time. Le Poulpe is a networked experimental radio project that mixes several "spaces" processed and streamed through the Internet. References External links *Wave Farm Radio and WGXC 90.7-FM in New York's Upper Hudson Valley web archive of radio art at Tellus Audio Cassette Magazine at Ubuweb New American Radio (1987–1998) Contemporary art New media New media art Visual arts genres Radio broadcasting
Radio art
Technology
659
52,332,469
https://en.wikipedia.org/wiki/Polyporus%20ciliatus
Polyporus ciliatus is a species of fungus in the genus Polyporus. References External links ciliatus Fungus species
Polyporus ciliatus
Biology
28
1,529,106
https://en.wikipedia.org/wiki/Full-sky%20Astrometric%20Mapping%20Explorer
Full-sky Astrometric Mapping Explorer (or FAME) was a NASA proposed astrometric satellite designed to determine with unprecedented accuracy the positions, distances, and motions of 40 million stars within our galactic neighborhood (distances by stellar parallax possible). This database was to allow astronomers to accurately determine the distance to all of the stars on this side of the Milky Way galaxy, detect large planets and planetary systems around stars within 1,000 light years of the Sun, and measure the amount of dark matter in the galaxy from its influence on stellar motions. It was to be a collaborative effort between the United States Naval Observatory (USNO) and several other institutions. FAME would have measured stellar positions to less than 50 microarcseconds. The NASA MIDEX mission was scheduled for launch in 2004. In January 2002, however, NASA abruptly cancelled this mission, mainly due to concerns about costs, which had grown from US$160 million initially to US$220 million. This would have been an improvement over the High Precision Parallax Collecting Satellite (Hipparcos) which operated 1989–1993 and produced various star catalogs. Astrometric parallax measurements form part of the cosmic distance ladder, and can also be measured by other space telescopes such as Hubble (HST) or ground-based telescopes to varying degrees of precision. Compared to the FAME accuracy of 50 microarcseconds, the Gaia mission is planning 10 microarcseconds accuracy, for mapping stellar parallax up to a distance of tens of thousands of light-years from Earth. See also Explorer program Gaia (spacecraft) Nano-JASMINE References Explorers Program Space telescopes Space astrometry missions
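To put the quoted precision in context, the distance to a star in parsecs is the reciprocal of its annual parallax in arcseconds. The sketch below uses this textbook relation (it is not mission software, and the sample values are illustrative):

```python
def distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax in arcseconds (d = 1/p)."""
    return 1.0 / parallax_arcsec

LY_PER_PC = 3.2616  # light-years per parsec

for p_uas in (50_000, 5_000, 500, 50):   # parallax in microarcseconds
    d = distance_pc(p_uas * 1e-6)
    print(f"parallax {p_uas:>6} uas -> {d:>8.0f} pc ({d * LY_PER_PC:>9.0f} ly)")

# A star 1,000 light-years away (~307 pc) has a parallax of ~3.3 mas,
# i.e. about 65 times FAME's quoted 50-microarcsecond measurement floor.
```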
Full-sky Astrometric Mapping Explorer
Astronomy
341
26,339,316
https://en.wikipedia.org/wiki/Echinopsidine
Echinopsidine (Adepren) is an antidepressant that was under development in Bulgaria for the treatment of depression. It increases serotonin, norepinephrine, and dopamine levels in the brain and is believed to act as a monoamine oxidase inhibitor (MAOI). Echinopsidine is found naturally in Echinops echinatus along with the related alkaloids echinopsine and echinozolinone. See also Monoamine oxidase inhibitor References Quinoline alkaloids
Echinopsidine
Chemistry
116
10,040,846
https://en.wikipedia.org/wiki/Random%20measure
In probability theory, a random measure is a measure-valued random element. Random measures are for example used in the theory of random processes, where they form many important point processes such as Poisson point processes and Cox processes. Definition Random measures can be defined as transition kernels or as random elements. Both definitions are equivalent. For the definitions, let $E$ be a separable complete metric space and let $\mathcal{E}$ be its Borel $\sigma$-algebra. (The most common example of a separable complete metric space is $\mathbb{R}^n$.) As a transition kernel A random measure $\zeta$ is an (a.s.) locally finite transition kernel from an abstract probability space $(\Omega, \mathcal{A}, P)$ to $(E, \mathcal{E})$. Being a transition kernel means that: For any fixed $B \in \mathcal{E}$, the mapping $\omega \mapsto \zeta(\omega, B)$ is measurable from $(\Omega, \mathcal{A})$ to $([0, +\infty], \mathcal{B}([0, +\infty]))$. For every fixed $\omega \in \Omega$, the mapping $B \mapsto \zeta(\omega, B)$ is a measure on $(E, \mathcal{E})$. Being locally finite means that the measures satisfy $\zeta(\omega, \tilde{B}) < \infty$ for all bounded measurable sets $\tilde{B} \in \mathcal{E}$ and for all $\omega \in \Omega$ except some $P$-null set. In the context of stochastic processes there is the related concept of a stochastic kernel, probability kernel, Markov kernel. As a random element Define $\tilde{\mathcal{M}} := \{ \mu \mid \mu \text{ is a measure on } (E, \mathcal{E}) \}$ and the subset of locally finite measures by $\mathcal{M} := \{ \mu \in \tilde{\mathcal{M}} \mid \mu(\tilde{B}) < \infty \text{ for all bounded measurable } \tilde{B} \in \mathcal{E} \}$. For all bounded measurable $\tilde{B}$, define the mappings $I_{\tilde{B}} \colon \mu \mapsto \mu(\tilde{B})$ from $\tilde{\mathcal{M}}$ to $\mathbb{R}$. Let $\tilde{\mathbb{M}}$ be the $\sigma$-algebra induced by the mappings $I_{\tilde{B}}$ on $\tilde{\mathcal{M}}$ and $\mathbb{M}$ the $\sigma$-algebra induced by the mappings $I_{\tilde{B}}$ on $\mathcal{M}$. Note that $\mathbb{M} = \tilde{\mathbb{M}} \vert_{\mathcal{M}}$. A random measure is a random element from $(\Omega, \mathcal{A}, P)$ to $(\tilde{\mathcal{M}}, \tilde{\mathbb{M}})$ that almost surely takes values in $(\mathcal{M}, \mathbb{M})$. Basic related concepts Intensity measure For a random measure $\zeta$, the measure $\operatorname{E}\zeta$ satisfying $\operatorname{E}\left[ \int f(x)\, \zeta(\mathrm{d}x) \right] = \int f(x)\, \operatorname{E}\zeta(\mathrm{d}x)$ for every positive measurable function $f$ is called the intensity measure of $\zeta$. The intensity measure exists for every random measure and is an s-finite measure. Supporting measure For a random measure $\zeta$, the measure $\nu$ satisfying $\int f(x)\, \zeta(\mathrm{d}x) = 0 \text{ a.s.}$ if and only if $\int f(x)\, \nu(\mathrm{d}x) = 0$ for all positive measurable functions $f$ is called the supporting measure of $\zeta$. The supporting measure exists for all random measures and can be chosen to be finite. Laplace transform For a random measure $\zeta$, the Laplace transform is defined as $\mathcal{L}_{\zeta}(f) := \operatorname{E}\left[ \exp\left( -\int f(x)\, \zeta(\mathrm{d}x) \right) \right]$ for every positive measurable function $f$. Basic properties Measurability of integrals For a random measure $\zeta$, the integrals $\int f(x)\, \zeta(\mathrm{d}x)$ and $\zeta(A) := \int \mathbf{1}_A(x)\, \zeta(\mathrm{d}x)$ for positive $\mathcal{E}$-measurable $f$ are measurable, so they are random variables. Uniqueness The distribution of a random measure is uniquely determined by the distributions of $\int f(x)\, \zeta(\mathrm{d}x)$ for all continuous functions $f$ with compact support on $E$. For a fixed semiring $\mathcal{I} \subset \mathcal{E}$ that generates $\mathcal{E}$ in the sense that $\sigma(\mathcal{I}) = \mathcal{E}$, the distribution of a random measure is also uniquely determined by the integral over all positive simple $\mathcal{I}$-measurable functions $f$. Decomposition A measure generally might be decomposed as $\mu = \mu_d + \mu_a = \mu_d + \sum_{n=1}^{N} \kappa_n \delta_{X_n}$. Here $\mu_d$ is a diffuse measure without atoms, while $\mu_a$ is a purely atomic measure. Random counting measure A random measure of the form $\mu = \sum_{n=1}^{N} \delta_{X_n}$, where $\delta$ is the Dirac measure and $X_n$ are random variables, is called a point process or random counting measure. This random measure describes the set of $N$ particles, whose locations are given by the (generally vector-valued) random variables $X_n$. The diffuse component $\mu_d$ is null for a counting measure. In the formal notation of above, a random counting measure is a map from a probability space to the measurable space $(N_X, \mathfrak{N}_X)$. Here $N_X$ is the space of all boundedly finite integer-valued measures (called counting measures). The definitions of expectation measure, Laplace functional, moment measures and stationarity for random measures follow those of point processes. Random measures are useful in the description and analysis of Monte Carlo methods, such as Monte Carlo numerical quadrature and particle filters. See also Poisson random measure Vector measure Ensemble References Measures (measure theory) Stochastic processes
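As a concrete illustration of a random counting measure, the following Python sketch (with illustrative parameters chosen here, not taken from the article) draws one realization of a homogeneous Poisson random measure on the unit square and evaluates it on a set B:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_poisson_counting_measure(intensity: float):
    """One realization of a homogeneous Poisson random measure on [0,1]^2.
    Returns a function mapping an indicator of a set B to the number
    of atoms falling in B."""
    n = rng.poisson(intensity)                  # N ~ Poisson(lambda * area), area = 1
    points = rng.uniform(0.0, 1.0, size=(n, 2))
    def mu(indicator):
        # Evaluate the counting measure on the set described by `indicator`
        return sum(1 for x, y in points if indicator(x, y))
    return mu

mu = sample_poisson_counting_measure(intensity=100.0)
in_B = lambda x, y: x <= 0.5 and y <= 0.5       # B = [0, 0.5] x [0, 0.5]
print("mu(B) =", mu(in_B))                       # E[mu(B)] = 100 * area(B) = 25
```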
Random measure
Physics,Mathematics
668
25,431,009
https://en.wikipedia.org/wiki/Fundamental%20lemma%20%28Langlands%20program%29
In the mathematical theory of automorphic forms, the fundamental lemma relates orbital integrals on a reductive group over a local field to stable orbital integrals on its endoscopic groups. It was conjectured by Robert Langlands in the course of developing the Langlands program. The fundamental lemma was proved by Gérard Laumon and Ngô Bảo Châu in the case of unitary groups and then by Ngô for general reductive groups, building on a series of important reductions made by Jean-Loup Waldspurger to the case of Lie algebras. Time magazine placed Ngô's proof on the list of the "Top 10 scientific discoveries of 2009". In 2010, Ngô was awarded the Fields Medal for this proof. Motivation and history Langlands outlined a strategy for proving local and global Langlands conjectures using the Arthur–Selberg trace formula, but in order for this approach to work, the geometric sides of the trace formula for different groups must be related in a particular way. This relationship takes the form of identities between orbital integrals on reductive groups G and H over a nonarchimedean local field F, where the group H, called an endoscopic group of G, is constructed from G and some additional data. The first case considered was that of SL2. Langlands and Shelstad then developed the general framework for the theory of endoscopic transfer and formulated specific conjectures. However, during the next two decades only partial progress was made towards proving the fundamental lemma. Harris called it a "bottleneck limiting progress on a host of arithmetic questions". Langlands himself also commented on the problem in his writing on the origins of endoscopy. Statement The fundamental lemma states that an orbital integral O for a group G is equal to a stable orbital integral SO for an endoscopic group H, up to a transfer factor Δ: $\mathrm{SO}_{\gamma_H}(1_{K_H}) = \Delta(\gamma_H, \gamma_G)\, O^{\kappa}_{\gamma_G}(1_{K_G})$, where F is a local field, G is an unramified group defined over F, in other words a quasi-split reductive group defined over F that splits over an unramified extension of F, H is an unramified endoscopic group of G associated to κ, KG and KH are hyperspecial maximal compact subgroups of G and H, which means roughly that they are the subgroups of points with coefficients in the ring of integers of F, 1KG and 1KH are the characteristic functions of KG and KH, Δ(γH,γG) is a transfer factor, a certain elementary expression depending on γH and γG, γH and γG are elements of H and G representing stable conjugacy classes, such that the stable conjugacy class of G is the transfer of the stable conjugacy class of H, κ is a character of the group of conjugacy classes in the stable conjugacy class of γG, SO and O are stable orbital integrals and orbital integrals depending on their parameters. Approaches Shelstad proved the fundamental lemma for Archimedean fields. The lemma was subsequently verified for general linear groups, in some cases for 3-dimensional unitary groups, and for the symplectic and general symplectic groups Sp4 and GSp4. A paper of George Lusztig and David Kazhdan pointed out that orbital integrals could be interpreted as counting points on certain algebraic varieties over finite fields. Further, the integrals in question can be computed in a way that depends only on the residue field of F; and the issue can be reduced to the Lie algebra version of the orbital integrals. Then the problem was restated in terms of the Springer fiber of algebraic groups. The circle of ideas was connected to a purity conjecture; Laumon gave a conditional proof based on such a conjecture, for unitary groups. 
Laumon and Ngô then proved the fundamental lemma for unitary groups, using the Hitchin fibration introduced into the subject by Ngô, which is an abstract geometric analogue of the Hitchin system of complex algebraic geometry. Waldspurger showed for Lie algebras that the function field case implies the fundamental lemma over all local fields, and that the fundamental lemma for Lie algebras implies the fundamental lemma for groups. Notes References External links Gerard Laumon lecture on the fundamental lemma for unitary groups Algebraic groups Automorphic forms Theorems in abstract algebra Lemmas in number theory Langlands program
Fundamental lemma (Langlands program)
Mathematics
880
792,022
https://en.wikipedia.org/wiki/United%20Nations%20Information%20and%20Communication%20Technologies%20Task%20Force
The United Nations Information and Communication Technologies Task Force (UN ICT TF) was a multi-stakeholder initiative associated with the United Nations which was "intended to lend a truly global dimension to the multitude of efforts to bridge the global digital divide, foster digital opportunity and thus firmly put ICT at the service of development for all". Establishment The UN ICT Task Force was created by United Nations Secretary-General Kofi Annan in November 2001, acting upon a request by the United Nations Economic and Social Council (ECOSOC) dated July 11, 2000, with an initial term of mandate of three years (until the end of 2004). It followed in the footsteps of the World Economic Forum (WEF) Global Digital Divide Initiative (GDDI) and the Digital Opportunities Task Force (DOT Force), established in 2000 by the G8 at their annual summit in Okinawa, Japan. Providing it with a home in the United Nations accorded the UN ICT Task Force, in the eyes of many developing countries, a broader legitimization than the previous WEF and G8 initiatives, even if these previous initiatives also included a multi-stakeholder approach with broad participation by stakeholders from industrialized and developing countries. Aims and objectives The Task Force's principal aim was to provide policy advice to governments and international organizations for bridging the digital divide. In addition to supporting the World Summit on the Information Society (WSIS) and leading the UN in developing ICT strategies for development, the Task Force's objective was to form partnerships between the UN system and states, private industry, trusts, foundations, donors, and other stakeholders. Membership and organization The UN ICT Task Force included the top ranks of the computer industry (Cisco Systems, Hewlett-Packard, IBM, Nokia, SAP, Siemens, Sun Microsystems), together with global NGOs (e.g., the Association for Progressive Communications), governments and international agencies. Its coordinating body was a multi-stakeholder bureau, assisted by a small secretariat at UN headquarters in New York. Technical advice was provided by a high-level panel of technical advisors. Activities United Nations Information Technology Service (UNITeS) Within the Report of the high-level panel of experts on information and communication technology (22 May 2000) suggesting a UN ICT Task Force, the panel welcomed the establishment of a United Nations Information Technology Service (UNITeS), suggested by Kofi Annan in "We the peoples: the role of the United Nations in the 21st century" (Millennium Report of the Secretary-General). The panel made suggestions on its configuration and implementation strategy, including that ICT4D volunteering opportunities make mobilizing "national human resources" (local ICT experts) within developing countries a priority, for both men and women. The initiative was launched at the United Nations Volunteers under the leadership of Sharon Capeling-Alakija and was active from February 2001 to February 2005. Initiative staff and volunteers participated in the World Summit on the Information Society (WSIS) in Geneva in December 2003. Challenge to Silicon Valley In November 2002 Kofi Annan issued a Challenge to Silicon Valley to create suitable systems at prices low enough to permit deployment everywhere. 
The Office of the United Nations High Commissioner for Refugees ran a refugee camp in Tanzania where the Global Catalyst Foundation had placed computers and communications equipment for the use of the Burundian refugees confined there. The International Telecommunication Union (ITU) worked with the Kingdom of Bhutan on a Simputer project. World Summit on the Information Society (WSIS) The Task Force was active, inter alia, in the process leading to the World Summit on the Information Society (WSIS) in Geneva in December 2003 and WSIS II in Tunis, Tunisia, in November 2005. To allow the Task Force to participate in the second phase of the WSIS, its original three-year mandate was extended by another year; it expired on 31 December 2005, with no further extension. Working groups The Task Force's stakeholders, members and the experts on the panel of technical advisors were active in working groups organized around four broad themes: ICT policy and governance Enabling environment Human resource development and capacity building ICT Indicators and MDG mapping Regional networks Regional activities were carried out in five regional networks—Africa, Latin America and the Caribbean, Asia, Arab States, and Europe and Central Asia. Meetings 2002, June 17–18: A session of the General Assembly of the United Nations was devoted to information and communication technologies for development, addressing the digital divide in the context of globalization and the development process. The session promoted coherence and synergies between various regional and international information and communication technologies initiatives. The meeting also contributed to the preparation of WSIS. Many countries were represented by high-level officials responsible for communications and for development. The Task Force held 10 semi-annual meetings in various places that served as important venues for exchanging best practices and for bringing the various stakeholders together to work on common themes. Most successful, in the eyes of the participants, were those meetings that were held in conjunction with a series of Global Forums: 1st meeting: at UN headquarters in New York City, NY, (United States) - November 19–20, 2001. 2nd meeting: at UN headquarters in New York City, NY, (United States) - February 3–4, 2002. 3rd meeting: at UN headquarters in New York City, NY, (United States) - September 30 - October 1, 2002, focused on ICT for development in Africa. It also reviewed the results of the first year of Task Force activities and agreed on an ambitious strategy for the next two years. 4th meeting: at the UN in Geneva, Switzerland - February 21–22, 2003, with a Private Sector Forum. 5th meeting: at WIPO in Geneva - September 12–13, 2003, to allow participants to discuss the Task Force's contribution to WSIS. 6th meeting: at UN headquarters in New York City, NY, (United States) - March 2004, with a Global Forum on Internet Governance. 7th meeting: at the Ministry of Foreign Affairs in Berlin, Germany - November 18–20, 2004, with a Global Forum on an Enabling Environment. 8th meeting: in Dublin, Ireland - April 13–15, 2005, with a Global Forum on Harnessing the Potential of ICTs in Education. 9th meeting: at ILO in Geneva, Switzerland - October 1, 2005. 10th (final) meeting: at the World Summit on the Information Society in Tunis, Tunisia - November 17, 2005. In addition, a Global Roundtable Forum on "Innovation and Investment: Scaling Science and Technology to Meet the MDGs" was held in New York City, 13 September 2005. 
The primary focus of the Forum was on the critical role of science, technology and innovation, especially information and communication technologies, in scaling-up grassroots, national and global responses to achieve the Millennium Development Goals. WSIS II in Tunis Parallel to the booth at the ICT4ALL exhibition, a series of events was held under the auspices of the UN ICT Task Force and its members: Measuring the Information Society The Partnership for Measuring ICT for Development involves 11 organizations—Eurostat, the International Telecommunication Union (ITU), the Organisation for Economic Co-operation and Development (OECD), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations ICT Task Force, the five United Nations Regional Commissions and the World Bank. Role of Parliaments in the Information Society Key parliament leaders presented their views on the role that national and regional assemblies can play in building the information society at a “High-level Dialogue on Governance, Global Citizenship and Technology”, on 16 November. Choosing the Right Technologies for Education At this workshop, the Global e-School Initiative presented the Total Cost of Ownership Calculator—a framework for identifying and selecting the right ICT for schools by assessing their benefits, feasibility and costs. Building Partnerships for the Information Society Two high-level round tables on 16 November focused on “Regional Perspectives for the Global Information Society” and on “Women in the Information Society: Building a Gender Balanced Knowledge-based Economy”. Putting ICT to Work for the Millennium Development Goals and the UN Development Agenda The 17 November round table examined how ICT can be applied to the achievement of the internationally agreed development goals, and discussed ways to raise awareness of ICT as an enabler of development. Achieving Better Quality and Cost Efficiency in Health Care and Education through ICT The 17 November panel demonstrated the potential of ICT to improve quality and cost efficiency of key public services, with specific focus on education and health care. Bridging the Digital Divide with Broadband Wireless Internet The 17 November round table focused on the critical role that broadband wireless infrastructure deployments play in bridging the digital divide. Outcomes from WSIS GESCI One of the notable outcomes of the work of the UN ICT Task Force was the creation in 2003 of the Global E-Schools and Communities Initiative (GESCI), an international NGO initially located in Dublin, Ireland, to improve education in schools and communities through the use of information and communication technologies. GESCI was officially launched during the WSIS. Today GESCI (www.gesci.org) is located in Nairobi, Kenya. It has evolved into an organization engaging with governments and ministries, development partners, the private sector and communities to provide strategic advice, coordinate policy dialogue, conduct research and develop and implement models of good practice for the widespread use and integration of ICTs in formal education and other learning environments, within the context of supporting the development of inclusive knowledge societies and the achievement of the SDGs. ePol-Net Another outcome is the Global ePolicy Resource Network (ePol-NET), designed to marshal global efforts in support of national e-strategies for development. 
The network provides ICT policymakers in developing countries with the depth and quality of information needed to develop effective national e-policies and e-strategies. The network was first proposed by the members of the Digital Opportunities Task Force (DOT Force), who merged their activities with the UN ICT Task Force in 2002. The ePol-Net was also officially launched during the WSIS. Global Centre for ICT in Parliament Another outcome of the WSIS is the Global Centre for ICT in Parliament. Launched by the UN Department of Economic and Social Affairs (UNDESA) in cooperation with the Inter-parliamentary Union (IPU) on the occasion of the World Summit of the Information Society (WSIS) in Tunis in November 2005, the Global Centre for Information and Communication Technologies in Parliament responds to the common desire to build a people-centred, inclusive and development-oriented information society, where legislatures are empowered to better fulfill their constitutional functions through Information and Communication Technologies (ICT). The Global Centre for ICT in Parliament acts as a clearing house for information, research, innovation, technology and technical assistance, and promotes a structured dialogue among parliaments, centres of excellence, international organizations, the civil society, the private sector and the donor community, with the purpose of enhancing the sharing of experiences, the identification of best practices and the implementation of appropriate solutions. Follow-up The task of bridging the digital divide remains unfinished. The WSIS has called for an Internet Governance Forum to allow for a global multi-stakeholder discussion of issues related to the governance of the global resource that the Internet represents. The WSIS also called for a follow-up and implementation process, for which the principles embodied in the multi-stakeholder composition and workings of the UN ICT TF can provide a useful model. Work is also being carried on by the UN Group on the Information Society (UN GIS), with a focus on the UN System, and by the successor to the UN ICT TF, the Global Alliance for ICT and Development (GAID), with an international development emphasis. Selected documents Report of the high-level panel of experts on information and communication technology (22 May 2000), suggesting a UN ICT Task Force. Draft Ministerial Declaration (11 July 2000), asking for the establishment of the UN ICT TF. Publication series As part of its work, the Task Force and its members have published a series of books on various topics related to the work of the Task Force. These books are available in the UN bookstore, at Amazon (partially), or in PDF form: UN ICT Task Force Series 1 - Information Insecurity: A Survival Guide to the Uncharted Territories of Cyber-Threats and Cyber-Security (By Eduardo Gelbstein, Ahmad Kamal) - July 2005, UN ICT Task Force Series 2 - Information and Communication Technologies for African Development: An Assessment of Progress and Challenges Ahead (Edited with Introduction by Joseph O. Okpaku, Sr., Ph.D.) 
- July 2005, UN ICT Task Force Series 3: The Role of Information and Communication Technology in Global Development - Analyses and Policy Recommendations (Edited with introduction by Abdul Basit Haqqani) - July 2005, UN ICT Task Force Series 4: Connected for Development: Information Kiosks and Sustainability (By Akhtar Badshah, Sarbuland Khan and Maria Garrido) - July 2005, UN ICT Task Force Series 5 - Internet Governance: A Grand Collaboration (By Don MacLean) - July 2005, UN ICT Task Force Series 6 - Creating an Enabling Environment: Toward the Millennium Development Goals (By Denis Gilhooly) - September 2005, UN ICT Task Force Series 7 - WTO, E-commerce and Information Technologies: From the Uruguay Round through the Doha Development Agenda (By Sacha Wunsch-Vincent, Edited by Joanna McIntosh) UN ICT Task Force Series 8: The World Summit on the Information Society: Moving from the Past into the Future (Edited by Daniel Stauffacher and Wolfgang Kleinwächter) UN ICT Task Force Series 9: Harnessing the Potential of ICT for Education – A Multistakeholder Approach (Edited by Bonnie Bracey and Terry Culver) UN ICT Task Force Series 10: Village Phone Replication Manual (By David Keogh and Tim Wood) - September 2005, UN ICT Task Force Series 11: Information and Communication Technology for Peace - The Role of ICT in Preventing, Responding to and Recovering from Conflict (By Daniel Stauffacher, William Drake, Paul Currion and Julia Steinberger) UN ICT Task Force Series 12: Reforming Internet Governance: Perspectives from the Working Group on Internet Governance (WGIG) (Edited by William J. Drake) See also International Telecommunication Union Multistakeholder Model eCorps Geekcorps Geeks Without Bounds One Laptop per Child United Nations Information Technology Service (UNITeS) World Computer Exchange Notes External links UNICTTF official homepage Information about the Digital Opportunities Task Force (DOT Force) Global eSchools and Communities Initiative (GeSCI) Digital divide Information and communication technologies for development Internet governance organizations Organizations established by the United Nations Task forces
United Nations Information and Communication Technologies Task Force
Technology
3,023
7,339,097
https://en.wikipedia.org/wiki/MPEG%20program%20stream
Program stream (PS or MPEG-PS) is a container format for multiplexing digital audio, video and more. The PS format is specified in MPEG-1 Part 1 (ISO/IEC 11172-1) and MPEG-2 Part 1, Systems (ISO/IEC standard 13818-1/ITU-T H.222.0). The MPEG-2 program stream is analogous to the ISO/IEC 11172 Systems layer and is forward compatible with it. Program streams are used on DVD-Video discs and HD DVD video discs, but with some restrictions and extensions. The filename extensions are VOB and EVO respectively. Coding structure Program streams are created by combining one or more packetized elementary streams (PES), which have a common time base, into a single stream. The program stream is designed for reasonably reliable media such as disks, in contrast to the MPEG transport stream, which is designed for data transmission in which loss of data is likely. Program streams have variable-size records and make minimal use of start codes, which would make over-the-air reception difficult, but they have less overhead. The program stream coding layer allows only one program of one or more elementary streams to be packaged into a single stream, in contrast to the transport stream, which allows multiple programs. An MPEG-2 program stream can contain MPEG-1 Part 2 video, MPEG-2 Part 2 video, MPEG-1 Part 3 audio (MP3, MP2, MP1) or MPEG-2 Part 3 audio. It can also contain MPEG-4 Part 2 video, MPEG-2 Part 7 audio (AAC) or MPEG-4 Part 3 (AAC) audio, but these are rarely used. The MPEG-2 program stream has provisions for non-standard data (e.g. AC-3 audio or subtitles) in the form of so-called private streams. The International Organization for Standardization authorized SMPTE Registration Authority, LLC as the registration authority for MPEG-2 format identifiers. It publishes a list of compression formats which can be encapsulated in MPEG-2 transport stream and program stream. Coding details See also Elementary stream MPEG transport stream References External links MPEG-2 Official MPEG web site BBC On MPEG RFC 3555 - MIME Type Registration of RTP Payload Formats (video/MP2P, video/MP1S) Digital container formats MPEG MPEG-2 ITU-T recommendations
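The pack- and PES-level structure described above can be explored with a few lines of code. The sketch below is a minimal illustration, not a full demultiplexer; the stream-id values it recognizes (pack header 0xBA, system header 0xBB, private stream 1 0xBD, audio 0xC0–0xDF, video 0xE0–0xEF) are those defined in ISO/IEC 13818-1. It scans a program stream file, such as a DVD .VOB, for 0x000001 start-code prefixes and reports what it finds:

```python
import sys

NAMED_IDS = {
    0xB9: "program end code",
    0xBA: "pack header",
    0xBB: "system header",
    0xBD: "private stream 1 (e.g. AC-3 audio, subtitles)",
    0xBE: "padding stream",
}

def describe(stream_id: int) -> str:
    if 0xC0 <= stream_id <= 0xDF:
        return f"MPEG audio stream {stream_id - 0xC0}"
    if 0xE0 <= stream_id <= 0xEF:
        return f"video stream {stream_id - 0xE0}"
    return NAMED_IDS.get(stream_id, f"other (0x{stream_id:02X})")

def scan(path: str, limit: int = 20) -> None:
    data = open(path, "rb").read()
    found, pos = 0, 0
    while found < limit:
        pos = data.find(b"\x00\x00\x01", pos)   # start-code prefix
        if pos < 0 or pos + 3 >= len(data):
            break
        code = data[pos + 3]
        if code >= 0xB9:        # codes below 0xB9 belong to the video layer
            print(f"offset {pos:#010x}: {describe(code)}")
            found += 1
        pos += 3

if __name__ == "__main__":
    scan(sys.argv[1])           # e.g. python scan_ps.py movie.VOB
```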
MPEG program stream
Technology
505
46,552,772
https://en.wikipedia.org/wiki/LG%20G4
The LG G4 is an Android smartphone developed by LG Electronics as part of the LG G series. It was unveiled on 28 April 2015, first released in South Korea on 29 April 2015, and widely released in June 2015 as the successor to 2014's G3. The G4 is primarily an evolution of the G3, with revisions to its overall design, display and camera. The G4 received mixed to positive reviews; while praising the G4's display quality, camera, and overall performance, critics characterized the G4 as being a robust device that did not contain enough substantial changes or innovation over its predecessor to make the device stand out against its major competitors, but one that could appeal to power users needing a smartphone with expandable storage and a removable battery, due to the exclusion of these features from its main competitor on launch, the Samsung Galaxy S6. The device also became the subject of criticism due to instances of hardware failure caused by manufacturing defects, dubbed "bootloops", which culminated in a class-action lawsuit filed in March 2017. Specifications Hardware Design The design of the G4 is an evolution of the G3, maintaining elements such as its rear-located volume and camera buttons. Several back cover options are available, including plastic covers with a diamond pattern and plastic covers coated in leather, stitched down the middle. Six leather color options are available. Display, chipsets, battery The G4 features a 1440p "Quantum IPS" display, which LG stated would provide better contrast, color accuracy and energy efficiency than another display that LG did not explicitly specify. The G4 utilizes a hexa-core Snapdragon 808 with 3 GB of RAM, consisting of four low-power Cortex-A53 cores and two Cortex-A57 cores. The G4 includes a removable 3000 mAh battery and supports Qualcomm Quick Charge 2.0 technology with a compatible AC adapter, which is not included. The G4 comes with 32 GB of storage, with the option to expand the amount of available storage with a microSD card up to 2 TB in size. Camera The rear-facing camera has a 16-megapixel sensor with a f/1.8 aperture lens, infrared active autofocus (by way of a time-of-flight sensor), three-axis optical image stabilization, and LED flash. An "RGB color spectrum sensor" is located below the flash, which analyzes ambient lighting to optimize the white balance and flash color to generate more natural-looking images. The front-facing camera is 8-megapixel with an aperture of f/2.0. While the main camera can record video at 2160p (4K) at 30 fps and 720p (HD) at 120 fps, 1080p (Full HD) is unconventionally limited to 30 fps, rather than the 60 fps offered by competing mobile phones such as the Samsung Galaxy S6, the HTC One M9 and the iPhone 6. Software The G4 is supplied with Android 5.1 "Lollipop", although the overall user experience is relatively similar to that of the G3. The camera software was upgraded with raw image support, along with a new manual mode offering the ability to adjust the focus, shutter speed, ISO and white balance. Optionally, a photo can be taken automatically by double-clicking the lower volume button when the screen is off. The "Glance View" feature allows users to view notifications when the display is off by dragging down. On October 14, 2015, LG announced that the G4 would be upgraded to Android 6.0 "Marshmallow", with its release beginning in Poland the following week, followed by releases in other European countries, South Korea, and the U.S. on Sprint. 
It enables new features such as "Google Now on Tap", which allows users to perform searches within the context of information currently being displayed on-screen, and "Doze", which optimizes battery usage when the device is not being physically handled. By February 2016, it had reached wider distribution, such as in Canada. Android 7.0 "Nougat" became available for selected models in July 2017. Reception The G4 was met with mixed to positive reception from critics. The Verge felt that the G4 could appeal to power users alienated by the removal of expandable storage and replaceable batteries on the then-recently released Samsung Galaxy S6. The display was praised for its improved color accuracy and energy efficiency over the G3, with the site remarking that it was "as good, if not better" than the S6's. The G4's rear camera was praised for its quality and color reproduction, along with its "comprehensive" manual mode and its "uncommon" ability to save raw images, but it was noted that its autofocus sometimes missed focus or was slow to achieve focus, and that the manual mode did not offer saturation or sharpness controls. LG's software was criticized for being relatively unchanged from the G3, and for suffering from feature creep and "ugly" aesthetics. The Verge gave the G4 a 7.9 out of 10, concluding that it "actually functions just fine — but you're not going to see it in the hands of every person at the state fair this summer." The performance of the LG G4 was contrasted with devices that utilize Qualcomm's Snapdragon 810 (such as the LG G Flex 2), which was known to have overheating problems. Ars Technica felt that the need to throttle CPU-intensive tasks to prevent overheating had a negative effect on the 810's performance, arguing that as a result, the G4's Snapdragon 808 performed better in some cases than the 810 (represented in comparison testing by the HTC One M9), but that the Exynos processor of the Galaxy S6 had better overall performance than the two Snapdragon chips on benchmarks. Considering its camera to be the best part of the device, Ars Technica concluded that the G4 was "a perfectly competent smartphone, but doesn't really stand out much." Technical problems Touchscreen problems Some users reported inconsistencies in the performance of the G4's touchscreen, with some units—particularly U.S. T-Mobile and Verizon variants—having problems registering quick taps and swipes. LG released an update to fix the problem in its keyboard app in June 2015, followed by Verizon releasing a major OTA update in November 2015 to fix the touchscreen problems and address other problems with the device. "Bootloop" hardware failure In January 2016, LG confirmed that some G4 units had a manufacturing defect that eventually caused them to enter an unrecoverable reboot loop, resulting from "loose contact between components". The company stated that it would repair or replace affected devices under warranty at no charge. In March 2017, a class-action lawsuit was filed against LG Electronics in the U.S. state of California, alleging that despite acknowledging its existence, LG continued to produce units of the G4 and its sister model, the V10, with the defect, and distributed phones that could succumb to the defects as warranty replacements. The lawsuit stated that LG did not recall or "offer an adequate remedy to consumers" who bought the two models, nor provide any remedy for devices that fell outside of the one-year warranty period. The lawsuit was never certified as a class action, and was sent to arbitration. 
In January 2018, LG agreed to pay the participants in the lawsuit either a $700 credit towards the purchase of an LG smartphone or $425. Since the lawsuit was not certified as a class action, consumers who did not actually participate in the lawsuit did not receive any payment. See also Comparison of smartphones References External links Android (operating system) devices LG Electronics smartphones Mobile phones introduced in 2015 Mobile phones with user-replaceable battery Mobile phones with 4K video recording Discontinued flagship smartphones Mobile phones with infrared transmitter
LG G4
Technology
1,672
18,332,975
https://en.wikipedia.org/wiki/Giant%20resonance
In nuclear physics, giant resonance is a high-frequency collective excitation of atomic nuclei, a property of many-body quantum systems. In the macroscopic interpretation of such an excitation as an oscillation, the most prominent giant resonance is a collective oscillation of all protons against all neutrons in a nucleus. In 1947, G. C. Baldwin and G. S. Klaiber observed the giant dipole resonance (GDR) in photonuclear reactions; the giant quadrupole resonance (GQR) was discovered in 1972, and the giant monopole resonance (GMR) in 1977, both in medium and heavy nuclei. Giant dipole resonance Giant dipole resonances may result in a number of de-excitation events, such as nuclear fission, emission of neutrons or gamma rays, or combinations of these. Giant dipole resonances can be caused by any mechanism that imparts enough energy to the nucleus. Classical causes are irradiation with gamma rays at energies from 7 to 40 MeV, which couple to nuclei and either create or increase the dipole moment of the nucleus by adding energy that separates charges within it. The process is the inverse of gamma decay, but the energies involved are typically much larger, and the dipole moments induced are larger than those occurring in the excited nuclear states responsible for ordinary gamma decay. High-energy electrons of more than 50 MeV may cause the same phenomenon by coupling to the nucleus via a "virtual gamma photon", in a nuclear reaction that is the inverse of internal conversion decay. See also Neutron emission References Further reading M. N. Harakeh, A. van der Woude: Giant Resonances: Fundamental High-Frequency Modes of Nuclear Excitation, Oxford Studies in Nuclear Physics, Oxford University Press, USA, July 2001. P. F. Bortignon, A. Bracco, R. A. Broglia: Giant Resonances, Contemporary Concepts in Physics, CRC Press, July 1998. External links Chomaz, Ph.: Collective excitations in nuclei Brink, D. M.: Giant resonances in excited nuclei Giant nuclear resonances, AccessScience.com Giant nuclear resonance, Answers.com, referring to the McGraw-Hill Encyclopedia of Science & Technology Nuclear physics
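The centroid energy of the giant dipole resonance varies smoothly with mass number A. A minimal sketch of this trend in Python, using the commonly quoted two-term empirical fit E ≈ 31.2·A^(−1/3) + 20.6·A^(−1/6) MeV (a standard parameterization from the literature, not stated in this article):

```python
def gdr_peak_energy(mass_number: int) -> float:
    """Empirical estimate of the GDR centroid energy in MeV.

    Uses the commonly quoted two-term fit
    E = 31.2 * A**(-1/3) + 20.6 * A**(-1/6),
    which interpolates between light and heavy nuclei.
    """
    a = float(mass_number)
    return 31.2 * a ** (-1.0 / 3.0) + 20.6 * a ** (-1.0 / 6.0)

for name, a in [("O-16", 16), ("Sn-120", 120), ("Pb-208", 208)]:
    print(f"{name}: ~{gdr_peak_energy(a):.1f} MeV")
# Heavy nuclei come out near 13-16 MeV and light nuclei above 20 MeV,
# consistent with the 7-40 MeV gamma-ray energies mentioned above.
```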
Giant resonance
Physics
481
7,710,343
https://en.wikipedia.org/wiki/Land%20banking
Land banking is the practice of aggregating parcels of land for future sale or development. While in many countries land banking may refer to various private real estate investment schemes, in the United States it refers to the establishment of quasi-governmental county or municipal authorities tasked with managing an inventory of surplus land. In some cases the practice is run as a scam, with land being sold above its market value and its potential for future returns exaggerated. Municipal land banks in the United States Definition Land banks are quasi-governmental entities created by counties or municipalities to effectively manage and repurpose an inventory of underused, abandoned, or foreclosed property. They are often chartered with powers that enable them to accomplish these goals in ways that existing government agencies cannot. While the land bank "model" has gained broad support and has been implemented in a number of cities, it is implemented differently in each so as to best address the needs of the municipality, the state, and the local legal context in which it was created. History Land banking originated in the 1920s and 1930s as a means of making low-priced land available for housing and ensuring orderly development. The period of deindustrialization in the United States, coupled with increased suburbanization in the middle of the 20th century, left many American cities with large amounts of vacant and blighted industrial, residential, and commercial property. Beginning in the early 1970s, municipalities began to seek solutions to manage decline or spur revitalization in once prosperous city neighborhoods. The first land bank was created in St. Louis in 1971. While additional municipalities continued to adopt them at a trickle, it wasn't until the mid-2000s that land banks became viewed as a tested, reliable, and accepted model and experienced widespread implementation – particularly after the success of the Genesee County Land Bank. In 2009, the Department of Housing and Urban Development issued a report embracing land banks as a best-practice model for municipalities dealing with the effects of the 2007–2008 financial crisis and the ensuing foreclosure crisis. Investment land banking by country United Kingdom Land banking developed in the late 17th century in the British Isles and was previously the preserve of the landed gentry or real estate developers such as Nicholas Barbon. Many reputable commercial building companies engage successfully in land banking for future building projects. Companies also purchase land sites, divide them into smaller plots, and then offer these plots for sale to individual investors. This relatively new practice in the UK does not fall under the control of the Financial Conduct Authority. Many people are wary of this form of investment, as many plot-based land banking companies have failed or been closed down. There are currently no audited successes recorded for UK plot-based land banking, despite the UK having gone through a major property boom between 2002 and 2007. A land banking scheme that is a collective investment scheme is a "regulated activity" for the purposes of the Financial Services and Markets Act 2000 and, according to section 19(1), may only be operated in the UK by a person who is either authorised or exempt. Section 26 provides that an agreement made by a person in contravention of this is unenforceable, and any sums paid may be recovered together with compensation for any loss suffered.
After recent FCA enforcement of this regulation, many companies selling UK land plots have moved outside the European Union and only offer land plots to non-UK residents, who are not protected by FCA regulations. Companies offering land banking plots in the UK Since the changes to the Land Registration Act, a number of companies offering UK land plots as investments have been formed. Typically this land is greenbelt, nature conservation, flood plain, agricultural or otherwise protected land unsuitable for development. There are no recorded successful planning permission applications for plots sold under such collective investment schemes. There have been considerable losses recorded by investors in UK land plot investment schemes. A large number of British companies offering UK land plots have failed or been shut down by the Financial Services Authority (FSA) or other authorities. Some companies have moved offshore after an FSA investigation. Some now offer UK land plots from locations such as Dubai or Singapore, where the local authorities do not regulate such activities or are not aware of the high-risk nature of the investment. In June 2010 the Monetary Authority of Singapore (MAS) issued a warning that land banking plot schemes may be scams, with a specific focus on companies offering land from the UK and Canada. Sales methods A company representative may contact an individual by telephone, in temporary shopping center booths, or at property shows, and offer a strategic land investment in the UK. Very often UK government or industry statistics, the proximity of the land to built-up areas, or the recent history of UK house prices are quoted as a demonstration of why the land plot is a great investment. Verbal communication will often indicate that the land is fast-tracked for building approval and has strong potential as building land. When pricing the land, reference is typically made to approved building land prices at the market peak. Very often the land banking company will present detailed plans showing a housing development on the site. These plans are often referred to as "pre-approved", "concept" or "predevelopment". The salesperson will focus on the potential future value of the land against the current selling price. No reference is ever made to the low value of greenbelt or agricultural land, or to the issues involved with long-term maintenance or with collectively selling tiny plots of land. The sales price is typically increased 10–100 times over the current value of the land. Plans shown have no validity in UK planning law and cannot be considered an indication of progress in the planning process. No written contractual promise is ever given for planning permission, despite the typically extreme optimism of the salesperson. The salesperson will typically never mention that the land is protected or greenbelt land and cannot be developed under current planning regulations. There is typically no realistic possibility of obtaining planning permission in any reasonable timeframe. The investor may end up paying a considerable amount of money for a small area of low-value land which is very likely to stand undeveloped. Once the general public becomes aware of the lack of viability of the proposed plot investment scheme, the real value of the individual plots collapses. This is typically followed by the land plot company liquidating completely or relocating to another legal jurisdiction.
For customers who show a willingness to purchase such schemes, there may also be attempts to sell additional plot-based land banking products at other locations, or other high-yield investment programmes. Customers may also be added to "suckers lists", which are then sold to other companies offering similar schemes. When a land banking plot company fails, plot investors may be offered investment recovery or planning services for a fee. Such services are typically fraudulent or unsuccessful and lead to a further loss of money for the investor. Controversies A You and Yours documentary, first aired on BBC Radio 4 in December 2006, criticized the services offered by many land banking companies in the United Kingdom, suggesting that they were scamming their customers. A land banking scam is based on the very low chance of any of the plots receiving planning permission and the very high profit margins taken on the land plots, with the seller using misleading marketing tactics to convince buyers that they are making a sound investment. A key strategy used for selling United Kingdom land plots is to imply that because a customer owns the land plot, they cannot lose their money. The land banking company typically suggests dramatic annual increases in the value of the land plots and a very optimistic time frame for successful planning applications; neither is ever contractually guaranteed. Typically the land banking company sells a land plot at a premium of 15 to 100 times the current market value of undeveloped land. A purchaser might pay £15,000 for a land plot that has a current market value of only £500. On this basis, most of the investment is not in land, and small percentage annual increases in the value of the land plot are meaningless. The actual investment is in a proposed service to deliver valuable approved building land in the future. If that service is never delivered or is not successful, the remaining land asset is normally worthless. Should the selling company fail or disappear, the plot owner cannot economically sell the plot, as the administrative effort and cost of sale typically exceed the value of the land plot. Many land banking companies target victims outside the United Kingdom, particularly in Canada, Singapore, Thailand, Brunei and Malaysia. Residents of these countries may be unfamiliar with the UK property market and local planning regulations such as green belt zoning. In 2008, the land banking firm UKLI was placed into administration due to insolvency, despite having taken £69 million from 4,500 people for land plots. Land International was closed down in 2008 after losing investors £10 million, and the same Land International plots were later offered for sale in Asia. In 2010, Land International (Far East) failed, causing investors to lose S$6 million (£2.5M). MP David Heath requested a debate in the House of Commons following the offering of 209 plots in the village of Dean, saying that "while land banking may not be illegal it is undoubtedly a scam". The UK Land Registry issued a press release on January 15, 2009 advising consumers that it had published a guide warning against land banking investment schemes.
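The arithmetic behind the example above is easy to make concrete. A minimal sketch using the £15,000 price and £500 market value quoted above (the 10% annual growth rate is a purely hypothetical figure for illustration):

```python
plot_price = 15_000.0  # price paid for the plot (example above)
land_value = 500.0     # current market value of the undeveloped land

premium = plot_price / land_value
service_share = 1.0 - land_value / plot_price
print(f"Premium paid: {premium:.0f}x market value")                 # 30x
print(f"Share of outlay not backed by land: {service_share:.1%}")  # 96.7%

# Even under a hypothetical 10% annual rise in undeveloped-land values,
# the land alone takes decades to reach the price paid if planning
# permission never arrives:
years, value = 0, land_value
while value < plot_price:
    value *= 1.10
    years += 1
print(f"Years for the land alone to reach the price paid: {years}")  # 36
```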
Land Registry Head of Corporate Legal Services Mike Westcott Rudd said that the public were being "misled about the prospects of obtaining planning permission", with well-known banks and developers being falsely cited as partners in the projects, and that in some cases forged Land Registry paperwork was being presented to suggest that planning approval existed where it did not. As a result of the significant controversy and media coverage land banking received, many directors and officials of the companies involved were prosecuted and handed custodial sentences by the courts. United States Government In 2011 New York State passed a land bank statute authorizing the establishment of nonprofits in each county to take title to vacant, abandoned homes so they can be rehabilitated, sold or demolished in an orderly fashion. Many upstate counties, including Erie, Onondaga, Schenectady and Albany counties, have abandoned homes left behind as people moved to the suburbs. Some properties have been abandoned due to back taxes, with the city taking title. The robo-signing settlement gave Attorney General Eric Schneiderman the wherewithal to fund land banks in Schenectady and Albany. The state of Michigan also has a land bank program. Ohio passed land bank legislation in 2009. Commercial Land banking as an investment is nothing new to America. Several self-made billionaires started by purchasing large tracts in California where development opportunities had not yet arisen. People such as Bob Hope and Donald Trump have reaped tremendous rewards from buying large areas and holding the property until the market commanded a considerable return when sold. There have, however, also been many land scams in the US, such as the large areas of Florida swampland that were sold as being suitable for real estate. Florida land scams date back as far as the 1920s Florida land rush, and many Florida counties bear traces of them today. Polk County, Florida, in particular has been devastated by land banking scams. Polk County, lying between the city of Tampa in Hillsborough County and the city of Orlando in Orange County, has been a hotbed for speculative land development. North Polk County falls within the lower Green Swamp, which the State of Florida has declared a "land of critical state concern". From the 1970s through the late 1980s, land in the Green Swamp was sold as being suitable for real estate development. The development of Disney World and the attention it received was the sales tool used to persuade individuals to buy one-acre lots at high speculative prices, ranging from $2,000 to as high as $15,000 per acre. Florida land banking scams continue today and are mostly operated from outside the United States: unwary foreign customers are sold Florida land from outside U.S. borders through contract for deed arrangements. Australia In December 2008, during the Great Recession, the Foreign Investment Review Board (FIRB) relaxed laws regarding foreign investment in Australian real estate. Under previous legislation, temporary residents were only allowed to purchase a property for principal place of residence purposes valued at up to $300,000. Under new laws in force since February 2009, this monetary limit has been removed. In March 2010, the Reserve Bank of Australia governor announced that it was monitoring the effect of the rule change on the housing market.
On April 24, 2010, Assistant Treasurer Senator Nick Sherry announced a tightening of foreign investment laws in response to a public backlash against the changes made a year earlier. While they are still entitled to purchase a property of any value, temporary residents must now sell their residence upon leaving the country and must report all purchases to the Foreign Investment Review Board, effectively eliminating this land banking loophole. However, foreign companies are allowed to purchase property to house local staff, so it remains possible for foreign individuals to create a registered company for the sole purpose of purchasing property in Australia, bypassing the fix to the loophole. As of April 20, 2010, the COAG has agreed that the Housing Supply and Affordability Reform Working Party will extend its land audit work to examine underused private holdings of large parcels of land by mid-2010. Agricultural land banking While most land banking is based on the prospect of urban areas expanding at the expense of rural areas, agricultural land is expanding at the expense of virgin land in various parts of the world. Agricultural land banking involves purchasing virgin land that has been identified as suitable for agriculture because of its climate, topography, and soil properties, where the buyer has no intention of working the land or leasing it out. When purchased by the land banking investor, such land is often rather far from existing infrastructure, which keeps prices low. The investor anticipates that, because of the area's natural productive potential, an agricultural infrastructure (sufficient roads, specialised contractors, grain storage) will develop, with more land put under cultivation and land values multiplying. Agricultural land banking is found where large tracts of fertile virgin land still exist, where valuations are low, and where legislation allows large land holdings (freehold) by domestic and foreign investors. Typical countries for such investments in recent years have been Argentina, Brazil, Uruguay and Paraguay, where land prices have appreciated accordingly. Though the perception that the world's fertile land is a limited and valuable asset is by no means new, it received renewed public and media attention with the global food crisis, when phrases like peak wheat and peak soil were coined. See also Land economy Landlord Land recycling Land reform Planning permission References External links UK Sevenoaks council public statement on land banking UK Land registry publishes warning on land bank "investment" schemes UK Government Consumer Site – land banking scams UK Financial Services Authority – land banking statement Agricultural economics Investment Confidence tricks Land use Urban planning
Land banking
Engineering
3,029
25,617,311
https://en.wikipedia.org/wiki/Kansas%20Building%20Science%20Institute
The Kansas Building Science Institute is a vocational school located in Manhattan, Kansas. The institute conducts week-long Home Energy Rating System (HERS) rater trainings as well as Building Performance Institute (BPI) and weatherization (WX) trainings, among others. Training center The institute conducts trainings in a multi-purpose classroom and training center in Manhattan. The campus also includes a furnace lab, a mobile home used for weatherization trainings, and an attached house on which to perform test ratings and inspections. Other houses around Manhattan are also used for this purpose. References External links Kansas Building Science Institute Vocational education in the United States Education in Riley County, Kansas Education in Kansas Building engineering organizations 1996 establishments in Kansas Manhattan, Kansas
Kansas Building Science Institute
Engineering
146
38,523,559
https://en.wikipedia.org/wiki/Mohammad%20Javad%20Tondguyan
Mohammad Javad Bagher Tondguyan (; 16 June 1950 – 16 December 1991) was an Iranian engineer and petroleum minister under Prime Minister Mohammad-Ali Rajai from 2 September to 3 November 1980, when he was captured by Iraqi forces during the Iran–Iraq War. Early life and education Tondguyan was born in Tehran on 16 June 1950. He became involved in the opposition movement against Shah Mohammad Reza Pahlavi in 1967 and was detained for eleven months and interrogated by SAVAK. During this period he met Mohammad Khatami. From 1968 Tondguyan studied oil engineering at the Abadan Institute of Technology, now the Petroleum University of Technology, where he was head of the Islamic Association. The association hosted Ali Shariati, one of the philosophical and political leaders of the Islamic revolution, as a speaker during the 1960s and 1970s. Tondguyan was also one of the figures who disseminated the views of Ayatollah Ruhollah Khomeini in Abadan during this period. He graduated from the institute in 1972. He also attended the Iran School of Management, obtaining a degree in 1978. Career Following his graduation, Tondguyan began to work in the Tehran refinery. He then worked for various oil companies in Iran until the 1979 revolution, after which he was appointed deputy science minister. On 25 September 1980, Tondguyan was named oil minister, replacing Ali Akbar Moinfar in the post, and served in the cabinet of Mohammad-Ali Rajai. His successor as minister of oil was Mohammad Gharazi. Captivity and death Tondguyan was captured by Iraqi forces during a tour of the front along the Abadan road in Khuzestan province on 3 November 1980, in the initial phase of the Iran–Iraq War, which lasted from 1980 to 1988. His deputy and a ministry official were captured with him, and they were reportedly taken to Baghdad. In October 1990, Iraqi officials stated that he had committed suicide two years into his captivity. In November 1990, his wife and father denied this report. Tondguyan's body was delivered by the International Committee of the Red Cross to the Iranian government in 1991. The committee reported that he died of torture after eleven years of detention in Iraqi prisons. Personal life Tondguyan was married and had four children. As of 2018, his son Mohammad Mehdi was a member of the Tehran City Council. Notes References External links 20th-century Iranian engineers 20th-century Iranian politicians 1950 births 1991 deaths Iran–Iraq War prisoners of war Iranian prisoners of war Iranian torture victims National Iranian Oil Company people Oil ministers of Iran Petroleum engineers Prisoners of war held by Iraq Petroleum University of Technology alumni Politicians from Tehran
Mohammad Javad Tondguyan
Engineering
563
4,468,819
https://en.wikipedia.org/wiki/Urea%20phosphate
Urea phosphate is a 1:1 combination of urea and phosphoric acid that is used as a fertilizer. It has an NPK formula of 17-44-0 and is soluble in water, producing a strongly acidic solution. Urea phosphate is available in fertilizer vendor bags that carry a UP signet on the packaging. It is sometimes added to blends containing calcium nitrate, magnesium nitrate and potassium nitrate to produce water-soluble formulas such as 15-5-15 and 13-2-20. The acidity of urea phosphate allows Ca, Mg and P to coexist in solution; under less acidic conditions, Ca–Mg phosphates would precipitate. Urea phosphate is often used in drip irrigation to clean pipe systems. The phosphoric acid and urea molecules in the urea phosphate crystal structure form a complex hydrogen-bonding network, with the hydrogen atoms bonding more strongly to the urea molecules. The adduct freely dissociates when dissolved in water. Urea phosphate is produced as a non-ionic adduct of urea and phosphoric acid, with the typical 17-44-0 grade of fertilizer produced using wet-process phosphoric acid at concentrations that vary from 54% to 90%: CO(NH2)2 + H3PO4 → CO(NH2)2·H3PO4 References Phosphates Ureas
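The 17-44-0 grade follows directly from the 1:1 stoichiometry of the adduct. A minimal sketch verifying it from standard atomic masses (fertilizer grades express phosphorus as the P2O5 equivalent; the script is an illustration, not from the article):

```python
# Standard atomic masses in g/mol
H, C, N, O, P = 1.008, 12.011, 14.007, 15.999, 30.974

urea = C + 4 * H + 2 * N + O             # CO(NH2)2, ~60.06 g/mol
phosphoric_acid = 3 * H + P + 4 * O      # H3PO4,    ~97.99 g/mol
urea_phosphate = urea + phosphoric_acid  # 1:1 adduct, ~158.06 g/mol

n_pct = 2 * N / urea_phosphate * 100          # two N atoms per formula unit
p2o5 = 2 * P + 5 * O                          # ~141.94 g/mol
p2o5_pct = 0.5 * p2o5 / urea_phosphate * 100  # one P = half a P2O5 unit

print(f"N: {n_pct:.1f}%, P2O5 equivalent: {p2o5_pct:.1f}%, K2O: 0%")
# N: 17.7%, P2O5 equivalent: 44.9%, K2O: 0% -> the 17-44-0 grade
```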
Urea phosphate
Chemistry
272
18,164,717
https://en.wikipedia.org/wiki/Osmium%28IV%29%20chloride
Osmium(IV) chloride or osmium tetrachloride is the inorganic compound of osmium and chlorine with the empirical formula OsCl4. It exists in two polymorphs (crystalline forms). The compound is used to prepare other osmium complexes. Preparation, structure, reactions It was first reported in 1909 as the product of the chlorination of osmium metal. This route affords the high-temperature polymorph: Os + 2 Cl2 → OsCl4 This reddish-black polymorph is orthorhombic and adopts a structure in which the osmium centres are octahedrally coordinated, with the OsCl6 octahedra sharing opposite edges to form a chain. A brown, apparently cubic polymorph forms upon reduction of osmium tetroxide with thionyl chloride: OsO4 + 4 SOCl2 → OsCl4 + 2 Cl2 + 4 SO2 Osmium tetroxide dissolves in hydrochloric acid to give the hexachloroosmate anion: OsO4 + 10 HCl → H2OsCl6 + 2 Cl2 + 4 H2O References Osmium compounds Chlorides Platinum group halides
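As a worked illustration of the chlorination route, a minimal sketch computing the chlorine demand and theoretical yield for an arbitrary 1.00 g of osmium (standard atomic masses; the input mass is a made-up example):

```python
# Standard atomic masses in g/mol
OS, CL = 190.23, 35.45
OSCL4 = OS + 4 * CL  # 332.03 g/mol
CL2 = 2 * CL         # 70.90 g/mol

grams_os = 1.00       # hypothetical input mass
mol_os = grams_os / OS
mol_cl2 = 2 * mol_os  # stoichiometry: Os + 2 Cl2 -> OsCl4

print(f"Cl2 required: {mol_cl2 * CL2:.3f} g")              # 0.745 g
print(f"Theoretical OsCl4 yield: {mol_os * OSCL4:.3f} g")  # 1.745 g
```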
Osmium(IV) chloride
Chemistry
254
59,818,585
https://en.wikipedia.org/wiki/BD%20Phoenicis
BD Phoenicis is a variable star in the constellation of Phoenix. From parallax measurements by the Gaia spacecraft, it is located at a distance of from Earth. Its absolute magnitude is calculated at 1.5. Description BD Phoenicis is a Lambda Boötis star, an uncommon type of peculiar star with very low abundances of iron-peak elements. In particular, BD Phoenicis has near-solar carbon and oxygen content, but its iron abundance is only 4% of the solar value. BD Phoenicis is also a pulsating variable of the Delta Scuti type, varying in apparent magnitude between 5.90 and 5.94. A study of its light curve detected seven pulsation periods ranging from 50 to 84 minutes, the strongest having a period of 57 minutes and an amplitude of 9 milli-magnitudes. Pulsations are common among Lambda Boötis stars and appear to be more frequent than in normal main-sequence stars of the same spectral type. BD Phoenicis is an A-type main-sequence star with a spectral type of A1Va. Stellar evolution models indicate it has twice the Sun's mass and an age of about 800 million years, having completed 83% of its main-sequence lifetime. It radiates 21 times the Sun's luminosity from its photosphere at an effective temperature of . BD Phoenicis has a composite spectrum that indicates it is a binary star, but nothing is known about its companion. Observations by the Herschel Space Observatory have detected an infrared excess from BD Phoenicis, indicating that there is a debris disk in the system. By modeling the emission as a black body, it is estimated that the dust has a temperature of and lies at a distance of from the star. The existence of debris disks is possibly related to the Lambda Boötis phenomenon. References Delta Scuti variables Lambda Boötis stars Phoenix (constellation) A-type main-sequence stars Durchmusterung objects 011413 008593 0541 Phoenicis, BD
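The quoted absolute magnitude ties the star's apparent brightness to its distance through the distance modulus m − M = 5·log10(d/10 pc). A minimal sketch inverting that relation from the numbers given above (the result is a derived illustration, not the measured Gaia value):

```python
m = 5.92  # mid-range apparent magnitude (varies 5.90-5.94, see above)
M = 1.5   # absolute magnitude quoted above

# Distance modulus: m - M = 5*log10(d) - 5, with d in parsecs
d_pc = 10 ** ((m - M + 5) / 5)
parallax_mas = 1000.0 / d_pc  # corresponding parallax in milliarcseconds

print(f"Implied distance: {d_pc:.0f} pc ({d_pc * 3.2616:.0f} light-years)")
print(f"Implied parallax: {parallax_mas:.1f} mas")
# Implied distance: 77 pc (250 light-years); implied parallax: 13.1 mas
```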
BD Phoenicis
Astronomy
429
21,394,895
https://en.wikipedia.org/wiki/Tarski%27s%20high%20school%20algebra%20problem
In mathematical logic, Tarski's high school algebra problem was a question posed by Alfred Tarski. It asks whether there are identities involving addition, multiplication, and exponentiation over the positive integers that cannot be proved using eleven axioms about these operations that are taught in high-school-level mathematics. The question was solved in 1980 by Alex Wilkie, who showed that such unprovable identities do exist. Statement of the problem Tarski considered the following eleven axioms about addition (+), multiplication (·), and exponentiation to be standard axioms taught in high school:
1. x + y = y + x
2. (x + y) + z = x + (y + z)
3. x · 1 = x
4. x · y = y · x
5. (x · y) · z = x · (y · z)
6. x · (y + z) = x · y + x · z
7. 1^x = 1
8. x^1 = x
9. x^(y + z) = x^y · x^z
10. (x · y)^z = x^z · y^z
11. (x^y)^z = x^(y · z)
These eleven axioms, sometimes called the high school identities, are related to the axioms of a bicartesian closed category or an exponential ring. Tarski's problem then becomes: are there identities involving only addition, multiplication, and exponentiation that are true for all positive integers, but that cannot be proved using only the axioms 1–11? Example of a provable identity Since the axioms seem to list all the basic facts about the operations in question, it is not immediately obvious that there should be anything one can state using only the three operations that is true for all positive integers, but cannot be proved from the axioms. However, proving seemingly innocuous statements can require long proofs using only the above eleven axioms. Consider the following proof that (x + 1)^2 = x^2 + 2 · x + 1:
(x + 1)^2 = (x + 1)^(1 + 1)
= (x + 1)^1 · (x + 1)^1 (by axiom 9)
= (x + 1) · (x + 1) (by axiom 8)
= (x + 1) · x + (x + 1) · 1 (by axiom 6)
= x · (x + 1) + (x + 1) (by axioms 4 and 3)
= (x · x + x · 1) + (x + 1) (by axiom 6)
= (x^1 · x^1 + x) + (x + 1) (by axioms 8 and 3)
= (x^(1 + 1) + x) + (x + 1) (by axiom 9)
= x^2 + x + x + 1 (by axiom 2)
= x^2 + x · (1 + 1) + 1 (by axioms 3 and 6)
= x^2 + 2 · x + 1 (by axiom 4)
Strictly we should not write sums of more than two terms without brackets, and therefore a completely formal proof would prove the identity (x + 1)^2 = (x^2 + 2 · x) + 1 (or (x + 1)^2 = x^2 + (2 · x + 1)) and would carry extra brackets in each line from the ninth onwards. The length of proofs is not an issue; proofs of similar identities for things like (x + 1)^3 would take many more lines, but would really involve little more than the above proof. History of the problem The list of eleven axioms can be found explicitly written down in the works of Richard Dedekind, although they were obviously known and used by mathematicians long before then. Dedekind was the first, though, who seemed to ask whether these axioms were somehow sufficient to tell us everything we could want to know about the integers. The question was put on a firm footing as a problem in logic and model theory sometime in the 1960s by Alfred Tarski, and by the 1980s it had become known as Tarski's high school algebra problem. Solution In 1980 Alex Wilkie proved that not every identity in question can be proved using the axioms above. He did this by explicitly finding such an identity. By introducing new function symbols corresponding to polynomials that map positive numbers to positive numbers he proved this identity, and showed that these functions together with the eleven axioms above were both sufficient and necessary to prove it. The identity in question is
((1 + x)^x + (1 + x + x^2)^x)^y · ((1 + x^3)^y + (1 + x^2 + x^4)^y)^x = ((1 + x)^y + (1 + x + x^2)^y)^x · ((1 + x^3)^x + (1 + x^2 + x^4)^x)^y.
This identity is usually denoted W(x, y) and is true for all positive integers x and y, as can be seen by factoring (1 − x + x^2)^(xy) out of the second factor on each side; yet it cannot be proved true using the eleven high school axioms. Intuitively, the identity cannot be proved because the high school axioms can't be used to discuss the polynomial 1 − x + x^2. Reasoning about that polynomial and the subterm −x requires a concept of negation or subtraction, and these are not present in the high school axioms. Lacking this, it is then impossible to use the axioms to manipulate the polynomial and prove true properties about it. Wilkie's results from his paper show, in more formal language, that the "only gap" in the high school axioms is the inability to manipulate polynomials with negative coefficients. R. Gurevič showed in 1988 that there is no finite axiomatization for the valid equations for the positive natural numbers with 1, addition, multiplication, and exponentiation. Generalisations Wilkie proved that there are statements about the positive integers that cannot be proved using the eleven axioms above and showed what extra information is needed before such statements can be proved. Using Nevanlinna theory it has also been proved that if one restricts the kinds of exponential one takes, then the above eleven axioms are sufficient to prove every true statement. Another problem stemming from Wilkie's result, which remains open, asks what the smallest algebra is in which W(x, y) is not true but the eleven axioms above are. In 1985 an algebra with 59 elements was found that satisfied the axioms but for which W(x, y) was false. Smaller such algebras have since been found, and it is now known that the smallest one must have either 11 or 12 elements. See also Notes References Stanley N. Burris, Karen A. Yeats, The saga of the high school identities, Algebra Universalis 52 no. 2–3, (2004), pp. 325–342. Theorems in the foundations of mathematics Universal algebra
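Because W(x, y) is a concrete arithmetic identity, it can be checked numerically with exact integer arithmetic even though it cannot be derived from the axioms. A minimal Python sketch (an illustration of the identity, not part of Wilkie's proof):

```python
def wilkie_lhs(x: int, y: int) -> int:
    # Left-hand side of W(x, y), evaluated exactly over the integers
    return (((1 + x) ** x + (1 + x + x ** 2) ** x) ** y
            * ((1 + x ** 3) ** y + (1 + x ** 2 + x ** 4) ** y) ** x)

def wilkie_rhs(x: int, y: int) -> int:
    # Right-hand side of W(x, y): the roles of x and y as exponents swap
    return (((1 + x) ** y + (1 + x + x ** 2) ** y) ** x
            * ((1 + x ** 3) ** x + (1 + x ** 2 + x ** 4) ** x) ** y)

# Exact check on a small grid of positive integers
assert all(wilkie_lhs(x, y) == wilkie_rhs(x, y)
           for x in range(1, 8) for y in range(1, 8))
print("W(x, y) holds on the tested grid")
```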
Tarski's high school algebra problem
Mathematics
953
32,761,768
https://en.wikipedia.org/wiki/Scaffold/matrix%20attachment%20region
S/MARs (scaffold/matrix attachment regions), also called SARs (scaffold-attachment regions) or MARs (matrix-associated regions), are sequences in the DNA of eukaryotic chromosomes where the nuclear matrix attaches. As architectural DNA components that organize the genome of eukaryotes into functional units within the cell nucleus, S/MARs mediate the structural organization of chromatin within the nucleus. These elements constitute anchor points of the DNA for the chromatin scaffold and serve to organize the chromatin into structural domains. Studies on individual genes led to the conclusion that the dynamic and complex organization of chromatin mediated by S/MAR elements plays an important role in the regulation of gene expression. Overview It has been known for many years that a polymer meshwork, the so-called "nuclear matrix" or "nuclear scaffold", is an essential component of eukaryotic nuclei. This nuclear skeleton acts as a dynamic support for many specialized events concerning the readout and spread of genetic information (see below). S/MARs map to non-random locations in the genome. They occur at the flanks of transcribed regions, in 5′-introns, and also at gene breakpoint cluster regions (BCRs). Being association points for common nuclear structural proteins, S/MARs are required for authentic and efficient chromosomal replication and transcription, for recombination, and for chromosome condensation. S/MARs do not have an obvious consensus sequence. Although prototype elements consist of AT-rich regions several hundred base pairs in length, the overall base composition is definitely not the primary determinant of their activity. Instead, their function requires a pattern of "AT patches" that confer a propensity for local strand unpairing under torsional strain. Bioinformatics approaches support the idea that, through these properties, S/MARs not only separate a given transcriptional unit (chromatin domain) from its neighbors, but also provide platforms for the assembly of factors enabling transcriptional events within a given domain. An increased propensity to separate the DNA strands (the so-called stress-induced duplex destabilization potential, SIDD) can serve the formation of secondary structures such as cruciforms or slippage structures, which are recognizable features for a number of enzymes (DNases, topoisomerases, poly(ADP-ribosyl) polymerases and enzymes of the histone-acetylation and DNA-methylation apparatus). S/MARs have been classified as either constitutive (acting as permanent domain boundaries in all cell types) or facultative (cell type- and activity-related) depending on their dynamic properties. While the number of S/MARs in the human genome has been estimated to approach 64,000 (chromatin domains) plus an additional 10,000 (replication foci), in 2007 still only a minor fraction (559 for all eukaryotes) had met the standard criteria for annotation in the S/MARt database. Context-dependent properties Current views of the nuclear matrix envision it as a dynamic entity which changes its properties according to the requirements of the cell nucleus, much as the cytoskeleton adapts its structure and function to external signals.
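The "AT patch" signature described above lends itself to a simple computational illustration. A minimal sketch of a naive sliding-window AT-content scan (the window size and threshold are arbitrary illustrative choices, not parameters of any published S/MAR predictor, which would instead model strand-unpairing propensity):

```python
def at_rich_windows(seq: str, window: int = 100, threshold: float = 0.70):
    """Yield (start, AT fraction) for windows exceeding an AT-content threshold.

    A toy stand-in for the AT-patch signature of S/MARs; real predictors
    model stress-induced duplex destabilization (SIDD), not raw composition.
    """
    seq = seq.upper()
    for start in range(len(seq) - window + 1):
        win = seq[start:start + window]
        at_fraction = (win.count("A") + win.count("T")) / window
        if at_fraction >= threshold:
            yield start, at_fraction

# Synthetic example: a GC-rich sequence with an embedded AT-rich patch
demo = "GC" * 100 + "AT" * 100 + "GC" * 100
hits = list(at_rich_windows(demo))
print(f"{len(hits)} candidate windows, first at position {hits[0][0]}")
```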
In retrospect it is of note that the discovery of S/MARs followed two major routes: the description of scaffold-attachment elements (SARs) by Laemmli and coworkers, which were thought to demarcate the borders of a given chromatin domain, and the characterization of matrix-associated regions (MARs), the first examples of which supported the immunoglobulin kappa-chain enhancer according to its occupancy with transcription factors. Subsequent work demonstrated both the constitutive (SAR-like) and the facultative (MAR-like) function of the elements, depending on the context. Whereas constitutive S/MARs were found to be associated with a DNase I hypersensitive site in all cell types (whether or not the enclosed domain was transcribed), DNase I hypersensitivity of the facultative type depended on the transcriptional status. The major difference between these two functional types of S/MARs is their size: the constitutive elements may extend over several kilobase pairs, whereas facultative ones are at the lower size limit, around 300 base pairs. Our present understanding of these properties incorporates the following findings: the dynamic properties of S/MAR–scaffold contacts, as derived from haloFISH investigations; the fact that during transcription DNA is reeled through RNA polymerase, which itself is a fixed component of the nuclear matrix; and the fact that certain domain-intrinsic S/MARs require the support of an adjacent transcription factor to become active. Use in gene therapy As an alternative to viral vectors, which can have unwanted effects in a patient's body, non-viral methods of gene therapy are being studied. One such method uses plasmids with special properties, the so-called episomes. Episomes are able to replicate together with the rest of the eukaryotic genome during mitosis. Compared with standard plasmids, they are not epigenetically silenced within the nucleus and are not enzymatically destroyed. Episomes acquire this ability through the presence of an S/MAR sequence within their construct. Additional information In 2006, Tetko found a strong correlation of intragenic S/MARs with the spatiotemporal expression of genes in Arabidopsis thaliana. On a genome scale, pronounced tissue- and organ-specific and developmental expression patterns of S/MAR-containing genes have been detected. Notably, transcription factor genes contain a significantly higher proportion of S/MARs. The pronounced difference in the expression characteristics of S/MAR-containing genes emphasizes their functional importance and the importance of structural chromosomal characteristics for gene regulation in plants as well as in other eukaryotes. References Molecular genetics
Scaffold/matrix attachment region
Chemistry,Biology
1,252