https://en.wikipedia.org/wiki/Timeline%20of%20solar%20cells
In the 19th century, it was observed that sunlight striking certain materials generates a detectable electric current – the photoelectric effect. This discovery laid the foundation for solar cells. Solar cells have since been used in many applications, historically in situations where electrical power from the grid was unavailable. Because satellites orbiting the Earth are exposed to sunlight, solar cells became, and remain, a prominent source of power for them.
1800s
1839 - Edmond Becquerel observes the photovoltaic effect via an electrode in a conductive solution exposed to light.
1873 - Willoughby Smith finds that selenium shows photoconductivity.
1874 - James Clerk Maxwell writes to fellow mathematician Peter Tait of his observation that light affects the conductivity of selenium.
1877 - William Grylls Adams and Richard Evans Day observe the photovoltaic effect in solidified selenium and publish a paper on the selenium cell, "The action of light on selenium," in Proceedings of the Royal Society, A25, 113.
1883 - Charles Fritts develops a solar cell using selenium on a thin layer of gold to form a device giving less than 1% efficiency.
1887 - Heinrich Hertz investigates ultraviolet light photoconductivity and discovers the photoelectric effect.
1887 - James Moser reports a dye-sensitized photoelectrochemical cell.
1888 - Edward Weston receives patent US389124, "Solar cell," and US389125, "Solar cell."
1888–91 - Aleksandr Stoletov creates the first solar cell based on the outer photoelectric effect.
1894 - Melvin Severy receives patent US527377, "Solar cell," and US527379, "Solar cell."
1897 - Harry Reagan receives patent US588177, "Solar cell."
1899 - Weston Bowser receives patent US598177, "Solar storage."
1900–1929
1901 - Philipp von Lenard
https://en.wikipedia.org/wiki/Structure%20of%20Management%20Information
In computing, the Structure of Management Information (SMI), an adapted subset of ASN.1, is a technical language used in definitions of Simple Network Management Protocol (SNMP) and its extensions to define sets ("modules") of related managed objects in a Management Information Base (MIB). SMI subdivides into three parts: module definitions, object definitions, and notification definitions. Module definitions are used when describing information modules. An ASN.1 macro, MODULE-IDENTITY, is used to concisely convey the semantics of an information module. Object definitions describe managed objects. An ASN.1 macro, OBJECT-TYPE, is used to concisely convey the syntax and semantics of a managed object. Notification definitions (also known as "traps") are used when describing unsolicited transmissions of management information. An ASN.1 macro, NOTIFICATION-TYPE, concisely conveys the syntax and semantics of a notification. Implementations: libsmi, a C library for accessing MIB information
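The managed objects that SMI defines are what an SNMP client ultimately addresses by module and object name. As a rough illustration only (the article mentions only the C library libsmi; the pysnmp package, the agent host name, and the community string below are assumptions of this sketch), a Python snippet querying sysDescr, an OBJECT-TYPE defined in the SNMPv2-MIB module, might look like this:

# Hedged sketch using the third-party pysnmp package (assumed installed):
# it resolves "sysDescr" from the SNMPv2-MIB module and issues an SNMP GET
# against a hypothetical agent.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),              # SNMPv2c; community string assumed
    UdpTransportTarget(('agent.example.org', 161)),  # hypothetical agent address
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
))

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))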
https://en.wikipedia.org/wiki/Vertical%20seismic%20profile
In geophysics, a vertical seismic profile (VSP) is a technique of seismic measurement used for correlation with surface seismic data. The defining characteristic of a VSP (of which there are many types) is that either the energy source, or the detectors (or sometimes both) are in a borehole. In the most common type of VSP, hydrophones, or more often geophones or accelerometers, in the borehole record reflected seismic energy originating from a seismic source at the surface. There are numerous methods for acquiring a vertical seismic profile. Zero-offset VSPs (A) have sources close to the wellbore directly above receivers. Offset VSPs (B) have sources some distance from the receivers in the wellbore. Walkaway VSPs (C) feature a source that is moved to progressively farther offsets while the receivers are held in a fixed location. Walk-above VSPs (D) accommodate the recording geometry of a deviated well, having each receiver in a different lateral position and the source directly above the receiver. Salt-proximity VSPs (E) are reflection surveys that help define a salt-sediment interface near a wellbore by using a source on top of a salt dome away from the drilling rig. Drill-noise VSPs (F), also known as seismic-while-drilling (SWD) VSPs, use the noise of the drill bit as the source and receivers laid out along the ground. Multi-offset VSPs (G) involve a source some distance from numerous receivers in the wellbore. A vertical seismic profile also provides an estimate of the source wavelet, which is needed for deconvolution. Deconvolution removes the imprint of the wavelet from the recorded data, yielding a more readable, better-focused profile. Because a VSP measures the seismic signal at depth, the source wavelet can be estimated directly from the recorded wavefield. With the measurement of the source wavelet, geophysicists
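To make the role of the source wavelet concrete, here is a minimal frequency-domain (water-level) deconvolution sketch; the stabilization constant and the synthetic data are assumptions of this illustration, not part of the article:

import numpy as np

def deconvolve(trace, wavelet, water_level=1e-3):
    # Frequency-domain deconvolution of a recorded trace by an estimated
    # source wavelet; a sketch, not production code.
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    # Stabilize the spectral division where the wavelet spectrum is weak.
    denom = np.maximum(np.abs(W) ** 2, water_level * np.max(np.abs(W) ** 2))
    R = T * np.conj(W) / denom
    return np.fft.irfft(R, n)   # estimate of the reflectivity series

# Tiny synthetic check: a spike series convolved with a smooth pulse is
# approximately recovered (edge effects aside).
reflectivity = np.zeros(256)
reflectivity[[40, 120, 200]] = [1.0, -0.5, 0.8]
wavelet = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)
trace = np.convolve(reflectivity, wavelet, mode='full')[:256]
estimate = deconvolve(trace, wavelet)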
https://en.wikipedia.org/wiki/Underdominance
In genetics, underdominance, also known as homozygote advantage, heterozygote disadvantage, or negative overdominance, is the opposite of overdominance. It is selection against the heterozygote, causing disruptive selection and divergent genotypes. Underdominance exists in situations where the heterozygotic genotype is inferior in fitness to either the dominant or recessive homozygotic genotype. Compared to examples of overdominance in actual populations, underdominance is considered more unstable and may lead to the fixation of either allele. An example of stable underdominance may occur in individuals who are heterozygotic for polymorphisms that would make them better suited for one of two niches. Consider a situation in which a population is completely homozygotic for an "A" allele, allowing exploitation of a particular resource. Eventually, a polymorphic "a" allele may be introduced into the population, resulting in an individual who is capable of exploiting a different resource. This would result in an "aa" homozygotic invasion of the population due to nonexistent competition for the unexploited resource. The frequency of "aa" individuals would increase until the abundance of the "a" resource begins to decline. Eventually, the "AA" and "aa" genotypes would reach equilibrium with each other, with "Aa" heterozygotic individuals potentially experiencing a reduced fitness compared to those individuals who are homozygotic for utilization of either resource. This example of underdominance is stable because any shift in equilibrium would result in selection for the rare allele due to increased resource abundance. This compensatory selection would ultimately return the dimorphic system to underdominant equilibrium. Incidence in butterfly populations An example of stable underdominance can be found in the African butterfly species Pseudacraea eurytus, which utilizes Batesian mimicry to escape predation. This species possesses two alleles which each confer an appe
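The tendency toward fixation of either allele can be made precise with the standard one-locus viability model; the fitness parametrization below is a textbook sketch used for illustration, not something stated in the excerpt above.

% One locus, two alleles A and a with frequencies p and q = 1 - p.
% Underdominance: the heterozygote is least fit, e.g.
% w_{AA} = 1, w_{Aa} = 1 - s, w_{aa} = 1 with 0 < s <= 1.
\[
  p' \;=\; \frac{p^{2} w_{AA} + p q\, w_{Aa}}{\bar w},
  \qquad
  \bar w \;=\; p^{2} w_{AA} + 2 p q\, w_{Aa} + q^{2} w_{aa}.
\]
\[
  \text{Setting } p' = p \text{ gives the interior equilibrium }
  \hat p \;=\; \frac{w_{Aa} - w_{aa}}{2 w_{Aa} - w_{AA} - w_{aa}}
  \;=\; \tfrac12 \text{ in the symmetric case,}
\]
% which is unstable: any perturbation away from \hat p grows until allele A
% or allele a is fixed, in line with the remark above that underdominance
% tends toward fixation of either allele.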
https://en.wikipedia.org/wiki/Peruvian%20inti
The inti was the currency of Peru between 1985 and 1991. Its ISO 4217 code was PEI and its abbreviation was I/. The inti was divided into 100 céntimos. The inti replaced the inflation-stricken sol. The new currency was named after Inti, the Inca sun god. History The inti was introduced on 1 February 1985, replacing the sol which had suffered from high inflation. One inti was equivalent to 1,000 soles. Coins denominated in the new unit were put into circulation from May 1985 and banknotes followed in June of that year. By 1990, the inti had itself suffered from high inflation. As an interim measure, from January to July 1991, the "inti millón" (I/m.) was used as a unit of account. One inti millón was equal to 1,000,000 intis and hence to one new sol. The nuevo sol ("new sol") was adopted on 1 July 1991, replacing the inti at an exchange rate of a million to one. Thus: 1 new sol = 1,000,000 intis = 1,000,000,000 old soles. Inti notes and coins are no longer legal tender in Peru, nor can they be exchanged for notes and coins denominated in the current nuevo sol. Coins Coins were introduced in 1985 in denominations of 1, 5, 10, 20, and 50 céntimos (designs were taken from the previous 10, 50, 100, and 500 soles de oro coins), plus 1 and 5 intis. The 1 céntimo coin was issued only in 1985. The 5 céntimo coins were issued until 1986. All the other denominations were issued until 1988. All coins featured Navy Admiral Miguel Grau: céntimo coins on the reverse, inti coins on the obverse. Banknotes In June 1985, notes were introduced in denominations of I/.10, I/.50 (taken from previous 10,000 and 50,000 soles de oro notes) and I/.100, followed by I/.500 in December of the same year. The next year, I/.1,000 notes were added, followed by I/.5,000 and I/.10,000 in 1988. I/.50,000 and I/.100,000 notes were added in 1989. I/.500,000 denominations were added early in 1990, I/.1,000,000 denominations were added in mid-1990, and I/.5,000,000 notes in August 1990. The obverses featur
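The successive redenominations compound multiplicatively, as stated above (1 new sol = 1,000,000 intis = 1,000,000,000 old soles); a trivial sketch of the arithmetic, using only figures from the text:

# Redenomination factors taken from the text above.
SOLES_DE_ORO_PER_INTI = 1_000      # 1985: 1 inti = 1,000 old soles
INTIS_PER_NUEVO_SOL = 1_000_000    # 1991: 1 nuevo sol = 1,000,000 intis

def old_soles_to_nuevos_soles(amount_old_soles: float) -> float:
    # Convert an amount in soles de oro to nuevos soles via intis.
    intis = amount_old_soles / SOLES_DE_ORO_PER_INTI
    return intis / INTIS_PER_NUEVO_SOL

# 1,000,000,000 old soles correspond to exactly 1 nuevo sol.
assert old_soles_to_nuevos_soles(1_000_000_000) == 1.0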
https://en.wikipedia.org/wiki/Volume%20element
In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form dV = ρ(u1, u2, u3) du1 du2 du3, where the ui are the coordinates, so that the volume of any set B can be computed by Vol(B) = ∫_B ρ(u1, u2, u3) du1 du2 du3. For example, in spherical coordinates (r, θ, φ), with θ the polar angle, ρ = r² sin θ, and so dV = r² sin θ dr dθ dφ. The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density. Volume element in Euclidean space In Euclidean space, the volume element is given by the product of the differentials of the Cartesian coordinates, dV = dx dy dz. In different coordinate systems of the form x = x(u1, u2, u3), y = y(u1, u2, u3), z = z(u1, u2, u3), the volume element changes by the Jacobian (determinant) of the coordinate change: dV = |∂(x, y, z)/∂(u1, u2, u3)| du1 du2 du3. For example, in spherical coordinates (with θ the polar angle) the Jacobian determinant is ∂(x, y, z)/∂(r, θ, φ) = r² sin θ, so that dV = r² sin θ dr dθ dφ. This can be seen as a special case of the fact that differential forms transform through a pullback F* as F*(u dy1 ∧ ⋯ ∧ dyn) = (u ∘ F) det(∂Fj/∂xi) dx1 ∧ ⋯ ∧ dxn. Volume element of a linear subspace Consider the linear subspace of the n-dimensional Euclidean space Rn that is spanned by a collection of linearly independent vectors X1, …, Xk. To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the Xi is the square root of the determinant of the Gramian matrix of the Xi: √det(XᵀX), where X is the matrix whose columns are X1, …, Xk. Any point p in the subsp
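As a small worked instance of the Gramian formula (an illustrative example, not from the article): for the plane in R³ spanned by X1 = (1, 0, 1) and X2 = (0, 1, 1),

\[
  G \;=\;
  \begin{pmatrix} X_1\!\cdot\!X_1 & X_1\!\cdot\!X_2 \\ X_2\!\cdot\!X_1 & X_2\!\cdot\!X_2 \end{pmatrix}
  \;=\;
  \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},
  \qquad \det G = 3,
\]
\[
  \text{so the parallelogram spanned by } X_1, X_2 \text{ has area } \sqrt{\det G} = \sqrt{3},
  \text{ and the area element on the plane is } \sqrt{3}\, du^{1}\, du^{2}.
\]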
https://en.wikipedia.org/wiki/Contraflexure
In solid mechanics, a point along a beam under a lateral load is known as a point of contraflexure if the bending moment about that point equals zero. In a bending moment diagram, it is the point at which the bending moment curve intersects the zero line (i.e. where the bending moment reverses sign along the beam). Knowing the location of the point of contraflexure is especially useful when designing reinforced concrete or structural steel beams, and also when designing bridges. Flexural reinforcement may be reduced at this point. However, omitting reinforcement at the point of contraflexure entirely is inadvisable, as the actual location is unlikely to be defined with confidence. Additionally, an adequate quantity of reinforcement should extend beyond the point of contraflexure to develop bond strength and to facilitate shear force transfer. See also Deformation Engineering mechanics Flexural rigidity Flexural stress Fluid mechanics Inflection point Strength of materials
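For a concrete illustration (a standard textbook case, not taken from the text above): a beam of span L fixed at both ends under a uniformly distributed load w has

\[
  M(x) \;=\; -\frac{wL^{2}}{12} \;+\; \frac{wL}{2}\,x \;-\; \frac{w}{2}\,x^{2},
  \qquad 0 \le x \le L,
\]
\[
  M(x) = 0 \;\Longrightarrow\; x \;=\; \frac{L}{2} \pm \frac{L}{2\sqrt{3}}
  \;\approx\; 0.211\,L \ \text{and}\ 0.789\,L,
\]
% i.e. the two points of contraflexure lie about 0.21 L in from each support,
% which is where flexural reinforcement could in principle be curtailed,
% subject to the caveats in the text above.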
https://en.wikipedia.org/wiki/Leonard%20Mlodinow
Leonard Mlodinow (born November 26, 1954) is an American theoretical physicist and mathematician, screenwriter and author. In physics, he is known for his work on the large N expansion, a method of approximating the spectrum of atoms based on the consideration of an infinite-dimensional version of the problem, and for his work on the quantum theory of light inside dielectrics. He has also written books for the general public, five of which have been New York Times best-sellers, including The Drunkard's Walk: How Randomness Rules Our Lives, which was chosen as a New York Times notable book, and short-listed for the Royal Society Science Book Prize; The Grand Design, co-authored with Stephen Hawking, which argues that invoking God is not necessary to explain the origins of the universe; War of the Worldviews, co-authored with Deepak Chopra; and Subliminal: How Your Unconscious Mind Rules Your Behavior, which won the 2013 PEN/E. O. Wilson Literary Science Writing Award. He also gives public lectures and makes media appearances on programs including Morning Joe and Through the Wormhole, and debated Deepak Chopra on ABC's Nightline. Biography Mlodinow was born in Chicago, Illinois, of parents who were both Holocaust survivors. His father, who spent more than a year in the Buchenwald concentration camp, had been a leader in the Jewish resistance in his hometown of Częstochowa, in Nazi Germany-occupied Poland. As a child, Mlodinow was interested in both mathematics and chemistry, and while in high school was tutored in organic chemistry by a professor from the University of Illinois. As recounted in his book Feynman's Rainbow, his interest turned to physics during a semester he took off from college to spend on a kibbutz in Israel, during which he had little to do at night besides reading The Feynman Lectures on Physics, which was one of the few English books he found in the kibbutz library. Mlodinow completed his doctorate at the University of California, Berkeley. It was in th
https://en.wikipedia.org/wiki/KARMEN
KARMEN (KArlsruhe Rutherford Medium Energy Neutrino experiment) was a detector associated with the ISIS synchrotron at the Rutherford Appleton Laboratory. Neutrinos for study were supplied via the decay of pions produced when a proton beam strikes a target. It operated from 1990 until March 2001, observing the appearance and disappearance of electron neutrinos. KARMEN searched for neutrino oscillations, with implications for the existence of sterile neutrinos. Results Limits were set on neutrino oscillation parameters. The KARMEN results disagreed with the LSND experiment and were followed up by MiniBooNE.
https://en.wikipedia.org/wiki/Mcrypt
mcrypt is a replacement for the popular Unix crypt command. crypt was a file encryption tool that used an algorithm very close to the World War II Enigma cipher. Mcrypt provides the same functionality but uses several modern algorithms such as AES. Libmcrypt, Mcrypt's companion, is a library of code that contains the actual encryption functions and provides an easy method for their use. The last update to libmcrypt was in 2007, despite years of unmerged patches. Maintained alternatives include ccrypt, libressl, and others.
Examples of mcrypt usage in a Linux command-line environment:
mcrypt --list                    # See available encryption algorithms.
mcrypt -a blowfish myfilename    # Encrypts myfilename to myfilename.nc
                                 # using the Blowfish encryption algorithm.
                                 # You are prompted two times for a passphrase.
mcrypt -d mytextfile.txt.nc      # Decrypts mytextfile.txt.nc to mytextfile.txt.
mcrypt -V -d -a enigma -o scrypt --bare   # Can en/decrypt files crypted with SunOS crypt.
mcrypt --help
It implements numerous cryptographic algorithms, mostly block ciphers and stream ciphers, some of which fall under export restrictions in the United States. Algorithms include DES, Blowfish, ARCFOUR, Enigma, GOST, LOKI97, RC2, Serpent, Threeway, Twofish, WAKE, and XTEA. See also bcrypt crypt (Unix) ccrypt scrypt
https://en.wikipedia.org/wiki/Current%20quark
Current quarks (also called naked quarks or bare quarks) describe the valence quarks at the core of a hadron: the non-virtual ("real" or permanent) quarks, imagined with their surrounding "covering" of evanescent gluons and virtual quarks stripped away. In quantum chromodynamics, the mass of the current quarks is called the current quark mass, as opposed to the much larger mass of the composite particle, which is carried in the gluon and virtual quark covering. The heavier quarks are massive enough for their substantial masses to predominate over the combined mass of their virtual-particle "dressing" or covering, but the lighter quarks' masses are overwhelmed by their evanescent covering's mass-energy; the light quarks' core masses are such a small fraction of the covering that the actual mass values are difficult to infer with any accuracy (hence, the data listed below for the light quarks are fraught with caveats). The constituent quark, in contrast, is a combination of both the "naked" current quark and its "dressing" of evanescent gluons and virtual quarks. For the lighter quarks, the mass of each constituent quark is approximately one-third of the average mass of the proton and neutron, with a little extra mass fudged in for the strange quark. Abstraction vs. reality Since it is not physically possible even at solar-interior temperatures to "strip naked" any quark of its covering, it is a matter of legitimate doubt whether current quarks are actual or real, or merely a convenient but unrealistic and abstract notion. High energy particle accelerators provide a demonstration that the idea of a "naked quark" is in some sense real: If the current quark embedded in one constituent quark is hit inside its covering with large momentum, the current quark accelerates through its evanescent covering and leaves it behind, at least temporarily producing a "naked" or undressed quark, showing that to some extent the i
https://en.wikipedia.org/wiki/Toomas%20Kivisild
Toomas Kivisild (born 11 August 1969, in Tapa, Estonia) is an Estonian population geneticist. He graduated as a biologist and received his PhD in Genetics from the University of Tartu, Estonia, in 2000. Since then he has worked as a postdoctoral research fellow in the School of Medicine at Stanford University (2002–03), at the Estonian Biocentre (since 2003), as the Professor of Evolutionary Biology, University of Tartu (2005–06), and as a Lecturer and Reader in Human Evolutionary Genetics in the Department of Archaeology and Anthropology at the University of Cambridge (2006–2018). Since 2018 he has been a professor in the Department of Human Genetics at KU Leuven and a senior researcher at the Institute of Genomics, University of Tartu. Kivisild has focused in his research on questions relating global genetic population structure with evolutionary processes such as selection, drift, migrations and admixture. He coauthored the second edition of the textbook Human Evolutionary Genetics (2013). Selected publications 1999a. "Deep common ancestry of Indian and western-Eurasian mitochondrial DNA lineages" 1999b. "The Place of the Indian mtDNA Variants in the Global Network of Maternal Lineages and the Peopling of the Old World" 2000a. "An Indian Ancestry: a Key for Understanding Human Diversity in Europe and Beyond" 2000b. "The origins of southern and western Eurasian populations: an mtDNA study" 2003a. "The Genetics of Language and Farming Spread in India" 2003b. "The Genetic Heritage of the Earliest Settlers Persists Both in Indian Tribal and Caste Populations" "The emerging limbs and twigs of the East Asian mtDNA tree" 2005a. "Different population histories of the Mundari- and Mon-Khmer-speaking Austro-Asiatic tribes inferred from the mtDNA 9-bp deletion/insertion polymorphism in Indian populations" 2005b. "Reconstructing the Origin of Andaman Islanders" 2005c. "Tracing Modern Human Origins" 2006a. "Response to Comment on 'Reconstructing the Origin of Andaman Islanders'"
https://en.wikipedia.org/wiki/Artificial%20disintegration
Artificial disintegration is the term coined by Ernest Rutherford for the process by which an atomic nucleus is broken down by bombarding it with high speed alpha particles, either from a particle accelerator, or a naturally decaying radioactive substance such as radium, as Rutherford originally used. See also Nuclear fission The Fly in the Cathedral
https://en.wikipedia.org/wiki/List%20of%20terms%20using%20the%20word%20occipital
The adjective occipital, in zoology, means pertaining to the occiput (rear of the skull). Occipital is a descriptor for several areas of animal and human anatomy: External occipital protuberance, Internal occipital crest, Greater occipital nerve, Lesser occipital nerve, Occipital artery, Occipital bone, Occipital bun, Occipital condyle, Occipital groove, Occipital lobe, Occipital plane, Occipital pole, Occipital ridge, Occipital scales, Occipital triangle, Occipital vein, Parieto-occipital sulcus, PGO (Ponto-geniculo-occipital) waves, Preoccipital notch
https://en.wikipedia.org/wiki/Grundig
Grundig is a Turkish consumer electronics manufacturer owned by Arçelik A.Ş., the white goods (major appliance) manufacturer of the Turkish conglomerate Koç Holding. The company made domestic appliances and personal-care products. Originally a German consumer electronics company, Grundig GmbH was founded in 1945 by Max Grundig and eventually headquartered in Nuremberg. It grew to become one of the leading radio, TV, recorder and other electronics goods manufacturers of Europe in the following decades of the 20th century. In the 1970s, Philips began acquiring Grundig AG's shares, leading to complete control in 1993. In 1998, Philips divested Grundig. In 2007, Koç Holding bought Grundig and put the brand under its home-appliances subsidiary Arçelik A.Ş. Koç is a publicly listed conglomerate with more than 80,000 employees. History Grundig began in 1945 with the establishment of a store named Fürth, Grundig & Wurzer (RVF), which sold radios and was headquartered in Fürth, northern Bavaria. After the Second World War, Max Grundig recognized the need for radios in Germany, and in 1947 produced a kit, while a factory and administration centre were built at Fürth. In 1951, the first television sets were manufactured at the new facility. At the time Grundig was the largest radio manufacturer in Europe. Divisions were established in Nuremberg, Frankfurt and Karlsruhe. In 2013, Grundig launched its home appliances (white goods) product range, becoming one of the mainstream manufacturers in Europe. Its parent, Arçelik A.Ş., has more than 27,000 employees worldwide. Grundig has manufacturing plants in several European cities that deliver their products to more than 65 countries around the world. 1940s Grundig started as a typical German company in 1945. Its early notability was due to Grundig radio. Max Grundig, a radio dealer, built a machine called "Heinzelmann", which was a radio that came without thermionic valves and as a do-it-yourself kit to circumvent post war rule
https://en.wikipedia.org/wiki/Brilliant%20blue%20FCF
Brilliant blue FCF (Blue 1) is a synthetic organic compound used primarily as a blue colorant for processed foods, medications, dietary supplements, and cosmetics. It is classified as a triarylmethane dye and is known under various names, such as FD&C Blue No. 1 or acid blue 9. It is denoted by E number E133 and has a color index of 42090. It has the appearance of a blue powder and is soluble in water and glycerol, with a maximum absorption at about 628 nanometers. It is one of the oldest FDA-approved color additives and is generally considered nontoxic and safe. Production Brilliant blue FCF is a synthetic dye produced by the condensation of 2-formylbenzenesulfonic acid and the appropriate aniline followed by oxidation. It can be combined with tartrazine (E102) to produce various shades of green. It is usually a disodium salt. The diammonium salt has CAS number . Calcium and potassium salts are also permitted. It can also appear as an aluminium lake. The chemical formula is C37H34N2Na2O9S3. Related dyes are C.I. acid green 3 (CAS#4680-78-8) and acid green 9 (CAS#4857-81-2). In these dyes, the 2-sulfonic acid group is replaced by H and Cl, respectively. Many attempts have been made to find similarly colored natural dyes that are as stable as brilliant blue FCF. Blue pigments must possess many chemical traits, including pi-bond conjugation, aromatic rings, heteroatoms and heteroatom groups, and ionic charges in order to absorb low energy red light. Most natural blue dyes are either unstable, blue only in alkaline conditions, or toxic; good candidates for further research into use as natural dyes include anthocyanin and trichotomine derivatives. No replacement for brilliant blue FCF has been found for use in beverages. Applications Like many other color additives, the primary use of Blue No. 1 is to correct or enhance natural coloring or to give colorless compounds a vivid hue. In the United States, of the two approved blue dyes (the other being Indigo carmi
https://en.wikipedia.org/wiki/Stephen%20L.%20Adler
Stephen Louis Adler (born November 30, 1939) is an American physicist specializing in elementary particles and field theory. He is currently professor emeritus in the School of Natural Sciences at the Institute for Advanced Study in Princeton, New Jersey. Biography Adler was born in New York City. He received an A.B. degree at Harvard University in 1961, where he was a Putnam Fellow in 1959, and a Ph.D. from Princeton University in 1964. Adler completed his doctoral dissertation, titled High energy neutrino reactions and conservation hypotheses, under the supervision of Sam Treiman. He is the son of Irving Adler, an American author, teacher, mathematician, scientist and political activist, and Ruth Adler, and the older brother of Peggy Adler. Adler became a member of the Institute for Advanced Study in 1966, becoming a full professor of theoretical physics in 1969, and was named "New Jersey Albert Einstein Professor" at the institute in 1979. He was elected a member of the American Academy of Arts and Sciences in 1974, and a member of the National Academy of Sciences in 1975. He won the J. J. Sakurai Prize from the American Physical Society in 1988, and the Dirac Medal of the International Centre for Theoretical Physics in 1998, among other awards. Adler's seminal papers on high energy neutrino processes, current algebra, soft pion theorems, sum rules, and perturbation theory anomalies helped lay the foundations for the current standard model of elementary particle physics. In 2012, Adler contributed to a family venture when he wrote the foreword for his then 99-year-old father's 87th book, Solving the Riddle of Phyllotaxis: Why the Fibonacci Numbers and the Golden Ratio Occur on Plants. The book's diagrams are by his sister Peggy. Trace dynamics In his book Quantum Theory as an Emergent Phenomenon, published in 2004, Adler presented his trace dynamics, a framework in which quantum field theory emerges from a matrix theory. In this matrix theory, particles are
https://en.wikipedia.org/wiki/Bioarchaeology
The term bioarchaeology has been attributed to British archaeologist Grahame Clark who, in 1972, defined it as the study of animal and human bones from archaeological sites. Redefined in 1977 by Jane Buikstra, bioarchaeology in the United States now refers to the scientific study of human remains from archaeological sites, a discipline known in other countries as osteoarchaeology, osteology or palaeo-osteology. Compared to bioarchaeology, osteoarchaeology is the scientific study that focuses solely on the human skeleton. The human skeleton is used to tell us about health, lifestyle, diet, mortality and physique of the past. Furthermore, palaeo-osteology is simply the study of ancient bones. In contrast, the term bioarchaeology is used in Europe to describe the study of all biological remains from archaeological sites. Although Clark used it to describe just human remains and animal remains (zoology/archaeozoology/zooarchaeology), increasingly modern archaeologists also include botanical remains (botany/archaeobotany/paleobotany/paleoethnobotany). Bioarchaeology was largely born from the practices of New Archaeology, which developed in the United States in the 1970s as a reaction to a mainly cultural-historical approach to understanding the past. Proponents of New Archaeology advocated using processual methods to test hypotheses about the interaction between culture and biology, or a biocultural approach. Some archaeologists advocate a more holistic approach to bioarchaeology that incorporates critical theory and is more relevant to modern descent populations. If possible, human remains from archaeological sites are analyzed to determine sex, age, and health. The results are used to determine patterns relevant to human behavior at the site. Paleodemography Paleodemography is the field that attempts to identify demographic characteristics of past populations. The information gathered is used to make interpretations. Bioarchaeologists use paleodemography some
https://en.wikipedia.org/wiki/Behavioral%20geography
Behavioral geography is an approach to human geography that examines human behavior by separating it into different parts. It makes use of the methods and assumptions of behaviorism to determine the cognitive processes involved in an individual's perception of, and response and reaction to, their environment. Behavioral geographers focus on the cognitive processes underlying spatial reasoning, decision making, and behavior. Issues Because of the name, the field is often assumed to have its roots in behaviorism. While some behavioral geographers clearly have such roots, most are better described as cognitively oriented. Indeed, interest in behaviorism within the field appears to be more recent and growing, particularly in the area of human landscaping. Behavioral geography draws from early behaviorist works such as Tolman's concept of "cognitive maps". More behaviorally oriented geographers are materialists who look at the role of basic learning processes and how they influence landscape patterns or even group identity. The cognitive processes include environmental perception and cognition, wayfinding, the construction of cognitive maps, place attachment, the development of attitudes about space and place, decisions and behavior based on imperfect knowledge of one's environs, and numerous other topics. The approach adopted in behavioral geography is closely related to that of psychology, but draws on research findings from a multitude of other disciplines including economics, sociology, anthropology, transportation planning, and many others. The Social Construct
https://en.wikipedia.org/wiki/The%20Pirate%20Bay
The Pirate Bay (sometimes abbreviated as TPB) is an online index of digital content of entertainment media and software. Founded in 2003 by Swedish think tank Piratbyrån, The Pirate Bay allows visitors to search, download, and contribute magnet links and torrent files, which facilitate peer-to-peer file sharing among users of the BitTorrent protocol. The Pirate Bay has sparked controversies and discussion about legal aspects of file sharing, copyright, and civil liberties and has become a platform for political initiatives against established intellectual property laws as well as a central figure in an anti-copyright movement. The website has faced several shutdowns and domain seizures, switching to a series of new web addresses to continue operating. In April 2009, the website's founders (Peter Sunde, Fredrik Neij, and Gottfrid Svartholm) were found guilty in the Pirate Bay trial in Sweden for assisting in copyright infringement and were sentenced to serve one year in prison and pay a fine. In some countries, Internet service providers (ISPs) have been ordered to block access to the website. Subsequently, proxy websites have been providing access to it. Founders Svartholm, Neij, and Sunde were all released by 2015 after serving shortened sentences. History The Pirate Bay was established in September 2003 by the Swedish anti-copyright organisation Piratbyrån; it has been run as a separate organisation since October 2004. The Pirate Bay was first run by Gottfrid Svartholm and Fredrik Neij, who are known by their nicknames "anakata" and "TiAMO", respectively. They have both been accused of "assisting in making copyrighted content available" by the Motion Picture Association of America. On 31 May 2006, the website's servers in Stockholm were raided and taken away by Swedish police, leading to three days of downtime. The Pirate Bay claims to be a non-profit entity based in the Seychelles; however, this is disputed. The Pirate Bay has been involved in a number o
https://en.wikipedia.org/wiki/Lithophile
Lithophiles are micro-organisms that can live within the pore interstices of sedimentary and even fractured igneous rocks to depths of several kilometers. Some are known to live on surface rocks, and make use of photosynthesis for energy. Those that live in deeper rocks cannot use photosynthesis to gather energy, but instead extract energy from minerals around them. They live in cracks in the rock where water seeps down. The water contains dissolved carbon dioxide (CO2) which the organisms use for their carbon needs. They have been detected in rocks down to depths of nearly three km, where the temperature is approximately 75 °C. Terrestrial lithophiles can be found in canyons primarily composed of granite, an igneous rock, and soils saturated with fractured rock. Organisms from the genus Elliptochloris, a subaerial photosynthetic green algae, demonstrate lithophilic preferences through colonization in granite cracks and in proximity to terrestrial lichens. Lithophilic lichens from the genus Collema form tight symbiotic relationships between fungi and photosynthetic algae such as Elliptochloris in order to produce necessary saturated fatty acid secondary metabolites. Lithophilic algal species colonizing fractured rock outcroppings individually exhibit coccal morphological shape while aggregating into an elliptical or globular arrangement at maturity. Lithobiontic ecological niches further classify lithophiles into sub-categories determined by their spatial niche specificity. The term lithic refers to an association with rock; the term lithobiontic refers to organisms living both on and within rock surfaces. Sub-surface rock organisms, endoliths, primarily exhibit niche preference within fissures, cavities, or tunnels of various rocks. While many endoliths degrade and effectively excavate the available carbonate rock surface, many are preyed upon by select gastropod and echinoderm species. This habitat preference ca
https://en.wikipedia.org/wiki/Fire%20ecology
Fire ecology is a scientific discipline concerned with the effects of fire on natural ecosystems. Many ecosystems, particularly prairie, savanna, chaparral and coniferous forests, have evolved with fire as an essential contributor to habitat vitality and renewal. Many plant species in fire-affected environments use fire to germinate, establish, or to reproduce. Wildfire suppression not only endangers these species, but also the animals that depend upon them. Wildfire suppression campaigns in the United States have historically molded public opinion to believe that wildfires are harmful to nature. Ecological research has shown, however, that fire is an integral component in the function and biodiversity of many natural habitats, and that the organisms within these communities have adapted to withstand, and even to exploit, natural wildfire. More generally, fire is now regarded as a 'natural disturbance', similar to flooding, windstorms, and landslides, that has driven the evolution of species and controls the characteristics of ecosystems. Fire suppression, in combination with other human-caused environmental changes, may have resulted in unforeseen consequences for natural ecosystems. Some large wildfires in the United States have been blamed on years of fire suppression and the continuing expansion of people into fire-adapted ecosystems as well as climate change. Land managers are faced with tough questions regarding how to restore a natural fire regime, but allowing wildfires to burn is likely the least expensive and most effective method in many situations. History Fire has played a major role in shaping the world's vegetation. The biological process of photosynthesis began to concentrate the atmospheric oxygen needed for combustion during the Devonian approximately 350 million years ago. Then, approximately 125 million years ago, fire began to influence the habitat of land plants. In the 20th century ecologist Charles Cooper made a plea for fire as an eco
https://en.wikipedia.org/wiki/Screen%20burn-in
Screen burn-in, image burn-in, ghost image, or shadow image, is a permanent discoloration of areas on an electronic display such as a cathode ray tube (CRT) in an old computer monitor or television set. It is caused by cumulative non-uniform use of the screen. Newer liquid-crystal displays (LCDs) may suffer from a phenomenon called image persistence instead, which is not permanent. One way to combat screen burn-in was the use of screensavers, which would move an image around to ensure that no one area of the screen remained illuminated for too long. Causes With phosphor-based electronic displays (for example CRT-type computer monitors, oscilloscope screens or plasma displays), non-uniform use of specific areas, such as prolonged display of non-moving images (text or graphics), repetitive contents in gaming graphics, or certain broadcasts with tickers and flags, can create a permanent ghost-like image of these objects or otherwise degrade image quality. This is because the phosphor compounds which emit light to produce images lose their luminance with use. This wear results in uneven light output over time, and in severe cases can create a ghost image of previous content. Even if ghost images are not recognizable, the effects of screen burn are an immediate and continual degradation of image quality. The length of time required for noticeable screen burn to develop varies due to many factors, ranging from the quality of the phosphors employed, to the degree of non-uniformity of sub-pixel use. It can take as little as a few weeks for noticeable ghosting to set in, especially if the screen displays a certain image (example: a menu bar at the top or bottom of the screen) constantly and displays it continually over time. In the rare case when horizontal or vertical deflection circuits fail, all output energy is concentrated to a vertical or horizontal line on the display which causes almost instant screen burn. CRT Phosphor burn-in is particularly prevalent with mo
https://en.wikipedia.org/wiki/Deutsches%20Forschungsnetz
Deutsches Forschungsnetz ("German Research Network"), usually abbreviated to DFN, is the German national research and education network (NREN) used for academic and research purposes. It is managed by the scientific community organized in the voluntary Association to Promote a German Education and Research Network (Verein zur Förderung eines Deutschen Forschungsnetzes e.V.), which was founded in 1984 by universities, non-university research institutions and research-oriented companies to stimulate computerized communication in Germany. DFN's "super core" backbone X-WiN network points of presence are based, for example, in Erlangen, Frankfurt, Hannover and Potsdam, with more than 70 locations in total, and can route up to 1 Tbit/s over more than 10,000 km of dedicated fibre connections. Many connections to other networks such as GÉANT2 or DE-CIX are 100G-based and are implemented at the super core. Today, connections of up to 200 Gbit/s are possible. Networks run by DFN e.V. WiN is short for Wissenschaftsnetz ("science network"). WiN (1989–1998) ERWIN (1990-1992) B-WiN (1996-2001) G-WiN (Gigabit-Wissenschaftsnetz) (2000-2005) X-WiN (since 2006)
https://en.wikipedia.org/wiki/Richard%20Bache
Richard Bache (September 12, 1737 – April 17, 1811), born in Settle, West Riding of Yorkshire, England, immigrated to Philadelphia, in the colony of Pennsylvania, where he was a businessman, a marine insurance underwriter, and later served as Postmaster-General of the American Post Office. He also was the son-in-law of Benjamin Franklin. Early life Bache was born on September 12, 1737, in Settle, West Riding of Yorkshire, the youngest child of William Bache, a tax collector, and Mary (née Blechynden) Bache, who were married around 1720. His older brother was Theophylact Bache, who married Ann Dorothea Barclay (a daughter of Andrew Barclay and Helena (née Roosevelt) Barclay). In 1751, his elder brother Theophylact arrived in New York City, where he was taken under the wing of Paul Richard, a successful merchant and former mayor, whose wife was a Bache relative. Career Bache immigrated as a young man in 1760 to New York to join his brother Theophylact in a dry goods and marine insurance business. After a couple of years, he went to Philadelphia, where he prospered for several years. He was among nearly 30 young men who in October 1766 met at the city's London Coffee House to found the Gloucester Fox Hunting Club (GFHC), the first in America, to take up a pursuit closely associated with becoming "true Englishmen." In 1767, Bache suffered financial problems when debts contracted by him were repudiated by his London associate, Edward Green. Later years During the American Revolution, Bache served on the Board of War, which was a special standing committee to oversee the Continental Army's administration and to make recommendations regarding the army to Congress. His wife, Sally, was widely known for her patriotism and charitable activities. After immigrating to North America, he acquired ownership of a slave named Bob. Franklin later arranged an appointment for Bache as the US Postmaster General (1776–1782), to succeed him. After Franklin's death in 1790, Bache
https://en.wikipedia.org/wiki/GoBack
Norton GoBack (previously WildFile GoBack, Adaptec GoBack, and Roxio GoBack) is a disk utility for Microsoft Windows that can record up to 8 GB of disk changes. When the filesystem is idle for a few seconds, it marks these as "safe points". The product allows the disk drive to be restored to any point within the available history. It also allows older versions of files to be restored, and previous versions of the whole disk to be browsed. Depending on disk activity, the typical history might cover a few hours to a few days. Operation GoBack replaces the master boot record, and also replaces the partition table with a single partition. This allows a hard drive to be changed back, even in the event that the operating system is unable to boot, while also protecting the filesystem from alteration so that the revert information remains correct. GoBack is compatible with hardware RAID drives. Incompatible products Due to the changes made to the partition table, this can cause problems when dual booting other operating systems on the same hard disk. It is possible to retain dual-boot compatibility, but doing so can involve saving the partition table before enabling GoBack and, after enabling GoBack, re-writing the partition table back to the disk (after booting from a different device, such as a Live CD). It may also be necessary to disable GoBack prior to using certain low level disk utilities, such as formatting software. Such low level utilities which do not first check for the presence of GoBack can (in combination with GoBack) cause data to become corrupted. Another example of an incompatible program is True Image for Windows. GoBack is also incompatible with products such as Drive Vaccine PC Restore and RollBack Rx - as these products require access to the master boot record. Usage precaution Several users at CNET Reviews have reported data loss after installing this product. CNET gave Norton GoBack 4.0 an editors' rating of 3.5 out of five star
https://en.wikipedia.org/wiki/Topological%20skeleton
In shape analysis, the skeleton (or topological skeleton) of a shape is a thin version of that shape that is equidistant to its boundaries. The skeleton usually emphasizes geometrical and topological properties of the shape, such as its connectivity, topology, length, direction, and width. Together with the distance of its points to the shape boundary, the skeleton can also serve as a representation of the shape (they contain all the information necessary to reconstruct the shape). Skeletons have several different mathematical definitions in the technical literature, and there are many different algorithms for computing them. Various variants of the skeleton can also be found, including straight skeletons, morphological skeletons, etc. In the technical literature, the concepts of skeleton and medial axis are used interchangeably by some authors, while some other authors regard them as related, but not the same. Similarly, the concepts of skeletonization and thinning are also regarded as identical by some, and not by others. Skeletons are widely used in computer vision, image analysis, pattern recognition and digital image processing for purposes such as optical character recognition, fingerprint recognition, visual inspection or compression. Within the life sciences, skeletons have found extensive use to characterize protein folding and plant morphology on various biological scales. Mathematical definitions Skeletons have several different mathematical definitions in the technical literature; most of them lead to similar results in continuous spaces, but usually yield different results in discrete spaces. Quench points of the fire propagation model In his seminal paper, Harry Blum of the Air Force Cambridge Research Laboratories at Hanscom Air Force Base, in Bedford, Massachusetts, defined a medial axis for computing a skeleton of a shape, using an intuitive model of fire propagation on a grass field, where the field has the form of the given shape. If one "sets
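As a concrete illustration of one of the variants mentioned above, here is a minimal sketch of a morphological skeleton in the erosion/opening (Lantuéjoul) formulation for a binary image; NumPy, SciPy, and the particular structuring element are assumptions of this example, not prescriptions of the article:

import numpy as np
from scipy import ndimage

def morphological_skeleton(image: np.ndarray) -> np.ndarray:
    # Lantuejoul-style morphological skeleton of a 2-D binary image (a sketch):
    # S(X) = union over n of [ (X eroded n times) minus its opening ].
    img = image.astype(bool)
    selem = ndimage.generate_binary_structure(2, 1)   # 4-connected cross
    skeleton = np.zeros_like(img)
    eroded = img
    while eroded.any():
        opened = ndimage.binary_opening(eroded, structure=selem)
        skeleton |= eroded & ~opened       # points removed by the opening
        eroded = ndimage.binary_erosion(eroded, structure=selem)
    return skeleton

# Example: the skeleton of a filled rectangle collapses toward its medial axis.
rect = np.zeros((20, 40), dtype=bool)
rect[5:15, 5:35] = True
skel = morphological_skeleton(rect)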
https://en.wikipedia.org/wiki/Duty-free%20shop
A duty-free shop (or store) is a retail outlet whose goods are exempt from the payment of certain local or national taxes and duties, on the requirement that the goods will be sold to travelers who will take them out of the country, and who will then pay duties and taxes in their destination country (depending on its personal exemption limits and tariff regime). Which products can be sold duty-free varies by jurisdiction, as well as how they can be sold and how the duty component is calculated or refunded. The Tax Free World Association (TFWA) announced that in 2011 Asia-Pacific, with 35 percent of global duty-free and travel retail sales, had a larger share than Europe or the Americas, which accounted for 34 percent and 23 percent respectively. 31 percent of sales came from the fragrances and cosmetics category, followed by wine and spirits with 17 percent, and then tobacco products. However, some countries impose duty on goods brought into the country, even though they were bought duty-free in another country, or when the value or quantity of such goods exceeds an allowed limit. Duty-free shops are often found in the international zone of international airports, sea ports, and train stations, but goods can also be bought duty-free on board airplanes and passenger ships. They are not as commonly available for road or train travelers, although several border crossings between the United States and both Canada and Mexico have duty-free shops for car travelers. In some countries, any shop can participate in a reimbursement system, such as Global Blue and Premier Tax Free, wherein a sum equivalent to the tax is paid, but then the goods are presented to customs and the sum reimbursed on exit. Duty-free sales are abolished for intra-EU (inside the EU tax union) travelers but are retained for travelers whose final destination is outside the EU. The shops also sell to intra-EU travelers but with appropriate taxes. The world's largest ai
https://en.wikipedia.org/wiki/AND%20gate
The AND gate is a basic digital logic gate that implements logical conjunction (∧) from mathematical logic. The AND gate behaves according to its truth table: a HIGH output (1) results only if all the inputs to the AND gate are HIGH (1); if not all inputs to the AND gate are HIGH, a LOW output results. The function can be extended to any number of inputs. Symbols There are three symbols for AND gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. Additional inputs can be added as needed. For more information see the Logic gate symbols article. It can also be denoted with the symbol "^" or "&". The AND gate with inputs A and B and output C implements the logical expression C = A·B. This expression also may be denoted as C = A∧B or C = AB. Implementations An AND gate can be designed using only N-channel (pictured) or P-channel MOSFETs, but is usually implemented with both (CMOS). The digital inputs a and b cause the output F to have the same result as the AND function. AND gates may be made from discrete components and are readily available as integrated circuits in several different logic families. Analytical representation f(a, b) = a·b is the analytical representation of the AND gate. Alternatives If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are "universal gates", meaning that they can be used to make all the others. See also OR gate NOT gate NAND gate NOR gate XOR gate XNOR gate IMPLY gate Boolean algebra Logic gate
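A minimal sketch (an illustration, not part of the article) showing that a bitwise AND reproduces the gate's truth table and the analytical representation f(a, b) = a·b:

# Enumerate the two-input AND truth table; the output is 1 only when a = b = 1.
for a in (0, 1):
    for b in (0, 1):
        assert (a & b) == a * b   # bitwise AND agrees with f(a, b) = a*b
        print(a, b, a & b)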
https://en.wikipedia.org/wiki/OR%20gate
The OR gate is a digital logic gate that implements logical disjunction. The OR gate outputs "true" if any of its inputs are "true"; otherwise it outputs "false". The input and output states are normally represented by different voltage levels. Description Any OR gate can be constructed with two or more inputs. It outputs a 1 if any of these inputs are 1, or outputs a 0 only if all inputs are 0. The inputs and outputs are binary digits ("bits") which have two possible logical states. In addition to 1 and 0, these states may be called true and false, high and low, active and inactive, or other such pairs of symbols. Thus it performs a logical disjunction (∨) from mathematical logic. The gate can be represented with the plus sign (+) because it can be used for logical addition. Equivalently, an OR gate finds the maximum between two binary digits, just as the AND gate finds the minimum. Together with the AND gate and the NOT gate, the OR gate is one of three basic logic gates from which any Boolean circuit may be constructed. All other logic gates may be made from these three gates; any function in binary mathematics may be implemented with them. It is sometimes called the inclusive OR gate to distinguish it from XOR, the exclusive OR gate. The behavior of OR is the same as XOR except in the case of a 1 for both inputs. In situations where this never arises (for example, in a full-adder) the two types of gates are interchangeable. This substitution is convenient when a circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip. Symbols There are two logic gate symbols currently representing the OR gate: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol. The DIN symbol is deprecated. The "≥1" on the IEC symbol indicates that the output is activated by at least one active input. Hardware description and pinout OR gates are basic logic gates, and are available in TTL and CMO
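The remark that OR picks the maximum of two bits while AND picks the minimum can be checked directly; this snippet is an illustration, not part of the article:

# For single bits, OR is the maximum and AND is the minimum of the inputs.
for a in (0, 1):
    for b in (0, 1):
        assert (a | b) == max(a, b)
        assert (a & b) == min(a, b)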
https://en.wikipedia.org/wiki/Gemma%20%28botany%29
A gemma (plural gemmae) is a single cell, or a mass of cells, or a modified bud of tissue, that detaches from the parent and develops into a new individual. This type of asexual reproduction is referred to as fragmentation. It is a means of asexual propagation in plants. These structures are commonly found in fungi, algae, liverworts and mosses, but also in some flowering plants such as pygmy sundews and some species of butterworts. Vascular plants have many other methods of asexual reproduction including bulbils and turions. In mosses and liverworts The production of gemmae is a widespread means of asexual reproduction in both liverworts and mosses. In liverworts such as Marchantia, the flattened plant body or thallus is a haploid gametophyte with gemma cups scattered about its upper surface. The gemma cups are cup-like structures containing gemmae. The gemmae are small discs of haploid tissue, and they directly give rise to new gametophytes. They are dispersed from gemma cups by rainfall. The gemmae are bilaterally symmetrical and are not differentiated into dorsal and ventral surfaces. The mature gemmae fall on the ground and, if conditions are suitable, their germination starts immediately. The surface of the gemma which comes in contact with the soil gives out many rhizoids. This surface eventually becomes the lower (ventral) surface of the thallus. Meanwhile, the apical cells present in the two lateral notches become active and form two thalli in opposite directions. Endogenous gemmae are also produced in liverworts; these are ovoid or ellipsoid in shape, two-celled, and borne at leaf tips or margins. Examples include Bazzania kokawana (Fossombroniaceae), Endogemma caespiticia, and Riccardia species.
https://en.wikipedia.org/wiki/The%20Fantastic%20Four%20%28unreleased%20film%29
The Fantastic Four is an unreleased 1994 superhero film based on the Marvel Comics superhero team of the same name, created by Stan Lee and Jack Kirby. The film features the team's origin and first battle with Doctor Doom. Executive-produced by low-budget specialists Roger Corman and Bernd Eichinger, it was made to allow Eichinger to keep the Fantastic Four film rights. It was not released officially, although pirated copies have circulated since May 31, 1994, and various clips are available on YouTube and Dailymotion. Plot Reed Richards and Victor Von Doom are college friends who use the opportunity of a passing comet to try an experiment. It goes wrong, leaving Victor believed dead. Susan and Johnny Storm are two children living with their mother, who has a boarding house where Reed lives. Ben Grimm is a family friend and a college buddy of Reed's. Ten years later, Reed, Susan, Johnny and Ben participate in a mission in an experimental spacecraft of Reed's as the same comet passes Earth. Unbeknownst to them, a crucial diamond component designed to protect them from the comet's cosmic rays has been replaced with an imitation by a criminal named The Jeweler, leaving them exposed to the radiation. After crash-landing on Earth they discover that the cosmic rays have given them special powers: Reed's bodily structure has become elastic, Susan can become invisible, Johnny can generate fire on demand and Ben has transformed into a creature with stone-like skin. They are later captured by men posing as Marines and are taken to Victor, who has become the villainous monarch Dr. Doom. They escape and meet at the Baxter Building, trying to decide how to move forward with their superpowers. An angry Ben leaves them to go out on his own, feeling he has become a freak. He is found by homeless men and joins them in the lair of the Jeweler. The Jeweler has his henchmen kidnap blind artist Alicia Masters whom he plans to force into being his bride, intending to use the
https://en.wikipedia.org/wiki/Per%20Martin-L%C3%B6f
Per Erik Rutger Martin-Löf (born 8 May 1942) is a Swedish logician, philosopher, and mathematical statistician. He is internationally renowned for his work on the foundations of probability, statistics, mathematical logic, and computer science. Since the late 1970s, Martin-Löf's publications have been mainly in logic. In philosophical logic, Martin-Löf has wrestled with the philosophy of logical consequence and judgment, partly inspired by the work of Brentano, Frege, and Husserl. In mathematical logic, Martin-Löf has been active in developing intuitionistic type theory as a constructive foundation of mathematics; Martin-Löf's work on type theory has influenced computer science. Until his retirement in 2009, Per Martin-Löf held a joint chair for Mathematics and Philosophy at Stockholm University. His brother Anders Martin-Löf is now emeritus professor of mathematical statistics at Stockholm University; the two brothers have collaborated in research in probability and statistics. The research of Anders and Per Martin-Löf has influenced statistical theory, especially concerning exponential families, the expectation-maximization method for missing data, and model selection. Per Martin-Löf received his PhD in 1970 from Stockholm University, under Andrey Kolmogorov. Martin-Löf is an enthusiastic bird-watcher; his first scientific publication was on the mortality rates of ringed birds. Randomness and Kolmogorov complexity In 1964 and 1965, Martin-Löf studied in Moscow under the supervision of Andrei N. Kolmogorov. He wrote a 1966 article The definition of random sequences that gave the first suitable definition of a random sequence. Earlier researchers such as Richard von Mises had attempted to formalize the notion of a test for randomness in order to define a random sequence as one that passed all tests for randomness; however, the precise notion of a randomness test was left vague. Martin-Löf's key insight was to use the theory of computation to define forma
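In modern notation, the definition that grew out of that 1966 article is usually stated as follows; this is a standard textbook formulation, not a quotation from the article. A Martin-Löf test is a uniformly effectively open (computably enumerable) sequence of subsets of Cantor space whose measures are bounded by 2^(-n), and a sequence is random exactly when it avoids every such test:

% A Martin-Lof test is a uniformly c.e. sequence (U_n) of open subsets of
% Cantor space 2^omega with lambda(U_n) <= 2^{-n} for every n.
\[
  x \in 2^{\omega} \ \text{is Martin-L\"of random}
  \iff
  x \notin \bigcap_{n} U_{n}
  \ \text{for every Martin-L\"of test } (U_{n})_{n \in \mathbb{N}}.
\]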
https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov%20interpretation
In mathematical logic, the Brouwer–Heyting–Kolmogorov interpretation, or BHK interpretation, of intuitionistic logic was proposed by L. E. J. Brouwer and Arend Heyting, and independently by Andrey Kolmogorov. It is also sometimes called the realizability interpretation, because of the connection with the realizability theory of Stephen Kleene. It is the standard explanation of intuitionistic logic. The interpretation The interpretation states what is intended to be a proof of a given formula. This is specified by induction on the structure of that formula: A proof of P ∧ Q is a pair ⟨a, b⟩ where a is a proof of P and b is a proof of Q. A proof of P ∨ Q is either ⟨0, a⟩ where a is a proof of P or ⟨1, b⟩ where b is a proof of Q. A proof of P → Q is a function f that converts a proof of P into a proof of Q. A proof of ∃x ∈ S : φ(x) is a pair ⟨a, b⟩ where a is an element of S and b is a proof of φ(a). A proof of ∀x ∈ S : φ(x) is a function f that converts an element a of S into a proof of φ(a). The formula ¬P is defined as P → ⊥, so a proof of it is a function f that converts a proof of P into a proof of ⊥. There is no proof of ⊥, the absurdity or bottom type (nontermination in some programming languages). The interpretation of a primitive proposition is supposed to be known from context. In the context of arithmetic, a proof of the formula s = t is a computation reducing the two terms to the same numeral. Kolmogorov followed the same lines but phrased his interpretation in terms of problems and solutions. To assert a formula is to claim to know a solution to the problem represented by that formula. For instance P → Q is the problem of reducing Q to P; to solve it requires a method to solve problem Q given a solution to problem P. Examples The identity function is a proof of the formula P → P, no matter what P is. The law of non-contradiction ¬(P ∧ ¬P) expands to (P ∧ ¬P) → ⊥: A proof of (P ∧ ¬P) → ⊥ is a function f that converts a proof of (P ∧ ¬P) into a proof of ⊥. A proof of (P ∧ ¬P) is a pair of proofs ⟨a, b⟩, where a is a proof of P, and b is a proof of ¬P. A proof of ¬P is a function that converts a proof of P into a proo
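The clauses above read directly as typed programs (the Curry–Howard reading). The following Python fragment is an illustrative sketch, not part of the article: propositions are treated as types and proofs as values, so conjunction becomes a pair, implication a function, and negation a function into an empty (bottom) type. All names here are hypothetical.

# Illustrative sketch of the BHK reading of proofs as programs.
from typing import Callable, NoReturn, Tuple, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")

# A proof of P -> P is a function turning a proof of P into a proof of P:
def identity(p: P) -> P:
    return p

# A proof of P /\ Q is a pair; the first projection proves (P /\ Q) -> P:
def fst(pq: Tuple[P, Q]) -> P:
    return pq[0]

# Negation: not-P is P -> bottom, where the bottom type has no proofs at all.
Bottom = NoReturn

# Law of non-contradiction, not-(P /\ not-P): given a pair (a, f) with a : P and
# f : P -> Bottom, applying f to a yields the required proof of Bottom.
def non_contradiction(pair: Tuple[P, Callable[[P], Bottom]]) -> Bottom:
    a, f = pair
    return f(a)

print(identity("any proof object"))  # identity proves P -> P for every P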
https://en.wikipedia.org/wiki/Lake%20stratification
Lake stratification is the tendency of lakes to form separate and distinct thermal layers during warm weather. Typically stratified lakes show three distinct layers: the epilimnion, comprising the top warm layer; the thermocline (or metalimnion), the middle layer, whose depth may change throughout the day; and the colder hypolimnion, extending to the floor of the lake. Every lake has a set mixing regime that is influenced by lake morphometry and environmental conditions. However, changes to human influences in the form of land use change, increases in temperature, and changes to weather patterns have been shown to alter the timing and intensity of stratification in lakes around the globe. Rising air temperatures have the same effect on lake bodies as a physical shift in geographic location, with tropical zones being particularly sensitive. These changes can further alter the fish, zooplankton, and phytoplankton community composition, in addition to creating gradients that alter the availability of dissolved oxygen and nutrients. Definition The thermal stratification of lakes refers to a change in the temperature at different depths in the lake, and is due to the density of water varying with temperature. Cold water is denser than warm water and the epilimnion generally consists of water that is not as dense as the water in the hypolimnion. However, the temperature of maximum density for freshwater is 4 °C. In temperate regions where lake water warms up and cools through the seasons, a cyclical pattern of overturn occurs that is repeated from year to year as the cold dense water at the top of the lake sinks (see stable and unstable stratification). For example, in dimictic lakes the lake water turns over during the spring and the fall. This process occurs more slowly in deeper water and as a result, a thermal bar may form. If the stratification of water lasts for extended periods, the lake is meromictic. In shallow lakes, stratification into epilimnion, metalimnio
https://en.wikipedia.org/wiki/Electron%20paramagnetic%20resonance
Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) spectroscopy is a method for studying materials that have unpaired electrons. The basic concepts of EPR are analogous to those of nuclear magnetic resonance (NMR), but the spins excited are those of the electrons instead of the atomic nuclei. EPR spectroscopy is particularly useful for studying metal complexes and organic radicals. EPR was first observed in Kazan State University by Soviet physicist Yevgeny Zavoisky in 1944, and was developed independently at the same time by Brebis Bleaney at the University of Oxford. Theory Origin of an EPR signal Every electron has a magnetic moment and spin quantum number s = 1/2, with magnetic components m_s = +1/2 or m_s = −1/2. In the presence of an external magnetic field with strength B_0, the electron's magnetic moment aligns itself either antiparallel (m_s = +1/2) or parallel (m_s = −1/2) to the field, each alignment having a specific energy due to the Zeeman effect: E = m_s g_e μ_B B_0, where g_e is the electron's so-called g-factor (see also the Landé g-factor), g_e ≈ 2.0023 for the free electron, and μ_B is the Bohr magneton. Therefore, the separation between the lower and the upper state is ΔE = g_e μ_B B_0 for unpaired free electrons. This equation implies (since both g_e and μ_B are constant) that the splitting of the energy levels is directly proportional to the magnetic field's strength, as shown in the diagram below. An unpaired electron can change its electron spin by either absorbing or emitting a photon of energy hν such that the resonance condition, hν = ΔE, is obeyed. This leads to the fundamental equation of EPR spectroscopy: hν = g_e μ_B B_0. Experimentally, this equation permits a large combination of frequency and magnetic field values, but the great majority of EPR measurements are made with microwaves in the 9000–10000 MHz (9–10 GHz) region, with fields corresponding to about 3500 G (0.35 T). Furthermore, EPR spectra can be generated by either varying the photon frequency incident on a sample while holding the magnetic field constant or doing the reverse. In pra
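As a quick numeric check (an illustrative aside, not from the article), the resonance condition hν = g_e μ_B B_0 can be solved for the field at a typical X-band frequency; the 9.5 GHz figure below is an assumed value chosen to sit inside the 9–10 GHz range mentioned above.

# Resonance field for a free electron at an assumed X-band frequency.
h = 6.62607015e-34       # Planck constant, J s
mu_B = 9.2740100783e-24  # Bohr magneton, J/T
g_e = 2.00232            # free-electron g-factor
nu = 9.5e9               # microwave frequency, Hz (assumed X-band value)

B0 = h * nu / (g_e * mu_B)                     # resonance field in tesla
print(f"B0 = {B0:.4f} T = {B0 * 1e4:.0f} G")   # ~0.339 T, i.e. ~3390 G

The result of roughly 0.34 T (about 3400 G) matches the field scale quoted in the text for X-band spectrometers.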
https://en.wikipedia.org/wiki/Radical%20anion
In organic chemistry, a radical anion is a free radical species that carries a negative charge. Radical anions are encountered in organic chemistry as reduced derivatives of polycyclic aromatic compounds, e.g. sodium naphthenide. An example of a non-carbon radical anion is the superoxide anion, formed by transfer of one electron to an oxygen molecule. Radical anions are typically indicated by M•−. Polycyclic radical anions Many aromatic compounds can undergo one-electron reduction by alkali metals. The electron is transferred from the alkali metal ion to an unoccupied antibonding π* orbital of the aromatic molecule. This transfer is usually only energetically favorable if the aprotic solvent efficiently solvates the alkali metal ion. Effective solvents are those that bind to the alkali metal cation: diethyl ether < THF < 1,2-dimethoxyethane < HMPA. In principle any unsaturated molecule can form a radical anion, but the antibonding orbitals are only energetically accessible in more extensive conjugated systems. Ease of formation is in the order benzene < naphthalene < anthracene < pyrene, etc. Salts of the radical anions are often not isolated as solids but used in situ. They are usually deeply colored. Naphthalene in the form of lithium naphthalene is obtained from the reaction of naphthalene with lithium. Sodium naphthalene is obtained from the reaction of naphthalene with sodium. Sodium 1-methylnaphthalene and 1-methylnaphthalene are more soluble than sodium naphthalene and naphthalene, respectively. biphenyl as its lithium salt. acenaphthylene is a milder reductant than the naphthalene anion. anthracene in the form of its alkali metal salts. pyrene as its sodium salt. Perylene in the form of its alkali metal (M = Li, Na, Cs) etherates. Other examples Cyclooctatetraene is reduced by elemental potassium to the dianion. The resulting dianion is a 10-pi electron system, which conforms to the Hückel rule for aromaticity. Quinone is reduced to a semiquinone
https://en.wikipedia.org/wiki/Bolting%20%28horticulture%29
In horticulture, bolting is the production of a flowering stem (or stems) on agricultural and horticultural crops before the harvesting of a crop, at a stage when a plant makes a natural attempt to produce seeds and to reproduce. The flowering stems are usually vigorous extensions of existing leaf-bearing stems; to produce them, a plant diverts resources from producing the edible parts (such as leaves or roots), resulting in changes in flavor and texture, withering, and in general, a poor-quality harvest. Plants that have produced flowering stems in this way are said to have bolted. Crops inclined to bolt include lettuce, basil, beetroot, brassicas, spinach, celery, onion, and leek. Bolting is induced by plant hormones of the gibberellin family, and can occur as a result of several factors, including changes in day-length, the prevalence of high temperatures at particular stages in a plant's growth-cycle, and the existence of stresses such as insufficient water or minerals. These factors may interact in a complex way. Day-length may affect the propensity to bolt in that some plants are "long day plants", some are "short day plants" and some are "day neutral" (see photoperiodism), so for example when a long day plant, such as spinach, experiences increasingly long days that reach a particular length, it will tend to bolt. Low or high temperatures can affect the propensity of some plants to bolt if they occur for sufficient periods at particular points in the life-cycle of the plant; once these conditions have been met, plants that require such a trigger will subsequently bolt - regardless of subsequent temperatures. Plants under stress may respond by bolting so that they can produce seeds before they die. Plant breeders have introduced cultivars of "bolt-proof" crops that are less prone to the condition.
https://en.wikipedia.org/wiki/Teletext
Teletext, or broadcast teletext, is a standard for displaying text and rudimentary graphics on suitably equipped television sets. Teletext sends data in the broadcast signal, hidden in the invisible vertical blanking interval area at the top and bottom of the screen. The teletext decoder in the television buffers this information as a series of "pages", each given a number. The user can display chosen pages using their remote control. In broad terms, it can be considered as Videotex, a system for the delivery of information to a user in a computer-like format, typically displayed on a television or a dumb terminal, but that designation is usually reserved for systems that provide bi-directional communication, such as Prestel or Minitel. Teletext was created in the United Kingdom in the early 1970s by John Adams, Philips' lead designer for video display units. Public teletext information services were introduced by major broadcasters in the UK, starting with the BBC's Ceefax service in 1974. It offered a range of text-based information, typically including news, weather and TV schedules. Also, paged subtitle (or closed captioning) information was transmitted using the same system. Similar systems were subsequently introduced by other television broadcasters in the UK and mainland Europe in the following years. Meanwhile, the UK's General Post Office introduced the Prestel system using the same display standards but run over telephone lines using bi-directional modems rather than the send-only system used with televisions. Teletext formed the basis for the World System Teletext standard (CCIR Teletext System B), an extended version of the original system. This standard saw widespread use across Europe starting in the 1980s, with almost all televisions sets including a decoder. Other standards were developed around the world, notably NABTS (CCIR Teletext System C) in the United States, Antiope (CCIR Teletext System A) in France and JTES (CCIR Teletext System D) in Ja
https://en.wikipedia.org/wiki/Czes%C5%82aw%20Ryll-Nardzewski
Czesław Ryll-Nardzewski (; 7 October 1926 – 18 September 2015) was a Polish mathematician. Born in Wilno, Second Polish Republic (now Vilnius, Lithuania), he was a student of Hugo Steinhaus. At the age of 26 he became professor at Warsaw University. In 1959, he became a professor at the Wrocław University of Technology. He was the advisor of 18 PhD theses. His main research areas are measure theory, functional analysis, foundations of mathematics and probability theory. Several theorems bear his name: the Ryll-Nardzewski fixed point theorem, the Ryll-Nardzewski theorem in model theory, and the Kuratowski and Ryll-Nardzewski measurable selection theorem. He became a member of the Polish Academy of Sciences in 1967. He died in 2015 at the age of 88 and he is buried in the Wrocław, cm. Grabiszyński.
https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno%20algorithm
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method. Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only O(n²), compared to O(n³) in Newton's method. Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The L-BFGS-B variant handles simple box constraints. The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno. Rationale The optimization problem is to minimize f(x), where x is a vector in R^n, and f is a differentiable scalar function. There are no constraints on the values that x can take. The algorithm begins at an initial estimate x_0 for the optimal value and proceeds iteratively to get a better estimate at each stage. The search direction p_k at stage k is given by the solution of the analogue of the Newton equation: B_k p_k = −∇f(x_k), where B_k is an approximation to the Hessian matrix at x_k, which is updated iteratively at each stage, and ∇f(x_k) is the gradient of the function evaluated at x_k. A line search in the direction p_k is then used to find the next point x_{k+1} by minimizing f(x_k + γ p_k) over the scalar γ > 0. The quasi-Newton condition imposed on the update of B_{k+1} is B_{k+1}(x_{k+1} − x_k) = ∇f(x_{k+1}) − ∇f(x_k). Let y_k = ∇f(x_{k+1}) − ∇f(x_k) and s_k = x_{k+1} − x_k; then B_{k+1} satisfies B_{k+1} s_k = y_k, which is the secant equation. The curvature condition s_k^T y_k > 0 should be satisfied for B_{k+1} to be positive definite, which can be verified by pre-multiplying the secant equation with s_k^T. If the function is not strongly convex, then the condition has to be enforce
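The direction, line-search and curvature steps described above can be put into a short program. The following Python fragment is a minimal illustrative sketch rather than a production implementation: it maintains the equivalent inverse-Hessian approximation H_k (so no linear system is solved), uses a simple backtracking line search, and skips the update whenever the curvature condition fails. The Rosenbrock test function is an assumption added for the demonstration.

# Minimal BFGS sketch (illustrative only), using the inverse-Hessian form of the update.
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # search direction p_k
        gamma, c = 1.0, 1e-4           # backtracking (Armijo) line search for gamma
        while f(x + gamma * p) > f(x) + c * gamma * (g @ p):
            gamma *= 0.5
        s = gamma * p                  # s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # y_k = grad f(x_{k+1}) - grad f(x_k)
        sy = s @ y
        if sy > 1e-12:                 # curvature condition s_k^T y_k > 0
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function (assumed test problem).
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
print(bfgs(f, grad, [-1.2, 1.0]))      # converges to approximately [1, 1]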
https://en.wikipedia.org/wiki/Naming%20convention%20%28programming%29
In computer programming, a naming convention is a set of rules for choosing the character sequence to be used for identifiers which denote variables, types, functions, and other entities in source code and documentation. Reasons for using a naming convention (as opposed to allowing programmers to choose any character sequence) include the following: To reduce the effort needed to read and understand source code; To enable code reviews to focus on issues more important than syntax and naming standards. To enable code quality review tools to focus their reporting mainly on significant issues other than syntax and style preferences. The choice of naming conventions can be a controversial issue, with partisans of each holding theirs to be the best and others to be inferior. Colloquially, this is said to be a matter of dogma. Many companies have also established their own set of conventions. Potential benefits Benefits of a naming convention can include the following: to provide additional information (i.e., metadata) about the use to which an identifier is put; to help formalize expectations and promote consistency within a development team; to enable the use of automated refactoring or search and replace tools with minimal potential for error; to enhance clarity in cases of potential ambiguity; to enhance the aesthetic and professional appearance of work product (for example, by disallowing overly long names, comical or "cute" names, or abbreviations); to help avoid "naming collisions" that might occur when the work product of different organizations is combined (see also: namespaces); to provide meaningful data to be used in project handovers which require submission of program source code and all relevant documentation; to provide better understanding in case of code reuse after a long interval of time. Challenges The choice of naming conventions (and the extent to which they are enforced) is often a contentious issue, with partisans holding their v
https://en.wikipedia.org/wiki/Authentication%20server
An authentication server provides a network service that applications use to authenticate the credentials, usually account names and passwords, of their users. When a client submits a valid set of credentials, it receives a cryptographic ticket that it can subsequently use to access various services. Authentication is used as the basis for authorization, which is the determination whether a privilege may be granted to a particular user or process, privacy, which keeps information from becoming known to non-participants, and non-repudiation, which is the inability to deny having done something that was authorized to be done based on the authentication. Major authentication algorithms include passwords, Kerberos, and public key encryption. See also TACACS+ RADIUS Multi-factor authentication Universal 2nd Factor
https://en.wikipedia.org/wiki/RIVPACS
RIVPACS (River Invertebrate Prediction and Classification System) is an aquatic biomonitoring system for assessing water quality in freshwater rivers in the United Kingdom. It is based on the macroinvertebrate species (such as freshwater shrimp, freshwater sponges, worms, crayfish, aquatic snails, freshwater mussels, insects, and many others) found at the study site during sampling. Some of these species are tolerant to pollution, low dissolved oxygen, and other stressors, but others are sensitive; organisms vary in their tolerances. Therefore, different species will usually be found, in different proportions, at different river sites of varying quality. Some organisms are especially good indicator species. The species found at the reference sites collectively make up the species assemblage for that site and are the basis for a statistical comparison between reference sites and non-reference sites. The comparison between the expected species and the observed species can then be used to estimate this aspect of the ecological health of a river. The system is meant to be standardized, easy to use, and relatively low cost. It can complement other types of water quality monitoring such as chemical monitoring. RIVPACS supports the implementation of the Water Framework Directive as its official tool for macroinvertebrate classification Reference sites can be chosen and adjusted several ways. Usually they represent the best conditions within the region or area under study, and are a short stretch of river. Sometimes the reference site expectations are adjusted for degradation of the entire region by human impact. 'Pristine' freshwater sites are sampled to collect information on physical characteristics, chemistry, and macroinvertebrates, sometimes several times each year. This information is then used to predict what invertebrates are present from samples of physiochemistry from other sites. RIVPACS is used across the UK and supported by Centre for Ecology and Hydrology,
https://en.wikipedia.org/wiki/Goldman%20equation
The Goldman–Hodgkin–Katz voltage equation, sometimes called the Goldman equation, is used in cell membrane physiology to determine the reversal potential across a cell's membrane, taking into account all of the ions that are permeant through that membrane. The discoverers of this are David E. Goldman of Columbia University, and the Medicine Nobel laureates Alan Lloyd Hodgkin and Bernard Katz. Equation for monovalent ions The GHK voltage equation for monovalent positive ionic species M+ and negative A−: E_m = (RT/F) ln[(Σ_i P_{M_i}[M_i+]_out + Σ_j P_{A_j}[A_j−]_in) / (Σ_i P_{M_i}[M_i+]_in + Σ_j P_{A_j}[A_j−]_out)]. This results in the following if we consider a membrane separating two solutions of sodium, potassium and chloride ions: E_m = (RT/F) ln[(P_Na[Na+]_out + P_K[K+]_out + P_Cl[Cl−]_in) / (P_Na[Na+]_in + P_K[K+]_in + P_Cl[Cl−]_out)]. It is "Nernst-like" but has a term for each permeant ion: E_m = the membrane potential (in volts, equivalent to joules per coulomb); P_ion = the selectivity for that ion (in meters per second); [ion]_out = the extracellular concentration of that ion (in moles per cubic meter, to match the other SI units); [ion]_in = the intracellular concentration of that ion (in moles per cubic meter); R = the ideal gas constant (joules per kelvin per mole); T = the temperature in kelvins; F = Faraday's constant (coulombs per mole). RT/F is approximately 26.7 mV at human body temperature (37 °C); when factoring in the change-of-base formula between the natural logarithm, ln, and the logarithm with base 10, it becomes approximately 61.5 mV, a value often used in neuroscience. The ionic charge determines the sign of the membrane potential contribution. During an action potential, although the membrane potential changes about 100 mV, the concentrations of ions inside and outside the cell do not change significantly. They are always very close to their respective concentrations when the membrane is at its resting potential. Calculating the first term Using R ≈ 8.314 J·K−1·mol−1, F ≈ 96485 C·mol−1 and T = 310 K (assuming body temperature), and the fact that one volt is equal to one joule of energy per coulomb of charge, the equation can be reduced to E_X = 26.7 mV · ln([X]_out/[X]_in) for a single permeant ion X, which is the Nernst equation. Derivation Goldman's equation seeks to determine the voltage Em across a membrane. A Cartesian coordinate system is used to desc
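A worked example may help. The Python sketch below evaluates the GHK equation at body temperature with typical textbook relative permeabilities and concentrations for a mammalian cell; all numerical values are illustrative assumptions, not figures taken from the article.

# Illustrative GHK calculation for a cell permeable to K+, Na+ and Cl-.
import math

R, T, F = 8.314, 310.0, 96485.0          # J/(K mol), K (37 C), C/mol

# Assumed relative permeabilities and concentrations in mM (out = extracellular, inn = intracellular)
P   = {"K": 1.0,   "Na": 0.05,  "Cl": 0.45}
out = {"K": 5.0,   "Na": 145.0, "Cl": 110.0}
inn = {"K": 140.0, "Na": 10.0,  "Cl": 10.0}

# Cation terms use out/in; the anion (Cl-) term is reversed, in/out.
num = P["K"] * out["K"] + P["Na"] * out["Na"] + P["Cl"] * inn["Cl"]
den = P["K"] * inn["K"] + P["Na"] * inn["Na"] + P["Cl"] * out["Cl"]
Em = (R * T / F) * math.log(num / den)
print(f"Em = {Em * 1000:.1f} mV")        # roughly -65 mV with these illustrative numbers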
https://en.wikipedia.org/wiki/Distributed%20File%20System%20%28Microsoft%29
Distributed File System (DFS) is a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS has two components to its service: Location transparency (via the namespace component) and Redundancy (via the file replication component). Together, these components enable data availability in the case of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, the "DFS root". Microsoft's DFS is referred to interchangeably as 'DFS' and 'Dfs' by Microsoft and is unrelated to the DCE Distributed File System, which held the 'DFS' trademark but was discontinued in 2005. It is also called "MS-DFS" or "MSDFS" in some contexts, e.g. in the Samba user space project. Overview There is no requirement to use the two components of DFS together; it is perfectly possible to use the logical namespace component without using DFS file replication, and it is perfectly possible to use file replication between servers without combining them into one namespace. A DFS root can only exist on a server version of Windows (from Windows NT 4.0 and up) and OpenSolaris (in kernel space) or a computer running Samba (in user space.) The Enterprise and Datacenter Editions of Windows Server can host multiple DFS roots on the same server. OpenSolaris intends on supporting multiple DFS roots in "a future project based on Active Directory (AD) domain-based DFS namespaces". There are two ways of implementing DFS on a server: Standalone DFS namespace - allows for a DFS root that exists only on the local computer, and thus does not use Active Directory. A Standalone DFS can only be accessed on the computer on which it is created. It does not offer any fault tolerance and cannot be linked to any other DFS. This is the only option available on Windows NT 4.0 Server systems. Standalone DFS roots are rarely encountered because of their
https://en.wikipedia.org/wiki/Orville%20Carlisle
Orville H. Carlisle (July 5, 1917 – August 1, 1988), a shoe salesman in Norfolk, Nebraska invented the hobby that would become known as model rocketry. In 1953, Orville and his brother were joint owners of a shoe store on 420 Norfolk Ave. Robert, a model aviation enthusiast, demonstrated his "U-control" planes for groups in parks and schools in and around Norfolk, to demonstrate advances in aeronautical technology since World War II. He wanted a model missile for use in his demonstrations, to illustrate rocketry technology (which would, in a few years, lead to the beginning of the Space Age). He called on Orville, whose hobby was pyrotechnics, to build him a rocket. By 1954, Orville had developed his first rocket, the Rock-A-Chute Mark I. This model had an airframe of paper with balsa fins mounted on long booms behind the body. Propulsion was achieved by a handmade solid rocket motor burning DuPont fffG black powder propellant. The engine was used once and then discarded. The same technology goes into model rocket engines produced currently. Carlisle's second rocket, the Rock-A-Chute Mark II had a more streamlined design and is still a practical model rocket today. In 1958, he was awarded for his design of a "toy rocket". G. Harry Stine, in an article published posthumously in Sport Rocketry magazine, wrote that the U.S. Patent Office should not have awarded Carlisle the patent because the design merely represented a reasonable extension of existing fireworks technology. Prior to the launch of Sputnik in 1957, Carlisle read an article in the February 1957 issue of Popular Mechanics by G. Harry Stine, then an engineer working at White Sands Missile Range. The article remarked on the danger that individuals (mostly teenage boys), inspired by the birth of the Space Age, might experiment with rockets of their own design and end up seriously hurting themselves or even dying. Carlisle realized that he had a solution to this problem with his "Rock-A-Chute" models and
https://en.wikipedia.org/wiki/Vespertine%20%28biology%29
Vespertine is a term used in the life sciences to indicate something of, relating to, or occurring in the evening. In botany, a vespertine flower is one that opens or blooms in the evening. In zoology, the term is used for a creature that becomes active at dusk, such as bats and owls. Strictly speaking, however, the term means that activity ceases during the hours of full darkness and does not resume until the next evening. Activity that continues throughout the night should be described as nocturnal. Vespertine behaviour is a special case of crepuscular behaviour; like crepuscular activity, vespertine activity is limited to dusk rather than full darkness. Unlike vespertine activity, crepuscular activity may resume in dim twilight before dawn. A related term is matutinal, referring to activity limited to the dawn twilight. The word vespertine is derived from the Latin word , an adjective meaning "evening". See also Crypsis Matutinal
https://en.wikipedia.org/wiki/Ben%20Green%20%28mathematician%29
Ben Joseph Green FRS (born 27 February 1977) is a British mathematician, specialising in combinatorics and number theory. He is the Waynflete Professor of Pure Mathematics at the University of Oxford. Early life and education Ben Green was born on 27 February 1977 in Bristol, England. He studied at local schools in Bristol, Bishop Road Primary School and Fairfield Grammar School, competing in the International Mathematical Olympiad in 1994 and 1995. He entered Trinity College, Cambridge in 1995 and completed his BA in mathematics in 1998, winning the Senior Wrangler title. He stayed on for Part III and earned his doctorate under the supervision of Timothy Gowers, with a thesis entitled Topics in arithmetic combinatorics (2003). During his PhD he spent a year as a visiting student at Princeton University. He was a research Fellow at Trinity College, Cambridge between 2001 and 2005, before becoming a Professor of Mathematics at the University of Bristol from January 2005 to September 2006 and then the first Herchel Smith Professor of Pure Mathematics at the University of Cambridge from September 2006 to August 2013. He became the Waynflete Professor of Pure Mathematics at the University of Oxford on 1 August 2013. He was also a Research Fellow of the Clay Mathematics Institute and held various positions at institutes such as Princeton University, University of British Columbia, and Massachusetts Institute of Technology. Mathematics The majority of Green's research is in the fields of analytic number theory and additive combinatorics, but he also has results in harmonic analysis and in group theory. His best known theorem, proved jointly with his frequent collaborator Terence Tao, states that there exist arbitrarily long arithmetic progressions in the prime numbers: this is now known as the Green–Tao theorem. Amongst Green's early results in additive combinatorics are an improvement of a result of Jean Bourgain of the size of arithmetic progressions in sumsets, a
https://en.wikipedia.org/wiki/Reference%20monitor
In operating systems architecture a reference monitor concept defines a set of design requirements on a reference validation mechanism, which enforces an access control policy over subjects' (e.g., processes and users) ability to perform operations (e.g., read and write) on objects (e.g., files and sockets) on a system. The properties of a reference monitor are captured by the acronym NEAT, which means: The reference validation mechanism must be Non-bypassable, so that an attacker cannot bypass the mechanism and violate the security policy. The reference validation mechanism must be Evaluable, i.e., amenable to analysis and tests, the completeness of which can be assured (verifiable). Without this property, the mechanism might be flawed in such a way that the security policy is not enforced. The reference validation mechanism must be Always invoked. Without this property, it is possible for the mechanism to not perform when intended, allowing an attacker to violate the security policy. The reference validation mechanism must be Tamper-proof. Without this property, an attacker can undermine the mechanism itself and hence violate the security policy. For example, Windows 3.x and 9x operating systems were not built with a reference monitor, whereas the Windows NT line, which also includes Windows 2000 and Windows XP, was designed to contain a reference monitor, although it is not clear that its properties (tamperproof, etc.) have ever been independently verified, or what level of computer security it was intended to provide. The claim is that a reference validation mechanism that satisfies the reference monitor concept will correctly enforce a system's access control policy, as it must be invoked to mediate all security-sensitive operations, must not be tampered with, and has undergone complete analysis and testing to verify correctness. The abstract model of a reference monitor has been widely applied to any type of system that needs to enforce access control a
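As a toy illustration (not taken from the article), the mediation idea can be sketched as a single validation function through which every access request must pass. The non-bypassable and tamper-proof properties are architectural guarantees enforced by the surrounding system, so the code itself cannot demonstrate them; the class and policy format below are hypothetical.

# Toy sketch of a reference validation mechanism: one choke point consulting the policy.
class ReferenceMonitor:
    def __init__(self, policy):
        # policy maps (subject, operation, object) -> bool
        self._policy = policy

    def check(self, subject, operation, obj):
        # "Always invoked": callers are expected to route every access through here.
        if not self._policy.get((subject, operation, obj), False):
            raise PermissionError(f"{subject} may not {operation} {obj}")

policy = {("alice", "read", "file1"): True}
monitor = ReferenceMonitor(policy)
monitor.check("alice", "read", "file1")    # permitted
# monitor.check("bob", "write", "file1")   # would raise PermissionError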
https://en.wikipedia.org/wiki/Conservation%20status
The conservation status of a group of organisms (for instance, a species) indicates whether the group still exists and how likely the group is to become extinct in the near future. Many factors are taken into account when assessing conservation status: not simply the number of individuals remaining, but the overall increase or decrease in the population over time, breeding success rates, and known threats. Various systems of conservation status are in use at international, multi-country, national and local levels, as well as for consumer use such as sustainable seafood advisory lists and certification. The two international systems are by the International Union for Conservation of Nature (IUCN) and The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). International systems IUCN Red List of Threatened Species The IUCN Red List of Threatened Species by the International Union for Conservation of Nature is the best known worldwide conservation status listing and ranking system. Species are classified by the IUCN Red List into nine groups set through criteria such as rate of decline, population size, area of geographic distribution, and degree of population and distribution fragmentation. Also included are species that have gone extinct since 1500 CE. When discussing the IUCN Red List, the official term "threatened" is a grouping of three categories: critically endangered, endangered, and vulnerable. Extinct (EX) – No known living individuals Extinct in the wild (EW) – Known only to survive in captivity, or as a naturalized population outside its historic range Critically Endangered (CR) – Highest risk of extinction in the wild Endangered (EN) – Higher risk of extinction in the wild Vulnerable (VU) – High risk of extinction in the wild Near Threatened (NT) – Likely to become endangered in the near future Conservation Dependent (CD) – Low risk; is conserved to prevent being near threatened, certain events may lead it t
https://en.wikipedia.org/wiki/Deconfinement
In physics, deconfinement (in contrast to confinement) is a phase of matter in which certain particles are allowed to exist as free excitations, rather than only within bound states. Examples Various examples exist in particle physics where certain gauge theories exhibit transitions between confining and deconfining phases. A prominent example, and the first case considered as such in theoretical physics, occurs at high energy in quantum chromodynamics when quarks and gluons are free to move over distances larger than a femtometer (the size of a hadron). This phase is also called the quark–gluon plasma. These ideas have been adopted in many-body theory of matter with a distinguished example developed in the context fractional quantum Hall effect. See also Onset of deconfinement Colour confinement Quark–gluon plasma Quark-nova Fractionalization Quark matter Gluons
https://en.wikipedia.org/wiki/Flavour%20%28particle%20physics%29
In particle physics, flavour or flavor refers to the species of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with flavour quantum numbers that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations. Quantum numbers In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition. In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness or topness) can characterize the quantum state of quarks, by the degree to which it exhibits six distinct flavours (u, d, s, c, b, t). Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour. Conservation laws All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as generators of symmetries that commute with the Hamiltonian. Thus, the eige
https://en.wikipedia.org/wiki/Ernst%20Gehrcke
Ernst J. L. Gehrcke (1 July 1878 in Berlin – 25 January 1960 in Hohen-Neuendorf) was a German experimental physicist. He was director of the optical department at the Reich Physical and Technical Institute. Concurrently, he was a professor at the University of Berlin. He developed the Lummer–Gehrcke method in interferometry and the multiplex interferometric spectroscope for precision resolution of spectral-line structures. As an anti-relativist, he was a speaker at an event organized in 1920 by the Working Society of German Scientists. He sat on the board of trustees of the Potsdam Astrophysical Observatory. After World War II, he worked at Carl Zeiss Jena, and he helped to develop and become the director of the Institute for Physiological Optics at the University of Jena. In 1949, he began work at the German Office for Materials and Product Testing. In 1953, he became the director of the optical department of the German Office for Weights and Measures. Education Gehrcke studied at the Friedrich-Wilhelms-Universität (today, the Humboldt-Universität zu Berlin) from 1897 to 1901. He received his doctorate under Emil Warburg in 1901. Career In 1901, Gehrcke joined the Physikalisch-Technische Reichsanstalt (PTR, Reich Physical and Technical Institute, after 1945 renamed the Physikalisch-Technische Bundesanstalt). In 1926, he became the director of the optical department, a position he held until 1946. Concurrent with his position at the PTR, he was a Privatdozent at the Friedrich-Wilhelms-Universität from 1904 to 1921 and an außerordentlicher Professor (extraordinarius professor) from 1921 to 1946. After the close of World War II, the University was in the Russian sector of Berlin. In 1946, Gehrcke worked at Carl Zeiss AG in Jena, and he helped to develop and become the director of the Institute for Physiological Optics at the Friedrich-Schiller-Universität Jena. In 1949, he went to East Berlin to the Deutsches Amt für Materialprüfung (German Office for Materials a
https://en.wikipedia.org/wiki/Bonding%20electron
A bonding electron is an electron involved in chemical bonding. This can refer to: Chemical bond, a lasting attraction between atoms, ions or molecules Covalent bond or molecular bond, a sharing of electron pairs between atoms Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule Chemical bonding
https://en.wikipedia.org/wiki/Von%20Mangoldt%20function
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive. Definition The von Mangoldt function, denoted by Λ(n), is defined as Λ(n) = log p if n = p^k for some prime p and integer k ≥ 1, and Λ(n) = 0 otherwise. The values of Λ(n) for the first nine positive integers (i.e. natural numbers) are 0, log 2, log 3, log 2, log 5, 0, log 7, log 2, log 3. Properties The von Mangoldt function satisfies the identity log n = Σ_{d | n} Λ(d). The sum is taken over all integers d that divide n. This is proved by the fundamental theorem of arithmetic, since the terms that are not powers of primes are equal to 0. For example, consider the case n = 12 = 2² · 3. Then Σ_{d | 12} Λ(d) = Λ(1) + Λ(2) + Λ(3) + Λ(4) + Λ(6) + Λ(12) = 0 + log 2 + log 3 + log 2 + 0 + 0 = log(2 · 3 · 2) = log 12. By Möbius inversion, we have Λ(n) = Σ_{d | n} μ(d) log(n/d), and using the product rule for the logarithm we get Λ(n) = −Σ_{d | n} μ(d) log d. For all x ≥ 1, we have Σ_{n ≤ x} Λ(n)/n = log x + O(1). Also, there exist positive constants c₁ and c₂ such that ψ(x) ≤ c₁x for all x ≥ 1, and ψ(x) ≥ c₂x for all sufficiently large x. Dirichlet series The von Mangoldt function plays an important role in the theory of Dirichlet series, and in particular, the Riemann zeta function. For example, one has log ζ(s) = Σ_{n ≥ 2} Λ(n)/(log n) · n^{−s} for Re(s) > 1. The logarithmic derivative is then ζ′(s)/ζ(s) = −Σ_{n ≥ 1} Λ(n) n^{−s}. These are special cases of a more general relation on Dirichlet series. If one has F(s) = Σ_n f(n) n^{−s} for a completely multiplicative function f(n), and the series converges for Re(s) > σ₀, then −F′(s)/F(s) = Σ_n f(n) Λ(n) n^{−s} converges for Re(s) > σ₀. Chebyshev function The second Chebyshev function ψ(x) is the summatory function of the von Mangoldt function: ψ(x) = Σ_{n ≤ x} Λ(n) = Σ_{p^k ≤ x} log p. It was introduced by Pafnuty Chebyshev who used it to show that the true order of the prime counting function π(x) is x/log x. Von Mangoldt provided a rigorous proof of an explicit formula for ψ(x) involving a sum over the non-trivial zeros of the Riemann zeta function. This was an important part of the first proof of the prime number theorem. The Mellin transform of the Chebyshev function can be found by applying Perron's formula: −ζ′(s)/ζ(s) = s ∫_1^∞ ψ(x) x^{−s−1} dx, which holds for Re(s) > 1. Exponential series Hardy and Littlewood examined the series F(y) = Σ_{n ≥ 2} (Λ(n) − 1) e^{−ny} in the limit y → 0⁺. Assuming the Riemann hypothesis, they demonstrate that F(y) = O(y^{−1/2}). In particular this function is oscillatory with diverging oscillati
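A short computational sketch (an illustrative aside, not from the article) makes the definition, the identity Σ_{d|n} Λ(d) = log n, and the second Chebyshev function ψ(x) concrete.

# Compute Lambda(n), verify the divisor-sum identity for n = 12, and evaluate psi(100).
import math

def mangoldt(n):
    """Return Lambda(n): log p if n = p^k for a prime p and k >= 1, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0   # n was a pure power of p, or it was not
    return math.log(n)                               # n itself is prime

def psi(x):
    """Second Chebyshev function: sum of Lambda(n) over n <= x."""
    return sum(mangoldt(n) for n in range(1, int(x) + 1))

n = 12
total = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
print(total, math.log(n))        # both approximately 2.4849
print(psi(100))                  # approximately 94.0, close to x = 100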
https://en.wikipedia.org/wiki/Homoserine
Homoserine (also called isothreonine) is an α-amino acid with the chemical formula HO2CCH(NH2)CH2CH2OH. L-Homoserine is not one of the common amino acids encoded by DNA. It differs from the proteinogenic amino acid serine by insertion of an additional -CH2- unit into the backbone. Homoserine, or its lactone form, is the product of a cyanogen bromide cleavage of a peptide by degradation of methionine. Homoserine is an intermediate in the biosynthesis of three essential amino acids: methionine, threonine (an isomer of homoserine), and isoleucine. Its complete biosynthetic pathway includes glycolysis, the tricarboxylic acid (TCA) or citric acid cycle (Krebs cycle), and the aspartate metabolic pathway. It forms by two reductions of aspartic acid via the intermediacy of aspartate semialdehyde. Specifically, the enzyme homoserine dehydrogenase, in association with NADPH, catalyzes a reversible reaction that interconverts L-aspartate-4-semialdehyde to L-homoserine. Then, two other enzymes, homoserine kinase and homoserine O-succinyltransferase use homoserine as a substrate and produce phosphohomoserine and O-succinyl homoserine respectively. Applications Commercially, homoserine can serve as precursor to the synthesis of isobutanol and 1,4-butanediol. Purified homoserine is used in enzyme structural studies. Also, homoserine has played important roles in studies to elucidate peptide synthesis and synthesis of proteoglycan glycopeptides. Bacterial cell lines can make copious amounts of this amino acid. Biosynthesis Homoserine is produced from aspartate via aspartate-4-semialdehyde, which is produced from β-phosphoaspartate. By the action of homoserine dehydrogenases, the semialdehyde is converted to homoserine. L-Homoserine is substrate for homoserine kinase, yielding phosphohomoserine (homoserine-phosphate), which is converted to by threonine synthase to yield L-threonine. Homoserine is converted to O-succinyl homoserine by homoserine O-succinyltransferase, a p
https://en.wikipedia.org/wiki/AP%20site
In biochemistry and molecular genetics, an AP site (apurinic/apyrimidinic site), also known as an abasic site, is a location in DNA (also in RNA but much less likely) that has neither a purine nor a pyrimidine base, either spontaneously or due to DNA damage. It has been estimated that under physiological conditions 10,000 apurinic sites and 500 apyrimidinic may be generated in a cell daily. AP sites can be formed by spontaneous depurination, but also occur as intermediates in base excision repair. In this process, a DNA glycosylase recognizes a damaged base and cleaves the N-glycosidic bond to release the base, leaving an AP site. A variety of glycosylases that recognize different types of damage exist, including oxidized or methylated bases, or uracil in DNA. The AP site can then be cleaved by an AP endonuclease, leaving 3'-hydroxyl and deoxyribose-5-phosphate termini (see DNA structure). In alternative fashion, bifunctional glycosylase-lyases can cleave the AP site, leaving a 5' phosphate adjacent to a 3' α,β-unsaturated aldehyde. Both mechanisms form a single-strand break, which is then repaired by either short-patch or long-patch base excision repair. If left unrepaired, AP sites can lead to mutation during semiconservative replication. They can cause replication fork stalling and are bypassed by translesion synthesis. In E. coli, adenine is preferentially inserted across from AP sites, known as the "A rule". The situation is more complex in higher eukaryotes, with different nucleotides showing a preference depending on the organism and experimental conditions. Formation AP sites form when deoxyribose is cleaved from its nitrogenous base, breaking the glycosidic linkage between the two. This can happen spontaneously, as a result of chemical activity, radiation, or due to enzyme activity. The glycosidic linkages in DNA can be broken via acid-catalyzed hydrolysis. Purine bases can be ejected under weakly acidic conditions, while pyrimidines require stronger a
https://en.wikipedia.org/wiki/Haploinsufficiency
Haploinsufficiency in genetics describes a model of dominant gene action in diploid organisms, in which a single copy of the wild-type allele at a locus in heterozygous combination with a variant allele is insufficient to produce the wild-type phenotype. Haploinsufficiency may arise from a de novo or inherited loss-of-function mutation in the variant allele, such that it yields little or no gene product (often a protein). Although the other, standard allele still produces the standard amount of product, the total product is insufficient to produce the standard phenotype. This heterozygous genotype may result in a non- or sub-standard, deleterious, and (or) disease phenotype. Haploinsufficiency is the standard explanation for dominant deleterious alleles. In the alternative case of haplosufficiency, the loss-of-function allele behaves as above, but the single standard allele in the heterozygous genotype produces sufficient gene product to produce the same, standard phenotype as seen in the homozygote. Haplosufficiency accounts for the typical dominance of the "standard" allele over variant alleles, where the phenotypic identity of genotypes heterozygous and homozygous for the allele defines it as dominant, versus a variant phenotype produced only by the genotype homozygous for the alternative allele, which defines it as recessive. Mechanism The alteration in the gene dosage, which is caused by the loss of a functional allele, is also called allelic insufficiency. Haploinsufficiency in humans About 3,000 human genes cannot tolerate loss of one of the two alleles. An example of this is seen in the case of Williams syndrome, a neurodevelopmental disorder caused by the haploinsufficiency of genes at 7q11.23. The haploinsufficiency is caused by the copy-number variation (CNV) of 28 genes led by the deletion of ~1.6 Mb. These dosage-sensitive genes are vital for human language and constructive cognition. Another example is the haploinsufficiency of telomerase rever
https://en.wikipedia.org/wiki/PC/SC
PC/SC (short for "Personal Computer/Smart Card") is a specification for smart-card integration into computing environments. Microsoft has implemented PC/SC in Microsoft Windows 200x/XP and makes it available under Microsoft Windows NT/9x. A free implementation of PC/SC, PC/SC Lite, is available for Linux and other Unixes; a forked version comes bundled with Mac OS X. Work group Core members Gemalto Infineon Microsoft Toshiba Associate members Advanced Card Systems Alcor Micro Athena Smartcard Solutions Bloombase C3PO S.L. Cherry Electrical Products Cross S&T Inc. Dai Nippon Printing Co., Ltd. Feitian Technologies Kobil Systems GmbH Silitek Nidec Sankyo Corporation O2Micro, Inc. OMNIKEY (HID Global) Precise Biometrics Realtek Semiconductor Corp. Research In Motion Sagem Orga SCM Microsystems Siemens Teridian Semiconductor Corp. See also CT-API, an alternative API External links PC/SC Workgroup Free Implementation (PCSCLite) pcsc-tools free commandline tools for PC/SC Winscard Smart Card API functions in Microsoft Windows XP/2000 SMACADU open source smart card analyzing tools PC/SC Reader List PC/SC-standard readers
https://en.wikipedia.org/wiki/Medical%20optical%20imaging
Medical optical imaging is the use of light as an investigational imaging technique for medical applications, pioneered by American Physical Chemist Britton Chance. Examples include optical microscopy, spectroscopy, endoscopy, scanning laser ophthalmoscopy, laser Doppler imaging, and optical coherence tomography. Because light is an electromagnetic wave, similar phenomena occur in X-rays, microwaves, and radio waves. Optical imaging systems may be divided into diffusive and ballistic imaging systems. A model for photon migration in turbid biological media has been developed by Bonner et al. Such a model can be applied for interpretation data obtained from laser Doppler blood-flow monitors and for designing protocols for therapeutic excitation of tissue chromophores. Diffusive optical imaging Diffuse optical imaging (DOI) is a method of imaging using near-infrared spectroscopy (NIRS) or fluorescence-based methods. When used to create 3D volumetric models of the imaged material DOI is referred to as diffuse optical tomography, whereas 2D imaging methods are classified as diffuse optical topography. The technique has many applications to neuroscience, sports medicine, wound monitoring, and cancer detection. Typically DOI techniques monitor changes in concentrations of oxygenated and deoxygenated hemoglobin and may additionally measure redox states of cytochromes. The technique may also be referred to as diffuse optical tomography (DOT), near infrared optical tomography (NIROT) or fluorescence diffuse optical tomography (FDOT), depending on the usage. In neuroscience, functional measurements made using NIR wavelengths, DOI techniques may classify as functional near infrared spectroscopy (fNIRS). Ballistic optical imaging Ballistic photons are the light photons that travel through a scattering (turbid) medium in a straight line. Also known as ballistic light. If laser pulses are sent through a turbid medium such as fog or body tissue, most of the photons are
https://en.wikipedia.org/wiki/Solar%20neutrino
A solar neutrino is a neutrino originating from nuclear fusion in the Sun's core, and is the most common type of neutrino passing through any source observed on Earth at any particular moment. Neutrinos are elementary particles with extremely small rest mass and a neutral electric charge. They only interact with matter via the weak interaction and gravity, making their detection very difficult. This has led to the now-resolved solar neutrino problem. Much is now known about solar neutrinos, but the research in this field is ongoing. History and background Homestake experiment The timeline of solar neutrinos and their discovery dates back to the 1960s, beginning with the two astrophysicists John N. Bahcall and Raymond Davis Jr. The experiment, known as the Homestake experiment, named after the town in which it was conducted (Homestake, South Dakota), aimed to count the solar neutrinos arriving at Earth. Bahcall, using a solar model he developed, came to the conclusion that the most effective way to study solar neutrinos would be via the chlorine-argon reaction. Using his model, Bahcall was able to calculate the number of neutrinos expected to arrive at Earth from the Sun. Once the theoretical value was determined, the astrophysicists began pursuing experimental confirmation. Davis developed the idea of taking hundreds of thousands of liters of perchloroethylene, a chemical compound made up of carbon and chlorine, and searching for neutrinos using a chlorine-argon detector. The process was conducted very far underground, hence the decision to conduct the experiment in Homestake as the town was home to the Homestake Gold Mine. By conducting the experiment deep underground, Bahcall and Davis were able to avoid cosmic ray interactions which could affect the process and results. The entire experiment lasted several years as it was able to detect only a few chlorine to argon conversions each day, and the first results were not yielded by the team until 1968. To their s
https://en.wikipedia.org/wiki/History%20of%20superconductivity
Superconductivity is the phenomenon of certain materials exhibiting zero electrical resistance and the expulsion of magnetic fields below a characteristic temperature. The history of superconductivity began with Dutch physicist Heike Kamerlingh Onnes's discovery of superconductivity in mercury in 1911. Since then, many other superconducting materials have been discovered and the theory of superconductivity has been developed. These subjects remain active areas of study in the field of condensed matter physics. The study of superconductivity has a fascinating history, with several breakthroughs having dramatically accelerated publication and patenting activity in this field, as shown in the figure on the right and described in detail below. Throughout its 100+ year history, the number of non-patent publications per year about superconductivity has been a factor of 10 larger than the number of patent families, which is characteristic of a technology that has not achieved substantial commercial success (see Technological applications of superconductivity). Exploring ultra-cold phenomena (to 1908) James Dewar initiated research into electrical resistance at low temperatures. Dewar and John Ambrose Fleming predicted that at absolute zero, pure metals would become perfect electromagnetic conductors (though, later, Dewar altered his opinion on the disappearance of resistance, believing that there would always be some resistance). Walther Hermann Nernst developed the third law of thermodynamics and stated that absolute zero was unattainable. Carl von Linde and William Hampson, both commercial researchers, nearly at the same time filed for patents on the Joule–Thomson effect for the liquefaction of gases. Linde's patent was the climax of 20 years of systematic investigation of established facts, using a regenerative counterflow method. Hampson's design was also based on a regenerative method. The combined process became known as the Hampson–Linde liquefaction process. Onne
https://en.wikipedia.org/wiki/Irreversible%20circuit
An irreversible circuit is a circuit whose inputs cannot be reconstructed from its outputs. Such a circuit, of necessity, consumes energy. See also Reversible computing Integrated circuits
https://en.wikipedia.org/wiki/GRIB
GRIB (GRIdded Binary or General Regularly-distributed Information in Binary form) is a concise data format commonly used in meteorology to store historical and forecast weather data. It is standardized by the World Meteorological Organization's Commission for Basic Systems, known under number GRIB FM 92-IX, described in WMO Manual on Codes No.306. Currently there are three versions of GRIB. Version 0 was used to a limited extent by projects such as TOGA, and is no longer in operational use. The first edition (current sub-version is 2) is used operationally worldwide by most meteorological centers, for Numerical Weather Prediction output (NWP). A newer generation has been introduced, known as GRIB second edition, and data is slowly changing over to this format. Some of the second-generation GRIB is used for derived products distributed in the Eumetcast of Meteosat Second Generation. Another example is the NAM (North American Mesoscale) model. Format GRIB files are a collection of self-contained records of 2D data, and the individual records stand alone as meaningful data, with no references to other records or to an overall schema. So collections of GRIB records can be appended to each other or the records separated. Each GRIB record has two components - the part that describes the record (the header), and the actual binary data itself. The data in GRIB-1 are typically converted to integers using scale and offset, and then bit-packed. GRIB-2 also has the possibility of compression. GRIB History GRIB superseded the Aeronautical Data Format (ADF). The World Meteorological Organization (WMO) Commission for Basic Systems (CBS) met in 1985 to create the GRIB (GRIdded Binary) format. The Working Group on Data Management (WGDM) in February 1994, after major changes, approved revision 1 of the GRIB format. GRIB Edition 2 format was approved in 2003 at Geneva. Problems with GRIB There is no way in GRIB to describe a collection of GRIB records Each record is inde
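Because each GRIB message is a self-contained 2D record with its own header, reading a file amounts to iterating over records. The sketch below assumes the third-party pygrib library and a local file named example.grb; both the library choice and the file name are illustrative assumptions, not part of the article.

# Iterate over the self-contained records of a GRIB file (assumes pygrib is installed).
import pygrib

grbs = pygrib.open("example.grb")
for grb in grbs:                       # each message stands alone: header plus packed data
    print(grb.name, grb.values.shape)  # parameter name and the 2D data array it carries
grbs.close()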
https://en.wikipedia.org/wiki/Residual%20entropy
Residual entropy is the difference in entropy between a non-equilibrium state and crystal state of a substance close to absolute zero. This term is used in condensed matter physics to describe the entropy at zero kelvin of a glass or plastic crystal referred to the crystal state, whose entropy is zero according to the third law of thermodynamics. It occurs if a material can exist in many different states when cooled. The most common non-equilibrium state is the vitreous state, glass. A common example is the case of carbon monoxide, which has a very small dipole moment. As the carbon monoxide crystal is cooled to absolute zero, few of the carbon monoxide molecules have enough time to align themselves into a perfect crystal (with all of the carbon monoxide molecules oriented in the same direction). Because of this, the crystal is locked into a state with 2^N different corresponding microstates, giving a residual entropy of S = N k_B ln 2 (R ln 2 per mole), rather than zero. Another example is any amorphous solid (glass). These have residual entropy, because the atom-by-atom microscopic structure can be arranged in a huge number of different ways across a macroscopic system. History One of the first examples of residual entropy was pointed out by Pauling to describe water ice. In water, each oxygen atom is bonded to two hydrogen atoms. However, when water freezes it forms a tetrahedral structure where each oxygen atom has four hydrogen neighbors (due to neighboring water molecules). The hydrogen atoms sitting between the oxygen atoms have some degree of freedom as long as each oxygen atom has two hydrogen atoms that are 'nearby', thus forming the traditional H2O water molecule. However, it turns out that for a large number of water molecules in this configuration, the hydrogen atoms have a large number of possible configurations that meet the 2-in 2-out rule (each oxygen atom must have two 'near' (or 'in') hydrogen atoms, and two far (or 'out') hydrogen atoms). This freedom exists down to absolute zero
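For a sense of scale, the two standard molar estimates connected with these examples can be computed directly. The snippet below is an illustrative aside, not from the article: it evaluates R ln 2 for a crystal with two orientations per molecule (the carbon monoxide case) and Pauling's R ln(3/2) estimate for ice.

# Theoretical residual entropies per mole for CO (two orientations) and ice (Pauling's estimate).
import math

R = 8.314  # gas constant, J/(K mol)
print("CO :", R * math.log(2), "J/(K mol)")      # ~5.76 J/(K mol)
print("ice:", R * math.log(3 / 2), "J/(K mol)")  # ~3.37 J/(K mol)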
https://en.wikipedia.org/wiki/Impredicativity
In mathematics, logic and philosophy of mathematics, something that is impredicative is a self-referencing definition. Roughly speaking, a definition is impredicative if it invokes (mentions or quantifies over) the set being defined, or (more commonly) another set that contains the thing being defined. There is no generally accepted precise definition of what it means to be predicative or impredicative. Authors have given different but related definitions. The opposite of impredicativity is predicativity, which essentially entails building stratified (or ramified) theories where quantification over lower levels results in variables of some new type, distinguished from the lower types that the variable ranges over. A prototypical example is intuitionistic type theory, which retains ramification so as to discard impredicativity. Russell's paradox is a famous example of an impredicative construction—namely the set of all sets that do not contain themselves. The paradox is that such a set cannot exist: if it existed, the question could be asked whether it contains itself or not — if it does, then by definition it should not, and if it does not, then by definition it should. The greatest lower bound of a set X, glb(X), also has an impredicative definition: y = glb(X) if and only if y is less than or equal to every element x of X, and any z that is less than or equal to all elements of X is less than or equal to y. This definition quantifies over the set (potentially infinite, depending on the order in question) whose members are the lower bounds of X, one of which is the glb itself. Hence predicativism would reject this definition. History The terms "predicative" and "impredicative" were introduced by , though the meaning has changed a little since then. Solomon Feferman provides a historical review of predicativity, connecting it to current outstanding research problems. The vicious circle principle was suggested by Henri Poincaré (1905-6, 1908) and Bertrand Russell in the wake of
https://en.wikipedia.org/wiki/Didiereaceae
Didiereaceae is a family of flowering plants found in continental Africa and Madagascar. It contains 20 species classified in three subfamilies and six genera. Species of the family are succulent plants, growing in sub-arid to arid habitats. Several are known as ornamental plants in specialist succulent collections. The subfamily Didiereoideae is endemic to the southwest of Madagascar, where the species are characteristic elements of the spiny thickets. Systematics The family was long considered entirely endemic to Madagascar until the genera Calyptrotheca, Ceraria, and Portulacaria from the African mainland were included. Molecular phylogenetic analysis confirmed the monophyly of the family and its three subfamilies: The family is closely related to the New World family Cactaceae (cacti), sufficiently closely so that species of Didiereaceae can be grafted successfully on some cacti. Calyptrothecoideae Contains only one genus, Calyptrotheca, with two species found in tropical East Africa. Didiereoideae This subfamily is endemic to Madagascar, where it is found in the spiny thickets of the dry southwest. The plants are spiny succulent shrubs and trees from 2–20 m tall, with thick water-storing stems and leaves that are deciduous in the long dry season. All of the species except Alluaudiopsis have a distinct youth form. They start as small procumbent shrubs but eventually a dominant stem is produced that becomes a trunk. The trunk later branches forming a crown and the basal branches die off. All species are dioecious (Decarya female-dioecious). The plants have different long-shoots and short-shoots (brachyblasts). Long-shoot leaves are soon deciduous, but brachyblasts form in the leaf axils and from them grow small leaves that appear singly or in pairs and are accompanied by conical spines (much like the areoles found in cacti). The flowers are unisexual (except from Decarya) and radially symmetric, made up of four tepals with two basal bracts. Flowers rarely
https://en.wikipedia.org/wiki/Crookes%20tube
A Crookes tube (also Crookes–Hittorf tube) is an early experimental electrical discharge tube, with partial vacuum, invented by English physicist William Crookes and others around 1869-1875, in which cathode rays, streams of electrons, were discovered. Developed from the earlier Geissler tube, the Crookes tube consists of a partially evacuated glass bulb of various shapes, with two metal electrodes, the cathode and the anode, one at either end. When a high voltage is applied between the electrodes, cathode rays (electrons) are projected in straight lines from the cathode. It was used by Crookes, Johann Hittorf, Julius Plücker, Eugen Goldstein, Heinrich Hertz, Philipp Lenard, Kristian Birkeland and others to discover the properties of cathode rays, culminating in J.J. Thomson's 1897 identification of cathode rays as negatively charged particles, which were later named electrons. Crookes tubes are now used only for demonstrating cathode rays. Wilhelm Röntgen discovered X-rays using the Crookes tube in 1895. The term Crookes tube is also used for the first generation, cold cathode X-ray tubes, which evolved from the experimental Crookes tubes and were used until about 1920. Operation Crookes tubes are cold cathode tubes, meaning that they do not have a heated filament in them that releases electrons as the later electronic vacuum tubes usually do. Instead, electrons are generated by the ionization of the residual air by a high DC voltage (from a few kilovolts to about 100 kilovolts) applied between the cathode and anode electrodes in the tube, usually by an induction coil (a "Ruhmkorff coil"). The Crookes tubes require a small amount of air in them to function, from about 10−6 to 5×10−8 atmosphere (7×10−4 - 4×10−5 torr or 0.1-0.006 pascal). When high voltage is applied to the tube, the electric field accelerates the small number of electrically charged ions and free electrons always present in the gas, created by natural processes like photoionization and radioac
https://en.wikipedia.org/wiki/Panofsky%20Prize
The Panofsky Prize in Experimental Particle Physics is an annual prize of the American Physical Society. It is given to recognize and encourage outstanding achievements in experimental particle physics, and is open to scientists of any nation. It was established in 1985 by friends of Wolfgang K. H. Panofsky and by the Division of Particles and Fields of the American Physical Society. Panofsky was a physics professor at Stanford University and the first director of the Stanford Linear Accelerator Center (SLAC). Several of the prize winners have subsequently won the Nobel Prize in Physics. As of 2021, the prize included a $10,000 award. Recipients The names, citations, and short biographies for Panofsky Prize winners are posted by the American Physical Society. 2023: B. Lee Roberts, William M. Morse 2022: Byron G. Lundberg, Kimio Niwa, Regina Abby Rameika, Vittorio Paolone 2021: Edward Kearns, 2020: Wesley Smith 2019: Sheldon Leslie Stone 2018: Lawrence Sulak 2017: Tejinder Virdee, Michel Della Negra, Peter Jenni 2016: David Hitlin, , Jonathan Dorfan, 2015: Stanley Wojcicki 2014: Kam-Biu Luk, Wang Yifang 2013: Blas Cabrera Navarro, Bernard Sadoulet 2012: 2011: , , 2010: Eugene Beier 2009: , 2008: , Pierre Sokolsky 2007: Bruce Winstein, , 2006: , Nigel Lockyer, 2005: Piermaria J. Oddone 2004: Arie Bodek 2003: William J. Willis 2002: Masatoshi Koshiba, Takaaki Kajita, Yoji Totsuka 2001: Paul Grannis 2000: Martin Breidenbach 1999: 1998: David Robert Nygren 1997: , 1996: Gail G. Hanson, Roy Frederick Schwitters 1995: Frank J. Sciulli 1994: , 1993: , Nicholas P. Samios, 1992: Raymond Davis, Jr. and Frederick Reines 1991: Gerson Goldhaber and 1990: Michael S. Witherell 1989: Henry W. Kendall, Richard E. Taylor, Jerome I. Friedman 1988: Charles Y. Prescott See also List of physics awards
https://en.wikipedia.org/wiki/Morphant
An organism which has been treated with a morpholino antisense oligo to temporarily knock down expression of a targeted gene is called a morphant. Background This term was coined by Prof. Steve Ekker to describe the zebrafish with which he was experimenting; by knocking down embryonic gene expression using Morpholinos, Prof. Ekker "phenocopied" known zebrafish mutations, that is, he raised embryos that had the same morphological phenotype as embryonic zebrafish with specific gene mutations. Prof. Ekker's papers and presentations describing morphant phenocopies of mutant phenotypes, in combination with Prof. Janet Heasman's earlier work with Morpholinos in Xenopus embryos, led to rapid adoption of Morpholino technology by the developmental biology community.
https://en.wikipedia.org/wiki/Conformal%20anomaly
A conformal anomaly, scale anomaly, trace anomaly or Weyl anomaly is an anomaly, i.e. a quantum phenomenon that breaks the conformal symmetry of the classical theory. In quantum field theory when we set ℏ to zero we have only Feynman tree diagrams, which is a "classical" theory (equivalent to the Fredholm formulation of a classical field theory). One-loop (N-loop) Feynman diagrams are proportional to ℏ (ℏ^N). If a current J^μ is conserved classically (∂_μ J^μ = 0) but develops a divergence at loop level in quantum field theory (∂_μ J^μ ≠ 0), we say there is an "anomaly." A famous example is the axial current anomaly where massless fermions will have a classically conserved axial current, but which develops a nonzero divergence in the presence of gauge fields. A scale invariant theory, one in which there are no mass scales, will have a conserved Noether current called the "scale current." This is derived by performing scale transformations on the coordinates of space-time. The divergence of the scale current is then the trace of the stress tensor. In the absence of any mass scales the stress tensor trace vanishes (T^μ_μ = 0), hence the current is "classically conserved" and the theory is classically scale invariant. However, at loop level the scale current can develop a nonzero divergence. This is called the "scale anomaly" or "trace anomaly" and represents the generation of mass by quantum mechanics. It is related to the renormalization group, or the "running of coupling constants," when they are viewed at different mass scales. While this can be formulated without reference to gravity, it becomes more powerful when general relativity is considered. A classically conformal theory with arbitrary background metric has an action that is invariant under rescalings of the background metric and other matter fields, called Weyl transformations. Note that if we rescale the coordinates this is a general coordinate transformation, and merges with general covariance, the exact symmetry of general r
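As a sketch of the relation stated above between the scale current and the trace of the stress tensor, the following are the standard textbook expressions (quoted for orientation, not taken from this passage); the last line shows the schematic form the trace anomaly takes in a gauge theory, where it is governed by the beta function.

D^\mu = x_\nu T^{\mu\nu}, \qquad \partial_\mu D^\mu = T^\mu{}_\mu \quad \text{(using } \partial_\mu T^{\mu\nu} = 0\text{)}
T^\mu{}_\mu \;\overset{\text{1-loop}}{\sim}\; \frac{\beta(g)}{2g}\, F^{a\,\mu\nu} F^a_{\mu\nu} + \cdots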
https://en.wikipedia.org/wiki/Critical%20dimension
In the renormalization group analysis of phase transitions in physics, a critical dimension is the dimensionality of space at which the character of the phase transition changes. Below the lower critical dimension there is no phase transition. Above the upper critical dimension the critical exponents of the theory become the same as those in mean field theory. An elegant criterion to obtain the critical dimension within mean field theory is due to V. Ginzburg. Since the renormalization group sets up a relation between a phase transition and a quantum field theory, this has implications for the latter and for our larger understanding of renormalization in general. Above the upper critical dimension, the quantum field theory which belongs to the model of the phase transition is a free field theory. Below the lower critical dimension, there is no field theory corresponding to the model. In the context of string theory the meaning is more restricted: the critical dimension is the dimension at which string theory is consistent assuming a constant dilaton background without additional confounding permutations from background radiation effects. The precise number may be determined by the required cancellation of conformal anomaly on the worldsheet; it is 26 for the bosonic string theory and 10 for superstring theory. Upper critical dimension in field theory Determining the upper critical dimension of a field theory is a matter of linear algebra. It is worthwhile to formalize the procedure because it yields the lowest-order approximation for scaling and essential input for the renormalization group. It also reveals conditions to have a critical model in the first place. A Lagrangian may be written as a sum of terms, each consisting of an integral over a monomial of the coordinates x and fields φ(x). Examples are the standard φ^4 model and the isotropic Lifshitz tricritical point. This simple structure may be compatible with a scale
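The linear-algebra (power-counting) procedure mentioned above can be illustrated with the simplest case; the worked lines below are standard dimensional analysis for a φ^n interaction with a canonical kinetic term, included here as a sketch rather than as part of the original text.

[\phi] = \tfrac{d-2}{2}, \qquad [g_n] = d - n\,\tfrac{d-2}{2}
% the coupling of a \phi^n interaction becomes dimensionless (marginal) when
d_c = \frac{2n}{n-2} \qquad\text{e.g. } n=4 \Rightarrow d_c = 4 \text{ (standard } \phi^4 \text{ model)}, \quad n=6 \Rightarrow d_c = 3 \text{ (tricritical point)}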
https://en.wikipedia.org/wiki/Self-energy
In quantum field theory, the energy that a particle has as a result of changes that it causes in its environment defines the self-energy Σ, and represents the contribution to the particle's energy, or effective mass, due to interactions between the particle and its environment. In electrostatics, the energy required to assemble the charge distribution takes the form of self-energy by bringing in the constituent charges from infinity, where the electric force goes to zero. In a condensed matter context relevant to electrons moving in a material, the self-energy represents the potential felt by the electron due to the surrounding medium's interactions with it. Since electrons repel each other, a moving electron polarizes, or displaces, the electrons in its vicinity, and this in turn changes the potential that the moving electron feels. These are examples of self-energy. Characteristics Mathematically, this energy is equal to the so-called on mass shell value of the proper self-energy operator (or proper mass operator) in the momentum-energy representation (more precisely, to ℏ times this value). In this, or other representations (such as the space-time representation), the self-energy is pictorially (and economically) represented by means of Feynman diagrams, such as the one described below. In this particular diagram, the three arrowed straight lines represent particles, or particle propagators, and the wavy line a particle-particle interaction; removing (or amputating) the left-most and the right-most straight lines in such a diagram (these so-called external lines correspond to prescribed values for, for instance, momentum and energy, or four-momentum), one retains a contribution to the self-energy operator (in, for instance, the momentum-energy representation). Using a small number of simple rules, each Feynman diagram can be readily expressed in its corresponding algebraic form. In general, the on-the-mass-shell value of the self-energy operator in the momentum-
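For orientation, the way the self-energy enters the full propagator is usually summarized by the Dyson equation; the lines below are the standard form for a scalar particle and are included as a sketch, not as part of the original text.

G(p) = G_0(p) + G_0(p)\,\Sigma(p)\,G(p) \quad\Longrightarrow\quad G(p) = \frac{1}{G_0(p)^{-1} - \Sigma(p)}
% with the bare propagator G_0 = 1/(p^2 - m_0^2), the dressed propagator becomes
G(p) = \frac{1}{p^2 - m_0^2 - \Sigma(p^2)}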
https://en.wikipedia.org/wiki/Atari%20ST%20BASIC
Atari ST BASIC (or ST Basic) was the first dialect of BASIC that was produced for the Atari ST line of computers. This BASIC interpreter was bundled with all new STs in the early years of the ST's lifespan, and quickly became the standard BASIC for that platform. However, many users disliked it, and improved dialects of BASIC quickly came out to replace it. Development Atari Corporation commissioned MetaComCo to write a version of BASIC that would take advantage of the GEM environment on the Atari ST. This was based on a version already written for Digital Research called DR-Basic, which was bundled with DR's CP/M-86 operating system. The result was called ST BASIC. At the time the ST was launched, ST BASIC was bundled with all new STs. A further port of the same language called ABasiC ended up being supplied for a time with the Amiga, but Commodore quickly replaced it with the Microsoft-developed AmigaBASIC. Interface The user interface consists of four windows: EDIT, for entering source code LIST, where the source code can be browsed COMMAND, where instructions are entered and immediately executed OUTPUT The windows can only be selected with the mouse. Bugs ST BASIC has many bugs. Compute! in September 1987 reported on one flaw that it described as "among the worst BASIC bugs of all time". Typing x = 18.9 results in function not yet done System error #%N, please restart Similar commands, such as x = 39.8 or x = 4.725, crash the computer; the magazine described the results of the last command as "as bad a crash as you can get on the ST without seeing the machine rip free from its cables, drag itself to the edge of the desk, and leap into the trash bin". After citing other flaws (such as ? 257 * 257 and ? 257 ^ 2 not being equivalent) the magazine recommended "avoid[ing] ST BASIC for serious programming". Regarding reports that MetaComCo was "one bug away" from releasing a long-delayed update to the language, it jokingly wondered "whether Atari has only one m
https://en.wikipedia.org/wiki/Name%20Service%20Switch
The Name Service Switch (NSS) connects the computer with a variety of sources of common configuration databases and name resolution mechanisms. These sources include local operating system files (such as /etc/passwd, /etc/group, and /etc/hosts), the Domain Name System (DNS), the Network Information Service (NIS, NIS+), and LDAP. This operating system mechanism, used in billions of computers, including all Unix-like operating systems, is indispensable to functioning as part of the networked organization and the Internet. Among other things, it is invoked every time a computer user clicks on or types a website address in the web browser or responds to the password challenge to be granted access to the computer and the Internet. nsswitch.conf A system administrator usually configures the operating system's name services using the file /etc/nsswitch.conf. This file lists databases (such as passwd, shadow and group), and one or more sources for obtaining that information. Examples for sources are files for local files, ldap for the Lightweight Directory Access Protocol, nis for the Network Information Service, nisplus for NIS+, dns for the Domain Name System (DNS), and wins for Windows Internet Name Service. The nsswitch.conf file has line entries for each service consisting of a database name in the first field, terminated by a colon, and a list of possible source databases in the second field. A typical file might look like:

passwd: files ldap
shadow: files
group: files ldap
hosts: dns nis files
ethers: files nis
netmasks: files nis
networks: files nis
protocols: files nis
rpc: files nis
services: files nis
automount: files
aliases: files

The order of the source databases determines the order in which the NSS will attempt to look up those sources to resolve queries for the specified service. A bracketed list of criteria may be specified following each source name to govern the conditions under which the NSS will proceed to querying the next source based on the preceding
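As an example of the bracketed criteria mentioned at the end of this passage, a commonly seen hosts entry (a typical illustration, not a line quoted from this article) is:

hosts: dns [NOTFOUND=return] files

Here DNS is consulted first; if DNS answers authoritatively that the name does not exist, the lookup stops immediately, and the local files source is only tried in the remaining cases, for example when the DNS service itself is unavailable.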
https://en.wikipedia.org/wiki/Windows%20Fundamentals%20for%20Legacy%20PCs
Windows Fundamentals for Legacy PCs ("WinFLP") is a thin client release of the Windows NT operating system developed by Microsoft and optimized for older, less powerful hardware. It was released on July 8, 2006, nearly two years after its Windows XP SP2 counterpart was released in August 2004, and is not marketed as a full-fledged general purpose operating system, although it is functionally able to perform most of the tasks generally associated with one. It includes only certain functionality for local workloads such as security, management, document viewing related tasks and the .NET Framework. It is designed to work as a client–server solution with RDP clients or other third party clients such as Citrix ICA. Windows Fundamentals for Legacy PCs reached end of support on April 8, 2014 along with most other Windows XP editions. History Windows Fundamentals for Legacy PCs was originally announced with the code name "Eiger" on 12 May 2005. ("Mönch" was announced as a potential follow-up project at about the same time.) The name "Windows Fundamentals for Legacy PCs" appeared in a press release in September 2005, when it was introduced as "formerly code-named “Eiger”" and described as "an exclusive benefit to SA [Microsoft Software Assurance] customers". A Gartner evaluation from April 2006 stated that: The RTM version of Windows Fundamentals for Legacy PCs, which was released on July 8, 2006, was built from the Windows XP Service Pack 2 codebase. The release was announced to the press on July 12, 2006. Because Windows Fundamentals for Legacy PCs comes from a codebase of 32-bit Windows XP, its service packs are also developed separately. For the same reason, Service Pack 3 for Windows Fundamentals for Legacy PCs, released on October 7, 2008, is the same as Service Pack 3 for 32-bit (x86) editions of Windows XP. In fact, due to the earlier release date of the 32-bit version, many of the key features introduced by Service Pack 2 for 32-bit (x86) editions of Windows XP
https://en.wikipedia.org/wiki/Halo%20nucleus
In nuclear physics, an atomic nucleus is called a halo nucleus or is said to have a nuclear halo when it has a core nucleus surrounded by a "halo" of orbiting protons or neutrons, which makes the radius of the nucleus appreciably larger than that predicted by the liquid drop model. Halo nuclei form at the extreme edges of the table of nuclides — the neutron drip line and proton drip line — and have short half-lives, measured in milliseconds. These nuclei are studied shortly after their formation in an ion beam. Typically, an atomic nucleus is a tightly bound group of protons and neutrons. However, in some nuclides, there is an overabundance of one species of nucleon. In some of these cases, a nuclear core and a halo will form. Often, this property may be detected in scattering experiments, which show the nucleus to be much larger than the otherwise expected value. Normally, the radius of the nucleus (which sets its classical cross-section) is proportional to the cube root of its mass number, as would be the case for a sphere of constant density. Specifically, for a nucleus of mass number A, the radius r is (approximately) r = r0 A^(1/3), where r0 is 1.2 fm. One example of a halo nucleus is 11Li, which has a half-life of 8.6 ms. It contains a core of 3 protons and 6 neutrons, and a halo of two independent and loosely bound neutrons. It decays into 11Be by the emission of an antineutrino and an electron. Its mass radius of 3.16 fm is close to that of 32S or, even more impressively, of 208Pb, both much heavier nuclei. Experimental confirmation of nuclear halos is recent and ongoing. Additional candidates are suspected. Several nuclides including 9B, 13N, and 15N are calculated to have a halo in the excited state but not in the ground state. List of known nuclides with nuclear halo Nuclei that have a neutron halo include 11Be and 19C. A two-neutron halo is exhibited by 6He, 11Li, 17B, 19B and 22C. Two-neutron halo nuclei break into three fragments and are called Borromean because
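Plugging 11Li into the radius formula above shows how striking the halo is; the arithmetic below is a simple check, not a value taken from the article.

r \approx r_0 A^{1/3} = 1.2\ \mathrm{fm} \times 11^{1/3} \approx 1.2 \times 2.22\ \mathrm{fm} \approx 2.7\ \mathrm{fm}
% compared with the measured matter radius of about 3.16 fm quoted above:
% the two halo neutrons inflate the radius well beyond the constant-density estimate.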
https://en.wikipedia.org/wiki/Null%20dust%20solution
In mathematical physics, a null dust solution (sometimes called a null fluid) is a Lorentzian manifold in which the Einstein tensor is null. Such a spacetime can be interpreted as an exact solution of Einstein's field equation, in which the only mass–energy present in the spacetime is due to some kind of massless radiation. Mathematical definition By definition, the Einstein tensor of a null dust solution has the form G^{ab} = 8π Φ k^a k^b, where k^a is a null vector field. This definition makes sense purely geometrically, but if we place a stress–energy tensor on our spacetime of the form T^{ab} = Φ k^a k^b, then Einstein's field equation is satisfied, and such a stress–energy tensor has a clear physical interpretation in terms of massless radiation. The vector field k^a specifies the direction in which the radiation is moving; the scalar multiplier Φ specifies its intensity. Physical interpretation Physically speaking, a null dust describes either gravitational radiation, or some kind of nongravitational radiation which is described by a relativistic classical field theory (such as electromagnetic radiation), or a combination of these two. Null dusts include vacuum solutions as a special case. Phenomena which can be modeled by null dust solutions include: a beam of neutrinos assumed for simplicity to be massless (treated according to classical physics), a very high-frequency electromagnetic wave, a beam of incoherent electromagnetic radiation. In particular, a plane wave of incoherent electromagnetic radiation is a linear superposition of plane waves, all moving in the same direction but having randomly chosen phases and frequencies. (Even though the Einstein field equation is nonlinear, a linear superposition of comoving plane waves is possible.) Here, each electromagnetic plane wave has a well defined frequency and phase, but the superposition does not. Individual electromagnetic plane waves are modeled by null electrovacuum solutions, while an incoherent mixture can be modeled by a null dust.
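A minimal concrete illustration of these definitions (a standard special case, with the local frame and signature −+++ assumed here for convenience): for radiation moving in the +z direction of a local Lorentz frame, take k^a = (1, 0, 0, 1), which is null.

k_a k^a = -(1)^2 + (1)^2 = 0, \qquad T^{ab} = \Phi\, k^a k^b
T^{00} = \Phi \ \text{(energy density)}, \qquad T^{0z} = \Phi \ \text{(energy flux along } +z\text{)}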
https://en.wikipedia.org/wiki/Mesoporous%20material
A mesoporous material (or super nanoporous ) is a nanoporous material containing pores with diameters between 2 and 50 nm, according to IUPAC nomenclature. For comparison, IUPAC defines microporous material as a material having pores smaller than 2 nm in diameter and macroporous material as a material having pores larger than 50 nm in diameter. Typical mesoporous materials include some kinds of silica and alumina that have similarly-sized mesopores. Mesoporous oxides of niobium, tantalum, titanium, zirconium, cerium and tin have also been reported. However, the flagship of mesoporous materials is mesoporous carbon, which has direct applications in energy storage devices. Mesoporous carbon has porosity within the mesopore range and this significantly increases the specific surface area. Another very common mesoporous material is activated carbon which is typically composed of a carbon framework with both mesoporosity and microporosity depending on the conditions under which it was synthesized. According to IUPAC, a mesoporous material can be disordered or ordered in a mesostructure. In crystalline inorganic materials, mesoporous structure noticeably limits the number of lattice units, and this significantly changes the solid-state chemistry. For example, the battery performance of mesoporous electroactive materials is significantly different from that of their bulk structure. A procedure for producing mesoporous materials (silica) was patented around 1970, and methods based on the Stöber process from 1968 were still in use in 2015. It went almost unnoticed and was reproduced in 1997. Mesoporous silica nanoparticles (MSNs) were independently synthesized in 1990 by researchers in Japan. They were later produced also at Mobil Corporation laboratories and named Mobil Crystalline Materials, or MCM-41. The initial synthetic methods did not allow to control the quality of the secondary level of porosity generated. It was only by employing quaternary ammonium cations an
https://en.wikipedia.org/wiki/Brinkmann%20coordinates
Brinkmann coordinates are a particular coordinate system for a spacetime belonging to the family of pp-wave metrics. They are named for Hans Brinkmann. In terms of these coordinates, the metric tensor can be written as ds² = H(u,x,y) du² + 2 du dv + dx² + dy², where ∂_v, the coordinate vector field dual to the covector field dv, is a null vector field. Indeed, geometrically speaking, it is a null geodesic congruence with vanishing optical scalars. Physically speaking, it serves as the wave vector defining the direction of propagation for the pp-wave. The coordinate vector field ∂_u can be spacelike, null, or timelike at a given event in the spacetime, depending upon the sign of H(u,x,y) at that event. The coordinate vector fields ∂_x and ∂_y are both spacelike vector fields. Each surface of constant u can be thought of as a wavefront. In discussions of exact solutions to the Einstein field equation, many authors fail to specify the intended range of the coordinate variables u, v, x, y. Here the ranges should be chosen to allow for the possibility that the pp-wave develops a null curvature singularity.
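As a standard example (not drawn from this passage), an exact vacuum plane gravitational wave in Brinkmann form has a metric function H that is quadratic in the transverse coordinates and satisfies the two-dimensional Laplace equation in x and y:

H(u,x,y) = A_{+}(u)\,(x^2 - y^2) + 2 A_{\times}(u)\, x y, \qquad \partial_x^2 H + \partial_y^2 H = 0
% A_+ and A_x are the two polarization profiles of the wave, free functions of u.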
https://en.wikipedia.org/wiki/Russian%20tortoise
The Russian tortoise (Testudo horsfieldii), also commonly known as the Afghan tortoise, the Central Asian tortoise, Horsfield's tortoise, four-clawed tortoise, and the (Russian) steppe tortoise, as well as the "Four-Toed Tortoise", is a threatened species of tortoise in the family Testudinidae. The species is endemic to Central Asia from the Caspian Sea south through Iran, Pakistan and Afghanistan, and east across Kazakhstan to Xinjiang, China. Human activities in its native habitat contribute to its threatened status. Etymology Both the specific name, horsfieldii, and the common name "Horsfield's tortoise" are in honor of the American naturalist Thomas Horsfield. He worked in Java (1796) and for the East India Company, and later became a friend of Sir Thomas Raffles. Systematics This species is traditionally placed in Testudo. Due to distinctly different morphological characteristics, the monotypic genus Agrionemys was proposed for it in 1966, and was accepted for several decades, although not unanimously. DNA sequence analysis generally concurred, but not too robustly so. However, in 2021, it was again reclassified in Testudo by the Turtle Taxonomy Working Group and the Reptile Database, with Agrionemys being relegated to a distinct subgenus to which T. horsfieldii belongs. The Turtle Taxonomy Working Group lists five separate subspecies of Russian tortoise, but they are not widely accepted by taxonomists: T. h. bogdanovi Chkhikvadze, 2008 – southern Kyrgyzstan, Tajikistan, Uzbekistan, Turkmenistan T. h. horsfieldii (Gray, 1844) – Afghanistan/Pakistan and southern Central Asia T. h. kazachstanica Chkhikvadze, 1988 – Kazakhstan/Karakalpakstan T. h. kuznetzovi Chkhikvadze, Ataev, Shammakov & Zatoka, 2009 – northern Turkmenistan, southern Uzbekistan T. h. rustamovi Chkhikvadze, Amiranschwili & Atajew, 1990 – southwestern Turkmenistan Description The Russian tortoise is a small tortoise species, with a size range of . Females grow slightly larger () to accommod
https://en.wikipedia.org/wiki/Exile%20%281988%20video%20game%20series%29
is an action role-playing video game series developed by Telenet Japan. The first two games in the series, XZR and XZR II were both released in Japan in 1988, with versions available for the NEC PC-8801, NEC PC-9801, MSX2 and the X1 turbo (for the first game only). In 1991, a remake of XZR II simply titled Exile was released for the PC Engine and Mega Drive. These versions were both released in North America the following year, with Working Designs handling the localization for the TurboGrafx-CD version, while Renovation Products published the Genesis version. A sequel exclusive to the Super CD-ROM2 format, titled Exile: Wicked Phenomenon, was released in 1992, which was also localized by Working Designs for the North American market. The Exile series centers on Sadler, a Syrian Assassin, who is the main character of each game. The original computer versions were notorious for featuring various references to religious historical figures, modern political leaders, iconography, drugs, and time-traveling assassins, although some of these aspects were considerably toned down or omitted in the later console games, with the English versions rewriting all the historical religious organizations into fictional groups. Games XZR: Hakai no Gūzō , the first game in the series, was originally released for the NEC PC-8801 in July 1988. It was subsequently released for the MSX2 and NEC PC-9801 on August and for the X1 turbo on September. The gameplay included action-platform elements, switching between an overhead perspective and side-scrolling sections. The plot of the original XZR has been compared to the later Assassin's Creed video game series. The soundtrack for the PC-8801 version was composed by Yujiroh, Shinobu Ogawa and Tenpei Sato. The game centers on Sadler, a Syrian Assassin (a Shia Islamic sect) who is on a journey to kill the Caliph. The intro sequence briefly covers the history of the Middle East from 622 CE, the first year of the Islamic calendar, including a
https://en.wikipedia.org/wiki/Epigram%20%28programming%20language%29
Epigram is a functional programming language with dependent types, and the integrated development environment (IDE) usually packaged with the language. Epigram's type system is strong enough to express program specifications. The goal is to support a smooth transition from ordinary programming to integrated programs and proofs whose correctness can be checked and certified by the compiler. Epigram exploits the Curry–Howard correspondence, also termed the propositions as types principle, and is based on intuitionistic type theory. The Epigram prototype was implemented by Conor McBride based on joint work with James McKinna. Its development is continued by the Epigram group in Nottingham, Durham, St Andrews, and Royal Holloway, University of London in the United Kingdom (UK). The current experimental implementation of the Epigram system is freely available together with a user manual, a tutorial and some background material. The system has been used under Linux, Windows, and macOS. It is currently unmaintained, and version 2, which was intended to implement Observational Type Theory, was never officially released but exists in GitHub. Syntax Epigram uses a two-dimensional, natural deduction style syntax, with versions in LaTeX and ASCII. Here are some examples from The Epigram Tutorial: Examples The natural numbers The following declaration defines the natural numbers: The declaration says that Nat is a type with kind * (i.e., it is a simple type) and two constructors: zero and suc. The constructor suc takes a single Nat argument and returns a Nat. This is equivalent to the Haskell declaration "data Nat = Zero | Suc Nat". In LaTeX, the code is displayed as: The horizontal-line notation can be read as "assuming (what is on the top) is true, we can infer that (what is on the bottom) is true." For example, "assuming n is of type Nat, then suc n is of type Nat." If nothing is on the top, then the bottom statement is always true: "zero is of type Nat (in all case
https://en.wikipedia.org/wiki/Hook%20above
In typesetting, the hook above () is a diacritic mark placed on top of vowels in the Vietnamese alphabet. In shape it looks like a tiny question mark without the dot underneath, or a tiny glottal stop (ʔ). For example, a capital A with a hook is "Ả", and a lower case "u" with a hook is "ủ". The hook is usually written to the right of the circumflex in conventional Vietnamese orthography. If Vietnamese characters are unavailable, it is often replaced by a question mark after the vowel (VIQR encoding). This diacritic functions as a tone marker, indicating a "mid falling" tone (): which is "dipping" (˨˩˥) in Southern Vietnamese or "falling" (˧˩) in Northern Vietnamese; see Vietnamese language § Regional variation: Tones. The Southern "dipping" tone is similar to the questioning intonation in English. The hook above can be used as a tone marker, but is not regarded as part of the alphabet. Letters with hook above Unicode Apart from precomposed characters, in multiple scripts, the combining diacritical mark is encoded at See also Horn (diacritic) () Hook (diacritic) External links More information for typographers Latin-script diacritics Vietnamese language Vietnamese alphabets
https://en.wikipedia.org/wiki/Swivel
A swivel is a connection that allows the connected object, such as a gun, chair, swivel caster, or an anchor rode to rotate horizontally or vertically. Swivel designs A common design for a swivel is a cylindrical rod that can turn freely within a support structure. The rod is usually prevented from slipping out by a nut, washer or thickening of the rod. The device can be attached to the ends of the rod or the center. Another common design is a sphere that is able to rotate within a support structure. The device is attached to the sphere. A third design is a hollow cylindrical rod that has a rod that is slightly smaller than its inside diameter inside of it. They are prevented from coming apart by flanges. The device may be attached to either end. A swivel joint for a pipe is often a threaded connection in between which at least one of the pipes is curved, often at an angle of 45 or 90 degrees. The connection is tightened enough to be water- or air-tight and then tightened further so that it is in the correct position. Anchor rode swivel Swivels are also used in the nautical sector as an element of the anchor rode and in a boat mooring systems. With yachts, the swivel is most commonly used between the anchor and chain. There is a school of thought that anchor swivels should not be connected to the anchor itself, but should be somewhere in the chain rode. The anchor swivel is expected to fulfill two purposes: If the boat swings in a circle the chain may become twisted and the swivel may alleviate this problem. If the anchor comes up turned around, some swivels may right it. Concerns The biggest concern about anchor swivels is that they might introduce a weak link to the rode. With most swivels the shaft is nice and tidily embedded in the other half of the swivel as in the example of the stainless steel anchor swivel shown here. When used in marine applications, and worse in tropical climates, this is a cause for corrosion, even in stainless steel. The chr
https://en.wikipedia.org/wiki/Scramdisk
Scramdisk is a free on-the-fly encryption program for Windows 95, Windows 98, and Windows Me. A non-free version was also available for Windows NT. The original Scramdisk is no longer maintained; its author, Shaun Hollingworth, joined Paul Le Roux (the author of E4M) to produce Scramdisk's commercial successor, DriveCrypt. The author of Scramdisk provided a driver for Windows 9x, and the author of E4M provided a driver for Windows NT, enabling cross-platform versions of both programs. There is a new project called Scramdisk 4 Linux which provides access to Scramdisk and TrueCrypt containers. Older versions of TrueCrypt included support for Scramdisk. Licensing Although Scramdisk's source code is still available, it's stated that it was only released and licensed for private study and not for further development. However, because it contains an implementation of the MISTY1 Encryption Algorithm (by Hironobu Suzuki, a.k.a. H2NP) licensed under the GNU GPL Version 2, it is in violation of the GPL. See also Disk encryption Disk encryption software Comparison of disk encryption software
https://en.wikipedia.org/wiki/Network%20tap
A network tap is a system that monitors events on a local network. A tap is typically a dedicated hardware device, which provides a way to access the data flowing across a computer network. The network tap has (at least) three ports: an A port, a B port, and a monitor port. A tap inserted between A and B passes all traffic (send and receive data streams) through unimpeded in real time, but also copies that same data to its monitor port, enabling a third party to listen. Network taps are commonly used for network intrusion detection systems, VoIP recording, network probes, RMON probes, packet sniffers, and other monitoring and collection devices and software that require access to a network segment. Taps are used in security applications because they are non-obtrusive, are not detectable on the network (having no physical or logical address), can deal with full-duplex and non-shared networks, and will usually pass through or bypass traffic even if the tap stops working or loses power. Terminology The term network tap is analogous to phone tap or vampire tap. Some vendors define TAP as an acronym for test access point or terminal access point; however, those are backronyms. The monitored traffic is sometimes referred to as the pass-through traffic, while the ports that are used for monitoring are the monitor ports. There may also be an aggregation port for full-duplex traffic, wherein the A traffic is aggregated with the B traffic, resulting in one stream of data for monitoring the full-duplex communication. The packets must be aligned into a single stream using a time-of-arrival algorithm. Vendors will tend to use terms in their marketing such as breakout, passive, aggregating, regeneration, bypass, active, inline power, and others; Unfortunately, vendors do not use such terms consistently. Before buying any product it is important to understand the available features, and check with vendors or read the product literature closely to figure out how marketing ter
https://en.wikipedia.org/wiki/Original%20antigenic%20sin
Original antigenic sin, also known as antigenic imprinting, the Hoskins effect, or immunological imprinting, is the propensity of the immune system to preferentially use immunological memory based on a previous infection when a second slightly different version of that foreign pathogen (e.g. a virus or bacterium) is encountered. This leaves the immune system "trapped" by the first response it has made to each antigen, and unable to mount potentially more effective responses during subsequent infections. Antibodies or T-cells induced during infections with the first variant of the pathogen are subject to repertoire freeze, a form of original antigenic sin. The phenomenon has been described in relation to influenza virus, SARS-CoV-2, dengue fever, human immunodeficiency virus (HIV) and to several other viruses. History This phenomenon was first described in 1960 by Thomas Francis Jr. in the article "On the Doctrine of Original Antigenic Sin". It is named by analogy to the Christian theological concept of original sin. According to Francis as cited by Richard Krause: The antibody of childhood is largely a response to dominant antigen of the virus causing the first type A influenza infection of the lifetime. [...] The imprint established by the original virus infection governs the antibody response thereafter. This we have called the Doctrine of the Original Antigenic Sin. In B cells During a primary infection, long-lived memory B cells are generated, which remain in the body and protect from subsequent infections. These memory B cells respond to specific epitopes on the surface of viral proteins to produce antigen-specific antibodies and can respond to infection much faster than naive B cells can to novel antigens. This effect lessens time needed to clear subsequent infections. Between primary and secondary infections or following vaccination, a virus may undergo antigenic drift, in which the viral surface proteins (the epitopes) change through natural mutatio
https://en.wikipedia.org/wiki/Brian%20Johnson%20%28special%20effects%20artist%29
Brian Johnson (born 29 June 1939 or 29 June 1940) is a British designer and director of film and television special effects. Life and career Born Brian Johncock, he changed his surname to Johnson during the 1960s. Joining the team of special effects artist Les Bowie, Johnson started his career behind the scenes for Bowie Films on productions such as On The Buses, and for Hammer Films. He is known for his special effects work on TV series including Thunderbirds (1965–66) and films including Alien (1979), for which he received the 1980 Academy Award for Best Visual Effects (shared with H. R. Giger, Carlo Rambaldi, Dennis Ayling and Nick Allder). Previously, he had built miniature spacecraft models for Stanley Kubrick's 1968 film 2001: A Space Odyssey. Johnson's work on Space: 1999 influenced the effects of the Star Wars films of the 1970s and 1980s. Impressed by his work, George Lucas visited Johnson during the production of the TV series to offer him the role of effects supervisor for the 1977 film. Having already been commissioned for the second series of Space: 1999, Johnson was unable to accept at the time. He worked on the sequel, The Empire Strikes Back (1980), whose special effects were recognised in the form of a 1981 Special Achievement Academy Award (which Johnson shared with Richard Edlund, Dennis Muren and Bruce Nicholson). Awards Johnson has won Academy Awards for both Alien (1979) and The Empire Strikes Back (1980). He was further nominated for an Academy Award for his work on Dragonslayer (1981). In addition, Johnson is the recipient of a Saturn Award for The Empire Strikes Back and a BAFTA Award for James Cameron's Aliens. Filmography Special effects Quatermass 2 (1957) (special effects assistant, uncredited) The Kiss of the Vampire (1963) (special effects assistant, uncredited) Thunderbirds (TV series) (1965–66) (special effects director) 2001: A Space Odyssey (1968) (special effects assistant, uncredited) Moon Zero Two (1969) (special ef
https://en.wikipedia.org/wiki/List%20of%20Mozilla%20products
The following is a list of Mozilla Foundation / Mozilla Corp. products. All products, unless specified, are cross-platform by design. Client applications Firefox Browser - An open-source web browser. Firefox Focus - A privacy-focused mobile web browser. Firefox Reality - A web browser optimized for virtual reality. Firefox for Android (also Firefox Daylight) - A web browser for mobile phones and smaller non-PC devices. Firefox Monitor - An online service for alerting the user when their email addresses and passwords have been leaked in data breaches. Firefox Relay - A privacy focused email masking service which allows for the creation of disposable email aliases Mozilla Thunderbird - An email and news client. Mozilla VPN - A virtual private network client. SeaMonkey (formerly Mozilla Application Suite) - An Internet suite. ChatZilla - The IRC component, also available as a Firefox extension. Mozilla Calendar - Originally planned to be a calendar component for the suite; became the base of Mozilla Sunbird. Mozilla Composer - The HTML editor component. Mozilla Mail & Newsgroups - The email and news component. Components DOM Inspector - An inspector for DOM. Gecko - The layout engine. Necko - The network library. Rhino - The JavaScript engine written in Java programming language. Servo - A layout engine. SpiderMonkey - The JavaScript engine written in C programming language. Venkman - A JavaScript debugger. Development tools Bugzilla - A bugtracker. Rust (programming language) Skywriter - An extensible and interoperable web-based framework for code editing. Treeherder - A detective tool that allows developers to manage software builds and to correlate build failures on various platforms and configurations with particular code changes (Predecessors: TBPL and Tinderbox). API/Libraries Netscape Portable Runtime (NSPR) - A platform abstraction layer that makes operating systems appear the same. Network Security Services (NSS) - A set of libr
https://en.wikipedia.org/wiki/Average%20bitrate
In telecommunications, average bitrate (ABR) refers to the average amount of data transferred per unit of time, usually measured per second, commonly for digital music or video. An MP3 file, for example, that has an average bit rate of 128 kbit/s transfers, on average, 128,000 bits every second. It can have higher bitrate and lower bitrate parts, and the average bitrate for a certain timeframe is obtained by dividing the number of bits used during the timeframe by the number of seconds in the timeframe. Bitrate is not reliable as a standalone measure of audio or video quality, since more efficient compression methods use lower bitrates to encode material at a similar quality. Average bitrate can also refer to a form of variable bitrate (VBR) encoding in which the encoder will try to reach a target average bitrate or file size while allowing the bitrate to vary between different parts of the audio or video. As it is a form of variable bitrate, this allows more complex portions of the material to use more bits and less complex areas to use fewer bits. However, bitrate will not vary as much as in variable bitrate encoding. At a given bitrate, VBR is usually higher quality than ABR, which is higher quality than CBR (constant bitrate). ABR encoding is desirable for users who want the general benefits of VBR encoding (an optimum bitrate from frame to frame) but with a relatively predictable file size. Two-pass encoding is usually needed for accurate ABR encoding, as on the first pass the encoder has no way of knowing what parts of the audio or video need the highest bitrates to be encoded. See also Variable bitrate Constant bitrate
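The definition above is straightforward to turn into arithmetic. The short sketch below computes an average bitrate from a file size and duration, and estimates the file size implied by a target ABR; the function names and example numbers are illustrative, not taken from the article.

def average_bitrate_kbps(file_size_bytes, duration_seconds):
    """Average bitrate in kbit/s: total bits divided by total seconds."""
    return file_size_bytes * 8 / duration_seconds / 1000

def estimated_size_mb(target_kbps, duration_seconds):
    """Approximate file size in megabytes for a given target average bitrate."""
    return target_kbps * 1000 * duration_seconds / 8 / 1_000_000

if __name__ == "__main__":
    # A 4-minute track encoded at an average of 128 kbit/s:
    print(estimated_size_mb(128, 4 * 60))          # ~3.84 MB
    # Conversely, a 3.84 MB file lasting 240 seconds averages ~128 kbit/s:
    print(average_bitrate_kbps(3_840_000, 240))    # ~128.0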
https://en.wikipedia.org/wiki/Unicore
Unicore is the name of a computer instruction set architecture designed by the Microprocessor Research and Development Center (MPRC) of Peking University in the PRC. The computer built on this architecture is called the Unity-863. The CPU is integrated into a fully functional SoC to make a PC-like system. The processor is very similar to the ARM architecture, but uses a different instruction set. It is supported by the Linux kernel as of version 2.6.39. Support will be removed in Linux kernel version 5.9 as nobody seems to maintain it and the code is falling behind the rest of the kernel code and compiler requirements. Instruction set The instructions are almost identical to the standard ARM formats, except that conditional execution has been removed, and the bits reassigned to expand all the register specifiers to 5 bits. Likewise, the immediate format is 9 bits rotated by a 5-bit amount (rather than 8 bit rotated by 4), the load/store offset sizes are 14 bits for byte/word and 10 bits for signed byte or half-word. Conditional moves are provided by encoding the condition in the (unused by ARM) second source register field Rn for MOV and MVN instructions. The meaning of various flag bits (such as S=1 enables setting the condition codes) is identical to the ARM instruction set. The load/store multiple instruction can only access half of the register set, depending on the H bit. If H=0, the 16 bits indicate R0–R15; if H=1, R16–R31.
https://en.wikipedia.org/wiki/Explicit%20symmetry%20breaking
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, in the Lagrangian or the Hamiltonian) that do not respect the symmetry. Usually this term is used in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamiltonian of the atoms involved. Explicit symmetry breaking differs from spontaneous symmetry breaking. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it. Explicit symmetry breaking is also associated with electromagnetic radiation. A system of accelerated charges results in electromagnetic radiation when the geometric symmetry of the electric field in free space is explicitly broken by the associated electrodynamic structure under time-varying excitation of the given system. This is quite evident in an antenna, where the electric field lines curl around (have a rotational geometry around) the radiating terminals, in contrast to the linear geometric orientation within a pair of transmission lines, which does not radiate even under time-varying excitation. Perturbation theory in quantum mechanics A common setting for explicit symmetry breaking is perturbation theory in quantum mechanics. The symmetry is evident in a base Hamiltonian H_0. This is often an integrable Hamiltonian, admitting symmetries which in some sense make the Hamiltonian integrable. The base Hamiltonian might be chosen to provide a starting point close to the system being modelled. Mathematically, the symmetries can be described by a smooth symmetry group G. Under the action of this group, H_0 is invariant. The explicit symmetry breaking then comes from a second term in the Hamiltonian, V, which is not invariant under the action of G. This is sometimes interpre
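The Zeeman example mentioned above fits this H = H_0 + V pattern directly; the weak-field perturbation written below is the standard textbook form (taking the electron g-factor to be approximately 2) and is included only as an illustration, not as part of the original text.

H = H_0 + V, \qquad V = -\boldsymbol{\mu}\cdot\mathbf{B} = \frac{\mu_B B}{\hbar}\,(L_z + 2 S_z)
% H_0 (the field-free atom) is invariant under all rotations; V singles out the field axis,
% so the full rotation group is explicitly broken to rotations about the direction of B.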
https://en.wikipedia.org/wiki/Little%20Higgs
In particle physics, little Higgs models are based on the idea that the Higgs boson is a pseudo-Goldstone boson arising from some global symmetry breaking at a TeV energy scale. The goal of little Higgs models is to use the spontaneous breaking of such approximate global symmetries to stabilize the mass of the Higgs boson(s) responsible for electroweak symmetry breaking. The little Higgs models predict a naturally-light Higgs particle. Loop cancellation The main idea behind the little Higgs models is that the one-loop contribution to the tachyonic Higgs boson mass coming from the top quark cancels. The simplified reason for that cancellation is that a loop's contribution is proportional to the coupling constant of one of the SU(2) groups. Because of the symmetries in the theory, the contributions cancel until there is a two-loop contribution involving both groups. This restricts the Higgs boson mass for about one order of magnitude, which is good enough to evade many of the precision electroweak constraints. History Little Higgs theories were an outgrowth of dimensional deconstruction: In these theories, the gauge group has the form of a direct product of several copies of the same factor, for example SU(2) × SU(2). Each SU(2) factor may be visualized as the SU(2) group living at a particular point along an additional dimension of space. Consequently, many virtues of extra-dimensional theories are reproduced even though the little Higgs theory is 3+1 dimensional. Although the idea was first suggested in the 1970s, a viable model was only constructed by Nima Arkani-Hamed, Andrew G. Cohen, and Howard Georgi in 2001. The idea was explored further by Arkani-Hamed, Cohen, Thomas Gregoire, and Jay G. Wacker in 2002. Also in 2002, several other papers appeared that refined the ideas of little Higgs theories, notably the Littlest Higgs by Arkani-Hamed, Cohen, Emanuel Katz, and Ann Nelson. See also Composite Higgs models Two-Higgs-doublet model Higgsino Notes
https://en.wikipedia.org/wiki/Dimensional%20deconstruction
In theoretical physics, dimensional deconstruction is a method to construct 4-dimensional theories that behave as higher-dimensional theories in a certain range of higher energies. The resulting theory is a gauge theory whose gauge group is a direct product of many copies of the same group; each copy may be interpreted as the gauge group located at a particular point along a new, discrete, "deconstructed" (d+1)st dimension. The spectrum of matter fields is a set of bifundamental representations expressed by a quiver diagram that is analogous to lattices in lattice gauge theory. "Deconstruction" in physics was introduced by Nima Arkani-Hamed, Andy Cohen and Howard Georgi, and independently by Christopher T. Hill, Stefan Pokorski and Jing Wang. Deconstruction is a lattice approximation to the real space of extra dimensions, while maintaining the full gauge symmetries and yields the low energy effective description of the physics. This leads to a rationale for extensions of the Standard Model based upon product gauge groups, , such as anticipated in "topcolor" models of electroweak symmetry breaking. The little Higgs theories are also examples of phenomenologically interesting models inspired by deconstruction. Deconstruction is used in a supersymmetric context to address the hierarchy problem and model extra dimensions. "Clock models," which have become popular in recent years in particle physics, are completely equivalent to deconstruction.
https://en.wikipedia.org/wiki/Crossing%20%28physics%29
In quantum field theory, a branch of theoretical physics, crossing is the property of scattering amplitudes that allows antiparticles to be interpreted as particles going backwards in time. Crossing states that the same formula that determines the S-matrix elements and scattering amplitudes for a particle A to scatter with B and produce particles C and D will also give the scattering amplitude for A + C̄ to go into B̄ + D, or for C̄ to scatter with D̄ to produce Ā + B̄. The only difference is that the value of the energy is negative for the antiparticle. The formal way to state this property is that the antiparticle scattering amplitudes are the analytic continuation of particle scattering amplitudes to negative energies. The interpretation of this statement is that the antiparticle is in every way a particle going backwards in time. History Murray Gell-Mann and Marvin Leonard Goldberger introduced crossing symmetry in 1954. Crossing had already been implicit in the work of Richard Feynman, but came to its own in the 1950s and 1960s as part of the analytic S-matrix program. Overview Consider an amplitude M(φ(p) + ...). We concentrate our attention on one of the incoming particles with momentum p. The quantum field φ, corresponding to the particle, is allowed to be either bosonic or fermionic. Crossing symmetry states that we can relate the amplitude of this process to the amplitude of a similar process with an outgoing antiparticle φ̄ replacing the incoming particle φ: M(φ(p) + ...) = M(φ̄(−p) + ...). In the bosonic case, the idea behind crossing symmetry can be understood intuitively using Feynman diagrams. Consider any process involving an incoming particle with momentum p. For the particle to give a measurable contribution to the amplitude, it has to interact with a number of different particles with momenta q_1, ..., q_n via a vertex. Conservation of momentum implies ∑_k q_k = p. In case of an outgoing particle, conservation of momentum reads as ∑_k q_k = −p. Thus, replacing an incoming boson with an outgoing antiboson with opposite momentum yields the same S-matrix el
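A standard textbook illustration of crossing (not taken from this passage): at tree level in QED, and by analytic continuation more generally, the amplitude for electron–muon scattering follows from the amplitude for electron–positron annihilation into muons by interchanging the Mandelstam variables s and t.

\mathcal{M}_{e^-\mu^- \to e^-\mu^-}(s,t,u) \;=\; \mathcal{M}_{e^+e^- \to \mu^+\mu^-}(t,s,u)
% crossing the incoming positron to an outgoing electron (and vice versa for the muons)
% exchanges the roles of the s- and t-channels while leaving u untouched.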
https://en.wikipedia.org/wiki/Pauli%E2%80%93Villars%20regularization
In theoretical physics, Pauli–Villars regularization (P–V) is a procedure that isolates divergent terms from finite parts in loop calculations in field theory in order to renormalize the theory. Wolfgang Pauli and Felix Villars published the method in 1949, based on earlier work by Richard Feynman, Ernst Stueckelberg and Dominique Rivier.
In this treatment, a divergence arising from a loop integral (such as vacuum polarization or electron self-energy) is modulated by a spectrum of auxiliary particles added to the Lagrangian or propagator. When the masses of the fictitious particles are taken to infinity (i.e., once the regulator is removed), one expects to recover the original theory. This regulator is gauge invariant in an abelian theory because the auxiliary particles are minimally coupled to the photon field through the gauge covariant derivative. It is not gauge invariant in a non-abelian theory, though, so Pauli–Villars regularization cannot be used in QCD calculations. P–V serves as an alternative to the generally preferred dimensional regularization in specific circumstances, such as in chiral phenomena, where a change of dimension alters the properties of the Dirac gamma matrices. Gerard 't Hooft and Martinus J. G. Veltman invented, in addition to dimensional regularization, the method of unitary regulators, which is a Lagrangian-based Pauli–Villars method with a discrete spectrum of auxiliary masses, using the path-integral formalism.
Examples
Pauli–Villars regularization consists of introducing a fictitious mass term. For example, we would replace a photon propagator 1/(k² + iε) by 1/(k² + iε) − 1/(k² − Λ² + iε), where Λ can be thought of as the mass of a fictitious heavy photon, whose contribution is subtracted from that of an ordinary photon.
See also
Dimensional regularization
Ghosts (physics)
Regularization (physics)
Notes
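To see why the propagator replacement in the Examples section above tames the divergence (a standard observation, sketched here rather than taken from the text), combine the two terms over a common denominator:

\[
\frac{1}{k^{2}+i\epsilon}-\frac{1}{k^{2}-\Lambda^{2}+i\epsilon}
=\frac{-\Lambda^{2}}{\left(k^{2}+i\epsilon\right)\left(k^{2}-\Lambda^{2}+i\epsilon\right)}
\;\sim\; -\frac{\Lambda^{2}}{k^{4}} \quad (k^{2}\gg\Lambda^{2}),
\]

so the regulated propagator falls off as 1/k⁴ rather than 1/k², improving the convergence of loop integrals by two powers of momentum; the original theory is recovered in the limit Λ → ∞.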
https://en.wikipedia.org/wiki/Dimensional%20regularization
In theoretical physics, dimensional regularization is a method introduced by Giambiagi and Bollini as well as – independently and more comprehensively – by 't Hooft and Veltman for regularizing integrals in the evaluation of Feynman diagrams; in other words, assigning values to them that are meromorphic functions of a complex parameter d, the analytic continuation of the number of spacetime dimensions.
Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension d and the squared distances (xᵢ − xⱼ)² of the spacetime points xᵢ appearing in it. In Euclidean space, the integral often converges when −Re(d) is sufficiently large, and can be analytically continued from this region to a meromorphic function defined for all complex d. In general, there will be a pole at the physical value of d (usually 4), which needs to be canceled by renormalization to obtain physical quantities. It has been shown that dimensional regularization is mathematically well defined, at least in the case of massive Euclidean fields, by using the Bernstein–Sato polynomial to carry out the analytic continuation.
Although the method is best understood when poles are subtracted and d is once again replaced by 4, it has also led to some successes when d is taken to approach another integer value where the theory appears to be strongly coupled, as in the case of the Wilson–Fisher fixed point. A further leap is to take the interpolation through fractional dimensions seriously. This has led some authors to suggest that dimensional regularization can be used to study the physics of crystals that macroscopically appear to be fractals. It has been argued that zeta function regularization and dimensional regularization are equivalent, since both rest on the same principle of analytic continuation in order for a series or integral to converge.
Example: potential of an infinite charged line
Consider an infinite charged line with linear charge density λ, for which we calculate the potential at a point a finite distance from the line.
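A sketch of how the calculation in the example above proceeds (the article's own worked example is cut off, so the symbols λ, r and μ are illustrative; this follows the standard textbook treatment): promote the one-dimensional integral along the line to d dimensions, inserting an arbitrary mass scale μ to keep the dimensions of the potential fixed,

\[
V(r)\;=\;\frac{\lambda}{4\pi\varepsilon_0}\,\mu^{1-d}\!\int \frac{d^{d}x}{\sqrt{x^{2}+r^{2}}}
\;=\;\frac{\lambda}{4\pi\varepsilon_0}\,\mu^{1-d}\,
\frac{\pi^{d/2}\,\Gamma\!\big(\tfrac{1-d}{2}\big)}{\Gamma\!\big(\tfrac12\big)}\;r^{\,d-1}.
\]

The integral converges for Re(d) < 1 and has a simple pole at the physical value d = 1, coming from Γ((1−d)/2). The pole and the μ-dependence drop out of the physical electric field E = −∂V/∂r, which is finite as d → 1 and reproduces the familiar E = λ/(2πε₀r).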
https://en.wikipedia.org/wiki/Microconnect%20distributed%20antenna
Microconnect distributed antennas (MDA) are small-cell local area (100 metre radius) transmitter-receivers usually fitted to lampposts and other street furniture in order to provide Wireless LAN, GSM and GPRS connectivity. They are therefore less obtrusive than the usual masts and antennas used for these purposes and meet with less public opposition.
Service provided
Microconnect distributed antennas serve heavily populated urban areas, addressing mobile and radio connectivity; MDA is also suited to bustling cities and historical areas where mobile coverage is otherwise impaired. Many small, low-power antennas perform as well as, or better than, a traditional macrocellular site over the same area. A centrally located radio base station connects to the antennas by fibre-optic cable. Each antenna point contains a 63–65 GHz wireless unit alongside a large memory store providing proxy and cache services, and users can obtain a 64 kbit/s uplink / 384 kbit/s downlink service. Multiple operators can share this infrastructure, so different service providers can use the same equipment to serve their customers.
Four part MDA System
The four-part MDA system comprises the DAS (Distributed Antenna System) master unit, the access-network optical fibre, the remote Radio over Fibre (RoF) units (remote antenna points), and the supervisory and management facilities. This system is compatible with the GSM (2G and 2.5G) and 3G network requirements of mobile users. The MDA is an economical, comparatively low-cost way to give more people access to mobile and broadband connections, and its low environmental impact avoids cluttering historical parts of an urban area. As communities become increasingly dependent on technology, solutions like the MDA system help protect the natural beauty of such areas while still providing coverage.
See also
Distributed antenna system
In-Building Cellular Enhancement System
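Since each MDA node covers roughly a 100 metre radius, a back-of-the-envelope calculation gives a feel for deployment density. The sketch below is illustrative only: the overlap factor is an assumption introduced here, not a figure from the text.

```python
import math

def mda_nodes_needed(area_km2: float, cell_radius_m: float = 100.0,
                     overlap_factor: float = 1.2) -> int:
    """Rough estimate of how many MDA antenna points cover a given area.

    cell_radius_m : nominal 100 m small-cell radius quoted for an MDA node.
    overlap_factor: allowance for overlap and street-furniture placement
                    constraints (illustrative assumption, not from the article).
    """
    cell_area_km2 = math.pi * (cell_radius_m / 1000.0) ** 2  # ~0.031 km^2 per node
    return math.ceil(overlap_factor * area_km2 / cell_area_km2)

# Example: a 2 km^2 historical city centre
print(mda_nodes_needed(2.0))  # ~77 antenna points
```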