https://en.wikipedia.org/wiki/Signal%20trace
In electronics, a signal trace or circuit trace on a printed circuit board (PCB) or integrated circuit (IC) is the equivalent of a wire for conducting signals. Each trace consists of a flat, narrow part of the copper foil that remains after etching. Signal traces are usually narrower than power or ground traces because the current carrying requirements are usually much less. See also Ground plane Stripline Microstrip
https://en.wikipedia.org/wiki/Marpolia
Marpolia has been interpreted as a cyanobacterium, but also resembles the modern cladophoran green algae. It is known from the Middle Cambrian Burgess shale and Early Cambrian deposits from the Czech Republic. It comprises a dense mass of entangled, twisted filaments. It may have been free-floating or grown on other objects, although there is no evidence of attachment structures. 40 specimens of Marpolia are known from the Greater Phyllopod bed, where they comprise 0.08% of the community.
https://en.wikipedia.org/wiki/Batman/Aliens
Batman/Aliens is a crossover between the Batman and Aliens comic book franchises. It was published in 1997. A sequel was released in 2003. Batman/Aliens Batman parachutes into the jungle near the border between Guatemala and Mexico, investigating the disappearance of a Wayne Enterprises geologist. He encounters an American Special Ops team hunting a target, and both are set upon by the Aliens. Several members of the team are killed, but along the way Batman becomes familiar with the Aliens' life cycle and collects two facehuggers in specimen jars. The team leader sacrifices himself to blow up a nest of the Aliens, leaving only Batman and two members of the team alive, making their way to the team's evacuation point. One of the survivors, an intensely ambitious woman named Hyatt, leaves her teammate to be killed by one of the last Aliens and ambushes Batman, holding him at gunpoint while she relieves him of the lost geologist's voice recorder and one of the specimen jars. She says that the Aliens are an incredibly potent weapon if properly used, and that bringing the information about them back to the U.S. government will make her career. She is so fixated on Batman that she fails to notice a gargantuan Alien hybrid - the result of an Alien embryo being implanted into a crocodile - rise behind her. The hybrid kills Hyatt, but Batman kills the creature by tying its legs and tipping it into the mouth of an active volcano. Alone, he escapes from the jungle. In the Batcave, Bruce Wayne listens to the geologist's last message to his family, cut off as the man is attacked by an Alien. Bruce decides to drop the specimen jars containing the facehuggers into the cave's depths and tell no one about them. The Aliens are much too dangerous, he believes, "not because of what they are, but because of what we are". This story was spun off from a two-part short story featured in Dark Horse Presents #101-102 entitled Aliens: Incubation. The events that Batman discovers on the geologist's
https://en.wikipedia.org/wiki/Cornish%20ice%20cream
Cornish ice cream is a form of ice cream first made in Cornwall, England. It is made with Cornish clotted cream, and may be made with sorbet. Today, it is still produced using milk from many farms in Cornwall, although Cornish ice cream (and brands of Cornish ice cream) are sold in supermarkets all over the United Kingdom. It may be made with regular ice cream and vanilla essence. Some companies of Cornwall, such as a company in East Looe, claim to make Cornish ice cream using only Cornish milk and cream. See also Kelly's of Cornwall
https://en.wikipedia.org/wiki/S-layer
An S-layer (surface layer) is a part of the cell envelope found in almost all archaea, as well as in many types of bacteria. The S-layers of both archaea and bacteria consist of a monomolecular layer composed of only one (or, in a few cases, two) identical proteins or glycoproteins. This structure is built via self-assembly and encloses the whole cell surface. Thus, the S-layer protein can represent up to 15% of the whole protein content of a cell. S-layer proteins are poorly conserved or not conserved at all, and can differ markedly even between related species. Depending on the species, S-layers have a thickness between 5 and 25 nm and possess identical pores 2–8 nm in diameter. The term “S-layer” was first used in 1976. Its general use was accepted at the "First International Workshop on Crystalline Bacterial Cell Surface Layers, Vienna (Austria)" in 1984, and in 1987 S-layers were defined at the European Molecular Biology Organization Workshop on “Crystalline Bacterial Cell Surface Layers”, Vienna, as “two-dimensional arrays of proteinaceous subunits forming surface layers on prokaryotic cells” (see "Preface", page VI in Sleytr "et al. 1988"). For a brief summary of the history of S-layer research, see "References". Location of S-layers In Gram-negative bacteria, S-layers are associated with the lipopolysaccharides via ionic, carbohydrate–carbohydrate, protein–carbohydrate and/or protein–protein interactions. In Gram-positive bacteria, whose S-layers often contain surface layer homology (SLH) domains, the binding occurs to the peptidoglycan and to a secondary cell wall polymer (e.g., teichoic acids). In the absence of SLH domains, the binding occurs via electrostatic interactions between the positively charged N-terminus of the S-layer protein and a negatively charged secondary cell wall polymer. In Lactobacilli the binding domain may be located at the C-terminus. In Gram-negative archaea, S-layer proteins possess a
https://en.wikipedia.org/wiki/Vietnam%20War%20Song%20Project
The Vietnam War Song Project (VWSP) is an archive and interpretive examination of over 6,000 identified Vietnam War songs. It was founded in 2007 by its current editor, Justin Brummer, a historian with a PhD in contemporary Anglo-American relations from University College London. The project analyses the lyrics, and collects data on the genre, location, ethnicity, nationality, language, and time period of the recordings. It also involves the preservation of the original physical vinyl records. Additional items collected include cassette tapes, CDs, MP3s, record label scans, and sheet music. The project is currently hosted on the online collaborative database Rate Your Music, with components on YouTube, Twitter, and at the University of Maryland. Part of the project is a discography, Vietnam War Songs: An incomplete discography, which has over 6,000 titles, both unique songs and cover songs; it is a collaboration between Hugo Keesing, Wouter Keesing, C.L. Yarbrough, and Justin Brummer at the University of Maryland Libraries. Hugo Keesing, adjunct professor of American Studies at the University of Maryland and the producer of the 13-CD box-set compilation Next Stop Is Vietnam, is also a major contributor of songs and record scans. The project has categorised songs into a variety of themes, from anti-war / protest / peace songs to patriotic / pro-government / anti-protest songs during the war years, as well as an analysis of songs released in the post-war period. Other themes include regional songs, such as Puerto Ricans in the Vietnam War, Australia in the Vietnam War, New Zealand in the Vietnam War, Mexican-Americans, and songs from South America, Central America, and the Caribbean. Genres include soul, gospel & funk, the blues, garage rock, and punk music. The project also looks at songs about key events and issues, including the Chicago Seven, the Kent State shootings, the My Lai Massacre, and the Vietnam War POW/MIA issue. Other topics include songs about the Vietn
https://en.wikipedia.org/wiki/Blind%20wine%20tasting
Blind wine tasting is wine tasting undertaken in circumstances in which the tasters are kept unaware of the wines' identities. The blind approach is routine for wine professionals (wine tasters, sommeliers and others) who wish to ensure impartiality in the judgment of the quality of wine during wine competitions or in the evaluation of a sommelier for professional certification. More recently, wine scientists (physiologists, psychologists, food chemists and others) have used blinded tastings to explore the objective parameters of the human olfactory system as they apply to the ability of wine drinkers (both wine professionals and ordinary consumers) to identify and characterize the extraordinary variety of compounds that contribute to a wine’s aroma. Similarly, economists testing hypotheses relating to the wine market have used the technique in their research. Some blinded trials among wine consumers have indicated that people can find nothing in a wine's aroma or taste to distinguish between ordinary and pricey brands. Academic research on blinded wine tastings has also cast doubt on the ability of professional tasters to judge wines consistently. Technique Blind tasting, at a minimum, involves denying the taster(s) the ability to see the wine label or wine bottle shape. Informal tastings may simply conceal the bottles in a plain paper bag. More exacting competitions or evaluations utilize more stringent procedures, including safeguards against cheating. For example, the wine may be tasted from a black wine glass to mask the color. Biases A taster's judgment can be prejudiced by knowing details of a wine, such as geographic origin, price, reputation, color, or other considerations. Scientific research has long demonstrated the power of suggestion in perception as well as the strong effects of expectancies. For example, people expect more expensive wine to have more desirable characteristics than less expensive wine. When given wine that they are falsely told is e
https://en.wikipedia.org/wiki/Data%20Base%20Task%20Group
The Data Base Task Group (DBTG) was a working group founded in 1965 (initially named the List Processing Task Force and renamed to DBTG in 1967) by the Cobol Committee, formerly the Programming Language Committee, of the Conference on Data Systems Languages (CODASYL). The DBTG was chaired by William Olle of RCA. In April 1971, the DBTG published a report containing specifications of a Data Manipulation Language (DML) and a Data Definition Language (DDL) for standardization of the network database model. The first DBTG proposals had already been published in 1969. The specification was subsequently modified and developed in various committees and published in further reports in 1973 and 1978. The specification is often referred to as the DBTG database model or the CODASYL database model. As well as the data model, many basic concepts of database terminology were introduced by this group, notably the concepts of schema and subschema.
https://en.wikipedia.org/wiki/Bigoni%E2%80%93Piccolroaz%20yield%20criterion
The Bigoni–Piccolroaz yield criterion is a yielding model, based on a phenomenological approach, capable of describing the mechanical behavior of a broad class of pressure-sensitive granular materials such as soil, concrete, porous metals and ceramics. General concepts The idea behind the Bigoni–Piccolroaz criterion is that of deriving a function capable of transitioning between the yield surfaces typical of different classes of materials only by changing the function parameters. The reason for this kind of implementation lies in the fact that the materials towards which the model is targeted undergo substantial changes during manufacturing and working conditions. The typical example is the hardening of a powder specimen by compaction and sintering, during which the material changes from granular to dense. The Bigoni–Piccolroaz yield criterion can be represented in the Haigh–Westergaard stress space as a convex smooth surface; in fact, the criterion itself is based on the mathematical definition of the surface in the above-mentioned space as a proper interpolation of experimental points. Mathematical formulation The Bigoni–Piccolroaz yield surface is conceived as a direct interpolation of experimental data. This criterion represents a smooth and convex surface, which is closed both in hydrostatic tension and compression and has a drop-like shape, particularly suited to describing frictional and granular materials. This criterion has also been generalized to the case of surfaces with corners. Design principles Since the whole idea of the model is to tailor a function to experimental data, the authors have defined a certain group of features as desirable, even if not essential, among them: smoothness of the surface; the possibility of changing the shape, and thus the interpolation, for a broad class of experimental data for different materials; the possibility of representing known criteria with a limit set of parameters; convexity of the surface. Parametric funct
https://en.wikipedia.org/wiki/Slugging%20percentage
In baseball statistics, slugging percentage (SLG) is a measure of the batting productivity of a hitter. It is calculated as total bases divided by at-bats, through the following formula, where AB is the number of at-bats for a given player, and 1B, 2B, 3B, and HR are the number of singles, doubles, triples, and home runs, respectively: SLG = (1B + 2 × 2B + 3 × 3B + 4 × HR) / AB. Unlike batting average, slugging percentage gives more weight to extra-base hits such as doubles and home runs, relative to singles. Plate appearances resulting in walks, hit-by-pitches, catcher's interference, and sacrifice bunts or flies are specifically excluded from this calculation, as such an appearance is not counted as an at-bat (these are not factored into batting average either). The name is a misnomer, as the statistic is not a percentage but an average of how many bases a player achieves per at-bat. It is a scale of measure whose computed value is a number from 0 to 4. This might not be readily apparent given that a Major League Baseball player's slugging percentage is almost always less than 1 (as a majority of at-bats result in either 0 or 1 base). The statistic gives a double twice the value of a single, a triple three times the value, and a home run four times. The slugging percentage would have to be divided by 4 to actually be a percentage (of bases achieved per at-bat out of total bases possible). As a result, it is occasionally called slugging average, or simply slugging, instead. A slugging percentage is always expressed as a decimal to three decimal places, and is generally spoken as if multiplied by 1000. For example, a slugging percentage of .589 would be spoken as "five eighty nine," and one of 1.127 would be spoken as "eleven twenty seven." Facts about slugging percentage A slugging percentage is not just for measuring the productivity of a hitter; it can also be applied as an evaluative tool for pitchers, where it is referred to, less commonly, as slugging percentage against. In 2019, the me
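The formula above is easy to express in code. The following is a minimal sketch; the function name and the stat line in the usage example are hypothetical illustrations, not sourced data.

```python
def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """Total bases divided by at-bats.

    Despite the name, this is not a percentage: the result ranges
    from 0 (no bases per at-bat) to 4 (a home run every at-bat).
    """
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Hypothetical season: 73 singles, 36 doubles, 9 triples, 54 HR in 458 AB.
# Total bases = 73 + 72 + 27 + 216 = 388, so SLG = 388 / 458 ≈ .847.
print(round(slugging_percentage(73, 36, 9, 54, 458), 3))  # -> 0.847
```

Note that walks and sacrifices never enter the function: consistent with the definition, they are simply not counted as at-bats.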
https://en.wikipedia.org/wiki/Cit%C3%A9%20du%20Vin
The Cité du Vin is a museum as well as a place of exhibitions, shows, movie projections and academic seminars on the theme of wine, located in Bordeaux, France. On August 29, 2018, the Cité du Vin passed the milestone of one million visitors since its opening, and in May 2022 that of two million visitors. Construction Architects The architects were Anouk Legendre and Nicholas Desmazières of XTU Agency. Cost The cost of the construction was underestimated. In January 2011, it was estimated at 63 million euros excluding taxes, but at the end of 2014, while construction was in progress, the cost of the structure was re-evaluated at 81.1 million euros excluding taxes. History and opening The project for a wine cultural and tourist center started in 2009. An association prefiguring La Cité du Vin was created for this purpose, made up of the Aquitaine region, Bordeaux Métropole, the city of Bordeaux, the Interprofessional Council of Bordeaux wine and the Bordeaux Chamber of Commerce and Industry. Its official opening by the President of France François Hollande and Alain Juppé took place on May 31, 2016. It was followed by the public opening on June 1, 2016. Transportation The Cité du Vin is accessible by tram (the Cité du Vin tram stop is on line B of the Bordeaux tramway), by the ring road, by the Pont Jacques Chaban-Delmas and by lines 7 and 32 of the Transports Bordeaux Métropole (TBM) network. A stop of the Batcub is located nearby.
https://en.wikipedia.org/wiki/North%20American%20Numbering%20Council
The North American Numbering Council is an advisory committee of the Federal Communications Commission (FCC) of the United States, chartered in 1995. Its function is to develop and recommend efficient and fair administrative procedures for the administration of the North American Numbering Plan (NANP), including telephone numbering plan policy and technical implementation. The council is headed by a Designated Federal Officer appointed by the FCC. It is renewed on a two-year term schedule pursuant to the Federal Advisory Committee Act, and meets approximately four times per year. Its work is structured and conducted in working groups. The committee reports to the FCC via the Wireline Competition Bureau. Founding The original charter of the North American Numbering Council was filed with the United States Congress on October 5, 1995. The charter is renewed periodically in accordance with the provisions of the Federal Advisory Committee Act (FACA). The council held its first meeting on October 1, 1996. Purpose The North American Numbering Council advises the Federal Communications Commission in an oversight capacity on telephone numbering plan issues. Through consensus decisions of its members, the council recommends procedures, policies, and technical means for efficient numbering administration, impartial to any telecommunication industry interest group. The council develops policy for numbering issues and initially resolves disputes. It provides guidance to the North American Numbering Plan Administrator (NANPA). Working groups The North American Numbering Council conducts several working groups in support of its advisory function to the FCC. As of 2020, the following working groups are established: Numbering Administration Oversight; Call Authentication Trust Anchor; Toll Free Assignment Modernization; Nationwide Number Portability; Interoperable Video Calling. See also Number Portability Administration Center Telecommunications Act of 1996 Communications Act o
https://en.wikipedia.org/wiki/Why%20We%20Sleep
Why We Sleep: The New Science of Sleep and Dreams (or simply Why We Sleep) is a 2017 popular science book about sleep written by Matthew Walker, an English scientist and the director of the Center for Human Sleep Science at the University of California, Berkeley, who specializes in neuroscience and psychology. In the book, Walker discusses the importance of sleep, the side effects of failing to get enough of it, and its impact on society. Walker spent four years writing the book, in which he asserts that sleep deprivation is linked to numerous fatal diseases, including dementia. Why We Sleep, which discusses the topic of sleep from the viewpoint of neuroscience, went on to become a bestseller for both The New York Times and The Sunday Times. The book has received generally positive reviews from critics, who praised Walker's research and views on the science of sleep while criticizing certain of the book's claims regarding sleep. Background According to Walker, who had never written a book at the time, he was motivated to write the book after an encounter with a woman who glanced at his work related to sleep and its benefits for health, stating, "When that comes out, I want to read it". Walker described this encounter as a sincere "independent ratification" that made him write the book. The book took Walker roughly four and a half years to write. Walker and his team spent roughly 20 years studying the rejuvenating ability of sleep. Walker's communication style, in which he makes use of "metaphors and analogies effectively," allowed him to explain ideas related to sleep in detail. At 18 years of age, Walker, then a medical student, became an "accidental sleep researcher" and moved over to studying neuroscience because of his habit of asking many questions. It was during his PhD at London's Medical Research Council that Walker learned how little information there was on sleep. A scientific paper helped Walker with his research after
https://en.wikipedia.org/wiki/List%20of%20quantum%20logic%20gates
In gate-based quantum computing, various sets of quantum logic gates are commonly used to express quantum operations. The following tables list several unitary quantum logic gates, together with their common name, how they are represented, and some of their properties. Controlled or conjugate transpose (adjoint) versions of some of these gates may not be listed. Identity gate and global phase The identity gate is the identity operation I. Most of the time this gate is not indicated in circuit diagrams, but it is useful when describing mathematical results. It has been described as being a "wait cycle", and a NOP. The global phase gate introduces a global phase e^{iφ} to the whole qubit quantum state. A quantum state is uniquely defined only up to a phase. Because of the Born rule, a phase factor has no effect on a measurement outcome: |e^{iφ}ψ|² = |ψ|² for any φ. When the global phase gate is applied to a single qubit in a quantum register, the entire register's global phase is changed. These gates can be extended to any number of qubits or qudits. Clifford qubit gates This table includes commonly used Clifford gates for qubits. Other Clifford gates, including higher-dimensional ones, are not included here but by definition can be generated using the H, S and CNOT gates. Note that if a Clifford gate A is not in the Pauli group, then √A and controlled-A are not Clifford gates. The Clifford set is not a universal quantum gate set. Non-Clifford qubit gates Relative phase gates The phase shift P(φ) is a family of single-qubit gates that map the basis states |0⟩ ↦ |0⟩ and |1⟩ ↦ e^{iφ}|1⟩. The probability of measuring a |0⟩ or |1⟩ is unchanged after applying this gate; however, it modifies the phase of the quantum state. This is equivalent to tracing a horizontal circle (a line of latitude) on the Bloch sphere, i.e. a rotation about the z-axis by φ radians. A common example is the T gate, where φ = π/4 (historically known as the π/8 gate). Note that some Clifford gates are special cases of the phase shift gate: The ar
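The phase-shift family can be checked numerically. The sketch below uses the standard matrix form P(φ) = diag(1, e^{iφ}); the function name and test state are illustrative choices, not part of the source.

```python
import numpy as np

def phase_shift(phi):
    """P(phi): |0> -> |0>, |1> -> e^{i phi}|1>."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

T = phase_shift(np.pi / 4)  # T gate (historically the "pi/8" gate)
S = phase_shift(np.pi / 2)  # S gate, a Clifford special case
Z = phase_shift(np.pi)      # Pauli-Z, another Clifford special case

# T applied twice gives S; S applied twice gives Z.
assert np.allclose(T @ T, S)
assert np.allclose(S @ S, Z)

# Computational-basis measurement probabilities are unchanged:
psi = np.array([0.6, 0.8j])             # an arbitrary normalized state
assert np.allclose(np.abs(T @ psi) ** 2, np.abs(psi) ** 2)
```

The final assertion is the Born-rule point made above: a phase factor on an amplitude leaves |amplitude|², and hence the measurement statistics, untouched.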
https://en.wikipedia.org/wiki/Accuracy%20and%20precision
Accuracy and precision are two measures of observational error. Accuracy is how close a given set of measurements (observations or readings) are to their true value, while precision is how close the measurements are to each other. In other words, precision is a description of random errors, a measure of statistical variability. Accuracy has two definitions: More commonly, it is a description of only systematic errors, a measure of statistical bias of a given measure of central tendency; low accuracy causes a difference between a result and a true value; ISO calls this trueness. Alternatively, the International Organization for Standardization (ISO) defines accuracy as describing a combination of both types of observational error (random and systematic), so high accuracy requires both high precision and high trueness. In the first, more common definition of "accuracy" above, the concept is independent of "precision", so a particular set of data can be said to be accurate, precise, both, or neither. In simpler terms, given a statistical sample or set of data points from repeated measurements of the same quantity, the sample or set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if their standard deviation is relatively small. Common technical definition In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method. The field of statistics, where the interpretation of measurements plays a central role, prefers to use the terms bias and
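The bias-versus-spread distinction drawn above can be illustrated with a small numeric sketch. The sample values below are invented for illustration: one set is accurate but imprecise (mean on target, wide spread), the other precise but inaccurate (tight spread, biased mean).

```python
import numpy as np

true_value = 10.0
accurate_imprecise = np.array([8.5, 11.6, 9.2, 10.7])        # mean = 10.0, large spread
precise_inaccurate = np.array([12.01, 12.02, 11.99, 11.98])  # tight, but biased by +2

for name, sample in [("accurate/imprecise", accurate_imprecise),
                     ("precise/inaccurate", precise_inaccurate)]:
    bias = sample.mean() - true_value  # systematic error -> (in)accuracy / trueness
    spread = sample.std(ddof=1)        # random error     -> (im)precision
    print(f"{name}: bias={bias:+.3f}, sample std={spread:.3f}")
```

Under the common definition, the first set is accurate (bias ≈ 0) despite its poor precision; under the ISO definition, neither set would count as highly accurate, since that requires both high trueness and high precision.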
https://en.wikipedia.org/wiki/Direct-attached%20storage
Direct-attached storage (DAS) is digital storage directly attached to the computer accessing it, as opposed to storage accessed over a computer network (i.e. network-attached storage). DAS consists of one or more storage units such as hard drives, solid-state drives, or optical disc drives within an external enclosure. The term "DAS" is a retronym to contrast with storage area network (SAN) and network-attached storage (NAS). Features A typical DAS system is made of a data storage device (for example, enclosures holding a number of hard disk drives) connected directly to a computer through a host bus adapter (HBA). Between those two points there is no network device (such as a hub, switch, or router), and this is the main characteristic of DAS. The main protocols used for DAS connections are ATA, SATA, eSATA, NVMe, SCSI, SAS, USB, USB 3.0 and IEEE 1394. Storage features of SAN, DAS, and NAS Most functions found in modern storage do not depend on whether the storage is attached directly to servers (DAS) or via a network (SAN and NAS). In enterprise environments, direct-attached storage systems can utilize storage devices that have higher endurance in terms of data workload capability, along with scalability in the amount of capacity that storage arrays can achieve, compared to NAS and other consumer-grade storage devices. Advantages and disadvantages The key difference between DAS and NAS is that DAS does not incorporate any network hardware and related operating environment to share storage resources independently of the host, so the storage is available only via the host to which the DAS is attached. DAS is typically considered much faster than NAS due to the lower latency of the host connection, although contemporary network and direct-connection throughput typically exceeds the raw read/write performance of the storage units themselves. A SAN (storage area network) has more in common with a DAS than a NAS, with the key difference being that DA
https://en.wikipedia.org/wiki/Fitness%20%28biology%29
Fitness (often denoted w or ω in population genetics models) is the quantitative representation of individual reproductive success. It is also equal to the average contribution to the gene pool of the next generation made by individuals of the specified genotype or phenotype. Fitness can be defined either with respect to a genotype or to a phenotype in a given environment or time. The fitness of a genotype is manifested through its phenotype, which is also affected by the developmental environment. The fitness of a given phenotype can also be different in different selective environments. With asexual reproduction, it is sufficient to assign fitnesses to genotypes. With sexual reproduction, recombination scrambles alleles into different genotypes every generation; in this case, fitness values can be assigned to alleles by averaging over possible genetic backgrounds. Natural selection tends to make alleles with higher fitness more common over time, resulting in Darwinian evolution. The term "Darwinian fitness" can be used to make clear the distinction with physical fitness. Fitness does not include a measure of survival or life-span; Herbert Spencer's well-known phrase "survival of the fittest" should be interpreted as: "Survival of the form (phenotypic or genotypic) that will leave the most copies of itself in successive generations." Inclusive fitness differs from individual fitness by including the ability of an allele in one individual to promote the survival and/or reproduction of other individuals that share that allele, in preference to individuals with a different allele. One mechanism of inclusive fitness is kin selection. Fitness as propensity Fitness is often defined as a propensity or probability, rather than the actual number of offspring. For example, according to Maynard Smith, "Fitness is a property, not of an individual, but of a class of individuals—for example homozygous for allele A at a particular locus. Thus the phrase 'expected nu
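The claim that selection makes higher-fitness alleles more common can be sketched with the standard one-locus haploid selection model (a textbook model, not taken from this article; all numbers below are hypothetical). Each generation, an allele's frequency is reweighted by its fitness relative to the population mean fitness.

```python
def select(p, w_A, w_a):
    """One generation of selection at a haploid locus.

    p    -- current frequency of allele A
    w_A  -- fitness of allele A
    w_a  -- fitness of the alternative allele a
    """
    w_bar = p * w_A + (1 - p) * w_a   # mean fitness of the population
    return p * w_A / w_bar            # A's share of the next gene pool

p = 0.01                              # A starts rare
for generation in range(200):
    p = select(p, w_A=1.05, w_a=1.0)  # A carries a 5% fitness advantage

print(round(p, 3))                    # after 200 generations A is near fixation
```

With equal fitnesses the frequency does not move at all; even a modest fitness advantage, compounded over generations, carries a rare allele toward fixation.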
https://en.wikipedia.org/wiki/List%20of%20botanists%20by%20author%20abbreviation%20%28C%29
A–B To find entries for A–B, use the table of contents above. C C.A.Arnold – Chester Arthur Arnold (1901–1977) Cabactulan – Derek Cabactulan (fl. 2016) Cabanès – Jean Gustave Cabanès (1864–1944) C.A.Barber – Charles Alfred Barber (1860–1933) C.Abbot – Charles Abbot (1761–1817) C.Abel – Clarke Abel (1789–1826) Cabezudo – Baltasar Cabezudo (born 1946) C.A.Br. – Clair Alan Brown (1903–1982) Cabrera – Ángel Lulio Cabrera (1908–1999) (not to be confused with botanist Ángel Cabrera (1879–1960)) C.A.Clark – Carolyn A. Clark (fl. 1979) Cadet – (1937–1987) Cady – Leonard Isaacs Cady (born 1933) Caflisch – Jakob Friedrich Caflisch (1817–1882) C.Agardh – Carl Adolph Agardh (1785–1859) C.A.Gardner – Charles Austin Gardner (1896–1970) Cajander – Aimo Cajander (1879–1943) Calder – James Alexander Calder (1915–1990) Calderón – Graciela Calderón (1931–2022) Calest. – Vittorio Calestani (1882–1949) Caley – George Caley (1770–1829) Callm. – Martin Wilhelm Callmander (born 1975) Calonge – (born 1938) Calzada – Juan Ismael Calzada (fl. 1997) Camarda – Ignazio Camarda (born 1946) Cambage – Richard Hind Cambage (1859–1928) Cambess. – Jacques Cambessèdes (1799–1863) Cameron – Alexander Kenneth Cameron (born 1908) C.A.Mey. – Carl Anton von Meyer (1795–1855) Caminhoá – (1835–1896) Camp – Wendell Holmes Camp (1904–1963) Campacci – Marcos Antonio Campacci (born 1948) Campb. – Douglas Houghton Campbell (1859–1953) Campd. – (also François Campderá) (1793–1862) Camper – Petrus Camper (1722–1789) Campb.-Young – Gael Jean Campbell-Young (born 1973) Camus – Giulio (Jules) Camus (1847–1917) Canby – William Marriott Canby (1831–1904) Canne-Hill. – Judith Marie Canne-Hilliker (also Judith Marie Canne) (1943–2013) (also Canne) Cannon – John Francis Michael Cannon (1930–2008) Cantley – Nathaniel Cantley (died 1888) Cantor – Theodore Cantor (1809–1854) C.A.Paris – (born 1962) Capuron – René Paul Raymond Capuron (1921–1971) Carbonó – Eduino Carbonó
https://en.wikipedia.org/wiki/Current%20differencing%20transconductance%20amplifier
The current differencing transconductance amplifier (CDTA) is a relatively new active circuit element. Properties The CDTA is free from parasitic input capacitances and can operate in a wide frequency range due to current-mode operation. Some voltage- and current-mode applications using this element have already been reported in the literature, particularly in the area of frequency filtering: general higher-order filters, biquad circuits, all-pass sections, gyrators, simulation of grounded and floating inductances, and LCR ladder structures. Other studies propose CDTA-based high-frequency oscillators. Nonlinear CDTA applications are also expected, particularly precise rectifiers, current-mode Schmitt triggers for measuring purposes and signal generation, current-mode multipliers, etc. Basic operation The CDTA element, with its schematic symbol in Fig 1, has a pair of low-impedance current inputs p and n and an auxiliary terminal z, whose outgoing current is the difference of the input currents. The output terminal currents are equal in magnitude but flow in opposite directions, and their magnitudes are given by the product of the transconductance (gm) and the voltage at the z terminal. Therefore, this active element can be characterized by the following equations: Vp = Vn = 0, Iz = Ip − In, Ix+ = gm·Vz, Ix− = −gm·Vz, where Vz = Iz·Zz and Zz is the external impedance connected to the z terminal of the CDTA. The CDTA can be thought of as a combination of a current differencing unit followed by a dual-output operational transconductance amplifier, DO-OTA. Ideally, the OTA is assumed to be an ideal voltage-controlled current source and can be described by Ix = gm(V+ − V−), where Ix is the output current, and V+ and V− denote the non-inverting and inverting input voltages of the OTA, respectively. Note that gm is a function of the bias current. When this element is used in a CDTA, one of its input terminals is grounded (e.g., V− = 0). With dual output availability, the condition Ix+ = −Ix− is assumed.
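The idealized terminal relations can be sketched as a small numeric model. This is an illustration only: parasitics are ignored, the z-terminal load is taken as a purely resistive impedance, and the component values are hypothetical.

```python
def cdta(i_p, i_n, g_m, z_z):
    """Ideal CDTA terminal relations (sketch; parasitics ignored).

    The p and n inputs are low-impedance virtual grounds (Vp = Vn = 0).
    i_p, i_n -- input currents [A]
    g_m      -- transconductance [S]
    z_z      -- external (resistive) impedance at the z terminal [ohm]
    """
    i_z = i_p - i_n        # current differencing: Iz = Ip - In
    v_z = i_z * z_z        # voltage developed across the external impedance
    i_x_pos = g_m * v_z    # dual outputs: equal in magnitude,
    i_x_neg = -g_m * v_z   # flowing in opposite directions
    return i_z, v_z, i_x_pos, i_x_neg

# 2 mA into p, 0.5 mA into n, gm = 1 mS, 1 kohm at z:
i_z, v_z, i_xp, i_xn = cdta(2e-3, 0.5e-3, g_m=1e-3, z_z=1e3)
# i_z = 1.5 mA, v_z = 1.5 V, i_x+ = +1.5 mA, i_x- = -1.5 mA
```

The example makes the two-stage structure visible: the current-differencing unit produces Iz, and the OTA stage converts the resulting z-terminal voltage into the complementary output currents.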
https://en.wikipedia.org/wiki/Ion%20Barbu
Ion Barbu (pen name of Dan Barbilian; 18 March 1895 – 11 August 1961) was a Romanian mathematician and poet. His name is associated with the Mathematics Subject Classification number 51C05, a posthumous recognition reserved for pioneers of investigation in an area of mathematical inquiry. Early life Born in Câmpulung-Muscel, Argeș County, he was the son of Constantin Barbilian and Smaranda (née Șoiculescu). He attended elementary school in Câmpulung, Dămienești, and Stâlpeni, and for secondary studies he went to the Ion Brătianu High School in Pitești, the Dinicu Golescu High School in Câmpulung, and finally the Gheorghe Lazăr High School and the Mihai Viteazul High School in Bucharest. During that time, he discovered that he had a talent for mathematics, and started publishing in Gazeta Matematică; it was also then that he discovered his passion for poetry. Barbu was known as "one of the greatest Romanian poets of the twentieth century and perhaps the greatest of all" according to Romanian literary critic Alexandru Ciorănescu. As a poet, he is known for his volume Joc secund ("Mirrored Play"). He was a student at the University of Bucharest when World War I caused his studies to be interrupted by military service. He completed his degree in 1921. He then went to the University of Göttingen to study number theory with Edmund Landau for two years. Returning to Bucharest, he studied with Gheorghe Țițeica, completing his thesis, Canonical representation of the addition of hyperelliptic functions, in 1929. Achievements in mathematics Apollonian metric In 1934, Barbilian published his article describing the metrization of a region K, the interior of a simple closed curve J. Let xy denote the Euclidean distance from x to y. Barbilian's function for the distance from a to b in K is d(a, b) = ln(max{pa/pb : p on J} / min{pa/pb : p on J}). At the University of Missouri in 1938 Leonard Blumenthal wrote Distance Geometry. A Study of the Development of Abstract Metrics, where he used the term "Barbilian spaces" f
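Taking the boundary max/min ratio formula above at face value (the exact form is a reconstruction of the stripped equation and should be treated as an assumption), Barbilian's distance can be approximated numerically for the unit disk by sampling the boundary circle:

```python
import math

def barbilian_distance(a, b, n=20000):
    """Approximate Barbilian's distance between points a and b inside
    the unit disk, with J the unit circle sampled at n points."""
    ratios = []
    for k in range(n):
        t = 2 * math.pi * k / n
        p = (math.cos(t), math.sin(t))
        # pa, pb: Euclidean distances from the boundary point p to a and b
        ratios.append(math.dist(p, a) / math.dist(p, b))
    return math.log(max(ratios) / min(ratios))

# For a = (0, 0) and b = (0.5, 0) the extreme ratios occur at the boundary
# points (1, 0) and (-1, 0), giving d = ln(2 / (2/3)) = ln 3.
print(barbilian_distance((0.0, 0.0), (0.5, 0.0)))  # ~1.0986
```

The sampled distance is symmetric in a and b, as a metric must be, because swapping the points inverts every boundary ratio.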
https://en.wikipedia.org/wiki/Prokhorov%27s%20theorem
In measure theory, Prokhorov's theorem relates tightness of measures to relative compactness (and hence weak convergence) in the space of probability measures. It is credited to the Soviet mathematician Yuri Vasilyevich Prokhorov, who considered probability measures on complete separable metric spaces. The term "Prokhorov's theorem" is also applied to later generalizations to either the direct or the inverse statements. Statement Let (S, ρ) be a separable metric space. Let P(S) denote the collection of all probability measures defined on S (with its Borel σ-algebra). Theorem. A collection K ⊂ P(S) of probability measures is tight if and only if the closure of K is sequentially compact in the space P(S) equipped with the topology of weak convergence. The space P(S) with the topology of weak convergence is metrizable. Suppose that in addition, (S, ρ) is a complete metric space (so that S is a Polish space). There is a complete metric d₀ on P(S) equivalent to the topology of weak convergence; moreover, K ⊂ P(S) is tight if and only if the closure of K in (P(S), d₀) is compact. Corollaries For Euclidean spaces we have that: If (μₙ) is a tight sequence in P(Rᵐ) (the collection of probability measures on m-dimensional Euclidean space), then there exist a subsequence (μₙₖ) and a probability measure μ such that μₙₖ converges weakly to μ. If (μₙ) is a tight sequence in P(Rᵐ) such that every weakly convergent subsequence has the same limit μ, then the sequence (μₙ) converges weakly to μ. Extension Prokhorov's theorem can be extended to consider complex measures or finite signed measures. Theorem: Suppose that (S, ρ) is a complete separable metric space and Π is a family of Borel complex measures on S. The following statements are equivalent: Π is sequentially precompact; that is, every sequence in Π has a weakly convergent subsequence. Π is tight and uniformly bounded in total variation norm. Comments Since Prokhorov's theorem expresses tightness in terms of compactness, the Arzelà–Ascoli theorem is often used to substitute for compactness: in function sp
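As a concrete illustration of the weak convergence appearing in the corollaries (an example chosen here, not taken from the source): the normal distributions N(0, 1/n) form a tight sequence on the real line whose weak limit is the point mass at 0, which can be seen numerically from their CDFs at continuity points t ≠ 0 of the limit:

```python
import math

def normal_cdf(t, sigma):
    """CDF of the normal distribution N(0, sigma^2) evaluated at t."""
    return 0.5 * (1.0 + math.erf(t / (sigma * math.sqrt(2.0))))

# mu_n = N(0, 1/n) has sigma = 1/sqrt(n).  Weak convergence to the point
# mass at 0 means the CDF tends to 0 for t < 0 and to 1 for t > 0.
for n in (1, 10, 100, 10000):
    sigma = 1.0 / math.sqrt(n)
    print(n, normal_cdf(-0.1, sigma), normal_cdf(0.1, sigma))
```

At the discontinuity t = 0 every CDF in the sequence equals 1/2 while the limit CDF jumps to 1 there, which is why weak convergence only requires convergence at continuity points.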
https://en.wikipedia.org/wiki/Mobile%20computer-supported%20collaborative%20learning
Mobile computer-supported collaborative learning may have different meanings depending on the context in which it is applied. Mobile CSCL includes any in-class and out-of-class use of handheld mobile devices such as cell phones, smartphones, and personal digital assistants (PDAs) to enable collaborative learning. Overview The adoption of mobile devices as tools for teaching and learning is referred to as M-Learning. M-Learning is a rapidly emerging educational technology trend. The New Media Consortium has listed adoption of mobiles for teaching and learning on a "One Year or Less" Adoption Horizon. M-Learning research comprises a range of mobile devices and teaching and learning applications. However, the research available for collaborative applications that involve mobile devices is limited. Examples of collaborative mobile learning applications can be found in early adoptions of PDA technology and in recent location-based, mobile collaborative games. History Wireless-enabled handheld devices have been used since at least 2004 to facilitate collaborative learning. Devices such as PDAs and PocketPCs traditionally lack cellular connectivity, but are capable of wireless connectivity. This connectivity enables collaborative learning through software-based decision-making tools and shared display of learning material. Elementary school learners Wireless interconnected handhelds have been used to foster collaborative construction of words among elementary school students. Students in a first grade classroom in Chile were organized into groups and asked to construct words from syllables. Each student was issued a handheld which identified their group and presented one syllable. Students had to read the syllable, communicate with the rest of their group, and decide the appropriate syllable sequence required for word formation. The mobile system employed incorporated a group-based answer approval system that allowed students
https://en.wikipedia.org/wiki/Laser-heated%20pedestal%20growth
Laser-heated pedestal growth (LHPG) or laser floating zone (LFZ) is a crystal growth technique. A narrow region of a crystal is melted with a powerful CO2 or YAG laser. The laser, and hence the floating zone, is moved along the crystal. The molten region melts impure solid at its forward edge and leaves a wake of purer material solidified behind it. This technique for growing crystals from the melt (liquid/solid phase transition) is used in materials research. Advantages The main advantages of this technique are the high pulling rates (60 times greater than the conventional Czochralski technique) and the possibility of growing materials with very high melting points. In addition, LHPG is a crucible-free technique, which allows single crystals to be grown with high purity and low stress. The geometric shape of the crystals (the technique can produce small diameters) and the low production cost make the single-crystal fibers (SCF) produced by LHPG suitable substitutes for bulk crystals in many devices, especially those that use high-melting-point materials. However, single-crystal fibers must have equal or superior optical and structural qualities compared to bulk crystals to substitute for them in technological devices. This can be achieved by carefully controlling the growth conditions. Optical elements Until 1980, laser-heated crystal growth used only two laser beams focused over the source material. This condition generated a high radial thermal gradient in the molten zone, making the process unstable. Increasing the number of beams to four did not solve the problem, although it improved the growth process. An improvement to the laser-heated crystal growth technique was made by Fejer et al., who incorporated a special optical component known as a reflaxicon, consisting of an inner cone surrounded by a larger coaxial cone section, both with reflecting surfaces. This optical element converts the cylindrical laser beam into a larger-diameter hollow cylinder s
https://en.wikipedia.org/wiki/Metachronal%20rhythm
A metachronal rhythm or metachronal wave refers to wavy movements produced by the sequential (as opposed to synchronized) action of structures such as cilia, segments of worms, or legs. These movements produce the appearance of a travelling wave. A Mexican wave is a large-scale example of a metachronal wave. This pattern is found widely in nature, for example on the cilia of many aquatic organisms such as ctenophores, molluscs and ciliates, as well as on the epithelial surfaces of many body organs. Individual cilia, when part of a metachronal wave being used for protist locomotion, individually beat in a pattern similar to the planar stroke of a flagellum. The difference is that the recovery stroke is at 90 degrees to the power stroke, so that the cilia avoid hitting each other. Metachronal rhythms may be seen in the coordinated movements of the legs of millipedes and other multi-legged land invertebrates, as well as in the coordinated movements of social insects. Such metachronal motion has been shown to enhance fluid transport properties in natural cilia. Metachronal motion has also been replicated in synthetic microfluidic systems using magnetic filaments. See also Beta movement Phi phenomenon
https://en.wikipedia.org/wiki/FEZ-like%20protein
In molecular biology, the FEZ-like protein family is a family of eukaryotic proteins thought to be involved in axonal outgrowth and fasciculation. The N-terminal regions of these sequences are less conserved than the C-terminal regions, and are highly acidic. The Caenorhabditis elegans homologue, UNC-76, may play structural and signalling roles in the control of axonal extension and adhesion (particularly in the presence of adjacent neuronal cells) and these roles have also been postulated for other FEZ family proteins. Certain homologues have been definitively found to interact with the N-terminal variable region (V1) of PKC-zeta, and this interaction causes cytoplasmic translocation of the FEZ family protein in mammalian neuronal cells. The C-terminal region probably participates in the association with the regulatory domain of PKC-zeta. The members of this family are predicted to form coiled-coil structures which may interact with members of the RhoA family of signalling proteins, but are not thought to contain other characteristic protein motifs. Certain members of this family are expressed almost exclusively in the brain, whereas others (such as FEZ2) are expressed in other tissues, and are thought to perform similar but unknown functions in these tissues.
https://en.wikipedia.org/wiki/Geodat
Geodat was a commercial project, begun in 1980 and completed by 1991, that provided digital geographic mapping data for commercial users at scales equal to or greater than 1:1,000,000. The term "Geodat" was derived from "GEOgraphic DATa". Geodat data was primarily "medium scale", a nominal 1:100,000, but ranged from 1:50,000 to 1:250,000. The cartographic data was vector-based digitisation of coastline, hydrography, internal and international political boundaries, primary transportation routes and city locations. The data was intended to be used on its own to produce quick, cheap, consistent maps, initially for oil exploration firms. Harry Wassall, the founder of Petroconsultants SA, a Geneva-based energy information services firm, began the project in 1979 by hiring a researcher from the Harvard Laboratory for Computer Graphics and Spatial Analysis, Michael Mainelli, to explore how to automate Petroconsultants' extensive paper map series. Mainelli became Project Director in 1981. Petroconsultants concluded that a cooperative project among the oil firms would best address the high degree of overlap in their computer mapping interests. Petroconsultants SA assessed client interest at a meeting in Geneva on 20–21 August 1981 with attendees from Amoco, BP, Cities Service, Deminex, Elf Aquitaine, Exxon, Gulf and Shell. The need for computerised mapping data was high and the response positive enough to form an advisory committee with paid sponsorship. The sponsors commissioned Petroconsultants to produce four sample digitised maps of the Ivory Coast. The Ivorian pilot project resulted in four 1:200,000 maps with 800 features and 40,000 data points. The pilot established Common Geographic Format (CGF) records, for a time the industry standard for computer cartographic information exchange. These digitised map files, and their attendant file structures, feature codes, segment records, map records, annotation records and set records were reviewed at a meeting in Dublin
https://en.wikipedia.org/wiki/Principal%20Triangulation%20of%20Great%20Britain
The Principal Triangulation of Britain was the first high-precision triangulation survey of the whole of Great Britain and Ireland, carried out between 1791 and 1853 under the auspices of the Board of Ordnance. The aim of the survey was to establish precise geographical coordinates of almost 300 significant landmarks which could be used as the fixed points of local topographic surveys from which maps could be drawn. In addition there was a purely scientific aim in providing precise data for geodetic calculations such as the determination of the length of meridian arcs and the figure of the Earth. Such a survey had been proposed by William Roy (1726–1790) on his completion of the Anglo-French Survey but it was only after his death that the Board of Ordnance initiated the trigonometric survey, motivated by military considerations in a time of a threatened French invasion. Most of the work was carried out under the direction of Isaac Dalby, William Mudge and Thomas Frederick Colby, but the final synthesis and report (1858) was the work of Alexander Ross Clarke. The survey stood the test of time for a century, until the Retriangulation of Great Britain between 1935 and 1962. History In the aftermath of the Jacobite rising of 1745 it was recognised that there was a need for an accurate map of the Scottish Highlands and the necessary survey was initiated in 1747 by Lieutenant-Colonel David Watson, a Deputy Quartermaster-General of the Board of Ordnance. Watson employed William Roy as a civilian assistant to carry out the bulk of the work. Subsequently, Roy, having enlisted in the army and having become a very competent surveyor, proposed (1763) a national survey which would be a plan for defence at a time when French invasions were threatened. The proposal was rejected on grounds of expense. Roy continued to lobby for a survey and his ambitions were realised to a certain extent by an unexpected development. In 1783 the French Academy of Sciences claimed that the l
https://en.wikipedia.org/wiki/Alonzo%20Church
Alonzo Church (June 14, 1903 – August 11, 1995) was an American mathematician, computer scientist, logician, and philosopher who made major contributions to mathematical logic and the foundations of theoretical computer science. He is best known for the lambda calculus, the Church–Turing thesis, proving the unsolvability of the Entscheidungsproblem ("decision problem"), the Frege–Church ontology, and the Church–Rosser theorem. He also worked on philosophy of language (see e.g. Church 1970). Alongside his doctoral student Alan Turing, Church is considered one of the founders of computer science. Life Alonzo Church was born on June 14, 1903, in Washington, D.C., where his father, Samuel Robbins Church, was a justice of the peace and the judge of the Municipal Court for the District of Columbia. He was the grandson of Alonzo Webster Church (1829–1909), United States Senate Librarian from 1881 to 1901, and great-grandson of Alonzo Church, a Professor of Mathematics and Astronomy and 6th President of the University of Georgia. As a young boy, Church was partially blinded by an air gun accident. The family later moved to Virginia after his father lost his position because of failing eyesight. With help from his uncle, also named Alonzo Church, the son attended the private Ridgefield School for Boys in Ridgefield, Connecticut. After graduating from Ridgefield in 1920, Church attended Princeton University, where he was an exceptional student. He published his first paper, on Lorentz transformations, in 1924 and graduated the same year with a degree in mathematics. He stayed at Princeton for graduate work, earning a Ph.D. in mathematics in three years under Oswald Veblen. He married Mary Julia Kuczinski in 1925. The couple had three children: Alonzo Jr. (1929), Mary Ann (1933) and Mildred (1938). After receiving his Ph.D., he taught briefly as an instructor at the University of Chicago. He received a two-year National Research Fellowship that enabled him t
https://en.wikipedia.org/wiki/Boundary%20vector%20field
The boundary vector field (BVF) is an external force for parametric active contours (i.e. snakes). In the fields of computer vision and image processing, parametric active contours are widely used for segmentation and object extraction. The active contours move progressively towards their targets based on the external forces. There are a number of shortcomings in using the traditional external forces, including the capture range problem, the concave object extraction problem, and high computational requirements. The BVF is generated by an interpolation scheme which reduces the computational requirement significantly and, at the same time, improves the capture range and concave object extraction capability. The BVF has also been tested in moving object tracking and shown to provide a fast detection method for real-time video applications.
https://en.wikipedia.org/wiki/Cambium
A cambium (plural cambia or cambiums), in plants, is a tissue layer that provides partially undifferentiated cells for plant growth. It is found in the area between the xylem and phloem. A cambium can also be defined as a cellular plant tissue from which phloem, xylem, or cork grows by division, resulting (in woody plants) in secondary thickening. It forms parallel rows of cells, which result in secondary tissues. There are several distinct kinds of cambium found in plant stems and roots: Cork cambium, a tissue found in many vascular plants as part of the periderm. Unifacial cambium, which ultimately produces cells to the interior of its cylinder. Vascular cambium, a lateral meristem in the vascular tissue of plants. Uses The cambium of many species of woody plants is edible; however, due to its vital role in the homeostasis and growth of woody plants, removing enough cambium at once may result in the death of the plant. The cambium can generally be eaten raw or cooked, and can be ground into flour for use in baking.
https://en.wikipedia.org/wiki/IEEE/ACM%20Transactions%20on%20Computational%20Biology%20and%20Bioinformatics
IEEE/ACM Transactions on Computational Biology and Bioinformatics (abbreviated TCBB) is a bimonthly peer-reviewed scientific journal. It is a joint publication of the IEEE Computer Society, Association for Computing Machinery (ACM), IEEE Computational Intelligence Society (CIS), and the IEEE Engineering in Medicine and Biology Society. It is published in cooperation with the IEEE Control Systems Society. The journal covers research related to: algorithmic, mathematical, statistical, and computational methods used in bioinformatics and computational biology development and testing of effective computer programs in bioinformatics development and optimization of biological databases biological results that are obtained from the use of these methods, programs, and databases the field of systems biology
https://en.wikipedia.org/wiki/Chameleon%20ranching
Chameleon ranching is the practice of releasing chameleons into an area with the intent of establishing them and later collecting them to sell for a profit. This type of ranching has existed since the 1970s, but became more widespread in the early 2000s. It is an example of people intentionally releasing a foreign species. Background and history Chameleons have long been a staple of the wildlife trade, with the United States in particular accounting for 69% of chameleon imports from 1977 to 2001. As importing chameleons from their native countries can be costly, some people have released chameleons into the wild on purpose, intending to let them reproduce and then recapturing them. The first case of this was in 1972, when Jackson's chameleons were released by a pet shop owner on Kāneʻohe in Hawaii. By 1981, when Kenya stopped the exportation of its chameleons, all Jackson's chameleons in the pet trade were sourced from Hawaii. As Hawaii made it illegal to transport or export chameleons in recent years, chameleon ranching shifted towards the state of Florida. Several populations of chameleons were discovered there in the early 2000s and have proved to be reproducing and spreading to new areas. As Florida law states that no non-native species may be released into the wild, most chameleons that are found or captured are sold into the pet trade. Locations Hawaii Hawaii has one established species of chameleon, the Jackson's chameleon, which was introduced when a pet store owner released a shipment of chameleons on Kāneʻohe in 1972. The chameleons in the shipment were skinny and dehydrated, and were released into the owner's backyard so that they could revitalize themselves; instead they escaped beyond his property into the adjacent wilderness. From there, the chameleons managed to reach Maui by the 1980s, where they thrived in a warm and humid climate similar to their natural habitat in Eastern Africa. While they prefer high e
https://en.wikipedia.org/wiki/Listeriolysin%20O
Listeriolysin O (LLO) is a hemolysin produced by the bacterium Listeria monocytogenes, the pathogen responsible for causing listeriosis. The toxin may be considered a virulence factor, since it is crucial for the virulence of L. monocytogenes. Biochemistry Listeriolysin O is a non-enzymatic, cytolytic, thiol-activated, cholesterol-dependent cytolysin; hence, it is activated by reducing agents and inhibited by oxidizing agents. However, LLO differs from other thiol-activated toxins in that its cytolytic activity is maximized at a pH of 5.5. Because its activity peaks at pH 5.5, LLO is selectively activated within the acidic phagosomes (average pH ~5.9) of cells that have phagocytosed L. monocytogenes. After LLO lyses the phagosome, the bacterium escapes into the cytosol, where it can grow intracellularly. Upon release from the phagosome, the toxin has little activity in the more basic cytosol. Furthermore, LLO permits L. monocytogenes to escape from phagosomes into the cytosol without damaging the plasma membrane of the infected cell. This allows the bacteria to live intracellularly, where they are protected from extracellular immune system factors such as the complement system and antibodies. LLO also causes dephosphorylation of histone H3 and deacetylation of histone H4 during the early phases of infection, prior to entry of L. monocytogenes into the host cell. The pore-forming activity is not involved in causing the histone modifications. The alterations of the histones cause the downregulation of genes encoding proteins involved in the inflammatory response. Thus, LLO may be important in subverting the host immune response to L. monocytogenes. A PEST-like sequence is present in LLO and is considered essential for virulence, since mutants lacking the sequence lysed the host cell. However, contrary to PEST's supposed role in protein degradation, evidence suggests that the PEST-like sequence may regulate LLO production in the cytosol rather than increase
https://en.wikipedia.org/wiki/37%20%28number%29
37 (thirty-seven) is the natural number following 36 and preceding 38. In mathematics 37 is the 12th prime number, and the 3rd isolated prime without a twin prime. 37 is the first irregular prime. The sum of the squares of the first 37 primes is divisible by 37. Every positive integer is the sum of at most 37 fifth powers (see Waring's problem). It is the third cuban prime following 7 and 19. 37 is the fifth Padovan prime, after the first four prime numbers 2, 3, 5, and 7. It is also the fifth lucky prime, after 3, 7, 13, and 31. 37 is the third star number and the fourth centered hexagonal number. There are exactly 37 complex reflection groups. The smallest magic square using only primes and 1 contains 37 as the value of its central cell; its magic constant is 37 × 3 = 111, where 3 and 37 are the first and third base-ten unique primes (the second such prime is 11). In decimal 37 is a permutable prime with 73, which is the 21st prime number. By extension, the mirroring of their digits and prime indexes makes 73 the only Sheldon prime. In moonshine theory, whereas all p ⩾ 73 are non-supersingular primes, the smallest such prime is 37. 37 requires twenty-one steps to return to 1 in the Collatz problem, as do the adjacent numbers 36 and 38. The two closest numbers to cycle through the elementary {16, 8, 4, 2, 1} Collatz pathway are 5 and 32, whose sum is 37. The trajectories for 3 and 21 both require seven steps to reach 1. The first two integers that return 0 for the Mertens function (2 and 39) have a difference of 37. Their product (2 × 39) is the twelfth triangular number 78. Their sum is 41, which is the constant term in Euler's lucky numbers that yield prime numbers of the form k² − k + 41, the largest of which (1601) is a difference of 78 from the second-largest prime (1523) generated by this quadratic polynomial. In decimal For a three-digit number that is divisible by 37, a rule of divisibility is that another divisible by 37 can be generated by t
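Several of the arithmetic claims above are easy to verify directly; a short sketch (the helper functions are illustrative, not from the source):

```python
def first_primes(count):
    """Return the first `count` primes by simple trial division."""
    found = []
    n = 2
    while len(found) < count:
        if all(n % p != 0 for p in found):
            found.append(n)
        n += 1
    return found

def collatz_steps(n):
    """Number of Collatz steps (halve if even, 3n+1 if odd) to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Sum of the squares of the first 37 primes, checked for divisibility by 37:
print(sum(p * p for p in first_primes(37)) % 37)  # 0

# 36, 37 and 38 all take the same number of steps to reach 1:
print([collatz_steps(n) for n in (36, 37, 38)])  # [21, 21, 21]
```

The same `collatz_steps` helper confirms that the trajectories for 3 and 21 each take seven steps.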
https://en.wikipedia.org/wiki/Uterovaginal%20plexus%20%28nerves%29
The uterovaginal plexus is a division of the inferior hypogastric plexus. In older texts, it is referred to as two structures, the "vaginal plexus" and "uterine plexus". The vaginal plexus arises from the lower part of the pelvic plexus. It is distributed to the walls of the vagina, to the erectile tissue of the vestibule, and to the cavernous nerves of the clitoris. The nerves composing this plexus contain, like those of the vesical plexus, a large proportion of spinal nerve fibers. The uterine plexus accompanies the uterine artery to the side of the uterus, between the layers of the broad ligament; it communicates with the ovarian plexus.
https://en.wikipedia.org/wiki/Stratification%20%28vegetation%29
Stratification in the field of ecology refers to the vertical layering of a habitat: the arrangement of vegetation in layers. It classifies the layers (sing. stratum, pl. strata) of vegetation largely according to the different heights to which their plants grow. The individual layers are inhabited by different animal and plant communities (stratozones). Vertical structure in terrestrial plant habitats The following layers are generally distinguished: forest floor (root and moss layers), herb, shrub, understory and canopy layers. These vegetation layers are primarily determined by the height of their individual plants; the different elements may, however, have a range of heights. The actual layer is characterised by the height range in which the vast majority of photosynthetic organs (predominantly leaves) are found. Taller species will have part of their shoot system in the underlying layers. In addition to the above-ground stratification there is also a "root layer". In the broadest sense, the layering of diaspores in the soil may be counted as part of the vertical structure. The plants of a layer, especially with regard to their way of life and correspondingly similar root distribution, interact closely and compete strongly for space, light, water and nutrients. The stratification of a plant community is the result of long selection and adaptation processes. Through the formation of different layers, a given habitat is better utilized. Strongly vertically stratified habitats are very stable ecosystems. The opposite is not true, however: several less stratified vegetation types, such as reed beds, can be very stable. The layers of a habitat are closely interrelated and at least partly interdependent. This is often the case as a result of the changes in microclimate of the top layers, the light factor being of particular importance. Besides the superposition of different plants growing on the same soil, there is a lateral impact of the higher layers on adjacent pl
https://en.wikipedia.org/wiki/Integral%20probability%20metric
In probability theory, integral probability metrics are types of distance functions between probability distributions, defined by how well a class of functions can distinguish the two distributions. Many important statistical distances are integral probability metrics, including the Wasserstein-1 distance and the total variation distance. In addition to theoretical importance, integral probability metrics are widely used in areas of statistics and machine learning. The name "integral probability metric" was given by German statistician Alfred Müller; the distances had also previously been called "metrics with a ζ-structure." Definition Integral probability metrics (IPMs) are distances on the space of distributions over a set X, defined by a class F of real-valued functions on X as D_F(P, Q) = sup over f in F of |E_P f − E_Q f|; here the notation E_P f refers to the expectation of f under the distribution P. The absolute value in the definition is unnecessary, and often omitted, for the usual case where for every f in F its negation −f is also in F. The functions being optimized over are sometimes called "critic" functions; if a particular f achieves the supremum, it is often termed a "witness function" (it "witnesses" the difference in the distributions). These functions try to have large values for samples from P and small (likely negative) values for samples from Q; this can be thought of as a weaker version of classifiers, and indeed IPMs can be interpreted as the optimal risk of a particular classifier. The choice of F determines the particular distance; more than one F can generate the same distance. For any choice of F, D_F satisfies all the definitions of a metric except that we may have D_F(P, Q) = 0 for some P ≠ Q; this is variously termed a "pseudometric" or a "semimetric" depending on the community. For instance, using the class which only contains the zero function, D_F is identically zero. D_F is a metric if and only if F separates points on the space of probability distributions, i.e. for any P ≠ Q there is some f in F such that E_P f ≠ E_Q f; most, but no
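For distributions on a finite set, the IPM generated by the class of functions bounded by 1 has a closed form: the optimal critic takes the value +1 wherever P puts more mass and −1 wherever Q does, so the supremum equals the sum of |P(x) − Q(x)|, i.e. twice the total variation distance. A small sketch (the discrete setting and example values are illustrative assumptions):

```python
def ipm_bounded_by_one(p, q):
    """IPM over F = {f : |f(x)| <= 1} for discrete distributions
    given as dicts mapping outcomes to probabilities.

    The witness function f(x) = sign(p(x) - q(x)) attains the supremum,
    so the distance equals sum over x of |p(x) - q(x)| = 2 * TV(p, q).
    """
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

p = {"a": 0.5, "b": 0.5}
q = {"a": 0.8, "c": 0.2}
print(ipm_bounded_by_one(p, q))  # |0.5-0.8| + 0.5 + 0.2 = 1.0 (up to rounding)
```

Identical distributions give distance 0, and distributions with disjoint supports give the maximum value 2, matching the pseudometric properties described above.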
https://en.wikipedia.org/wiki/Perspective-taking
Perspective-taking is the act of perceiving a situation or understanding a concept from an alternative point of view, such as that of another individual. A vast amount of scientific literature suggests that perspective-taking is crucial to human development and that it may lead to a variety of beneficial outcomes. Perspective-taking may also be possible in some non-human animals. Both theory and research have suggested ages at which children begin to perspective-take and how that ability develops over time. Research suggests that certain people who have attention deficit hyperactivity disorder with comorbid conduct problems (such as oppositional defiant disorder) or autism may have a reduced ability to engage in perspective-taking. Studies to assess the brain regions involved in perspective-taking suggest that several regions may be involved, including the prefrontal cortex and the precuneus. Perspective-taking is related to other theories and concepts including theory of mind and empathy. Definition Perspective-taking takes place when an individual views a situation from another's point of view. Perspective-taking has been defined along two dimensions: perceptual and conceptual. Perceptual perspective-taking is the ability to understand how another person experiences things through their senses (e.g., visually or auditorily). Most of the literature devoted to perceptual perspective-taking focuses on visual perspective-taking: the ability to understand the way another person sees things in physical space. Conceptual perspective-taking is the ability to comprehend and take on the viewpoint of another person's psychological experience (e.g., thoughts, feelings, and attitudes). Related terms Theory of mind Theory of mind is the awareness that people have individual psychological states that differ from one another. Within the perspective-taking literature, the terms perspective-taking and theory of mind are sometimes used interchangeably; some studies use theory of mind
https://en.wikipedia.org/wiki/Xfire
Xfire was a proprietary freeware instant messaging service for gamers that also served as a game server browser with various other features. It was available for Microsoft Windows. Xfire was originally developed by Ultimate Arena, based in Menlo Park, California. As of January 3, 2014, it had over 24 million registered users. Xfire's partnership with Livestream allowed users to broadcast live video streams of their current game to an audience. The Xfire website also maintained a "Top Ten" games list, ranking games by the number of hours Xfire users spent playing each game every day. World of Warcraft had been the most played game for many years, but was surpassed by League of Legends on June 20, 2011. Social.xfire.com was a community site for Xfire users, allowing them to upload screenshots, photos and videos and to make contacts. Xfire hosted events every month, which included debates, game tournaments, machinima contests, and chat sessions with Xfire or game developers. Xfire's web-based social media was discontinued on June 12, 2015, and the messaging function was shut down on June 27, 2015. The last of Xfire's services were shut down on April 30, 2016. History Xfire, Inc. was founded in 2002 by Dennis "Thresh" Fong, Mike Cassidy, Max Woon, and David Lawee. The company was formerly known as Ultimate Arena, but changed its name to Xfire when its desktop client Xfire became more popular and successful than its gaming website. The first version of the Xfire desktop client was code-named Scoville, which was first developed in 2003 by Garrett Blythe, Chris Kirmse, Mike Judge, and others. The service's ability to track game-play hours and quickly launch web games, unmatched by other services at the time, quickly gained it popularity. On April 25, 2006, Xfire was acquired by Viacom in a US$102 million deal. In September 2006, Sony was misinterpreted to have announced that Xfire would be used for the PlayStation 3. The confusion came when one PlayStation 3 game, Untold
https://en.wikipedia.org/wiki/Theoretical%20motivation%20for%20general%20relativity
A theoretical motivation for general relativity, including the motivation for the geodesic equation and the Einstein field equation, can be obtained from special relativity by examining the dynamics of particles in circular orbits about the Earth. A key advantage in examining circular orbits is that it is possible to know the solution of the Einstein field equation a priori. This provides a means to inform and verify the formalism. General relativity addresses two questions: How does the curvature of spacetime affect the motion of matter? How does the presence of matter affect the curvature of spacetime? The former question is answered with the geodesic equation. The second question is answered with the Einstein field equation. The geodesic equation and the field equation are related through a principle of least action. The motivation for the geodesic equation is provided in the section Geodesic equation for circular orbits. The motivation for the Einstein field equation is provided in the section Stress–energy tensor. Geodesic equation for circular orbits Kinetics of circular orbits For definiteness consider a circular Earth orbit (helical world line) of a particle. The particle travels with speed v. An observer on Earth sees that length is contracted in the frame of the particle. A measuring stick traveling with the particle appears shorter to the Earth observer. Therefore, the circumference of the orbit, which is in the direction of motion, appears longer than π times the diameter of the orbit. In special relativity the 4-proper-velocity of the particle in the inertial (non-accelerating) frame of the Earth is u = γ(c, v), where c is the speed of light, v is the 3-velocity, and γ is 1/√(1 − v²/c²). The magnitude of the 4-velocity vector is always constant, η_{μν} u^μ u^ν = c², where we are using a Minkowski metric η = diag(1, −1, −1, −1). The magnitude of the 4-velocity is therefore a Lorentz scalar. The 4-acceleration in the Earth (non-accelerating) frame is a = du/dτ, where τ is c times the proper time interval measured in the fra
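The invariance claim above can be checked numerically. Assuming the standard special-relativistic definitions u = γ(c, v) with γ = 1/√(1 − v²/c²) and a Minkowski metric with signature (+, −, −, −) — the concrete speed chosen below is an arbitrary illustration, not a value from the text — a short sketch:

```python
import math

c = 1.0            # work in units where the speed of light is 1
v = 0.6            # orbital 3-speed (hypothetical value), as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor

# 4-velocity of the particle in the Earth frame: u^mu = gamma * (c, v, 0, 0)
u = [gamma * c, gamma * v, 0.0, 0.0]

# Minkowski metric, signature (+, -, -, -)
eta = [1.0, -1.0, -1.0, -1.0]

# Lorentz-invariant magnitude: eta_mu_nu u^mu u^nu should equal c^2 for any v < c
magnitude_sq = sum(g * ui * ui for g, ui in zip(eta, u))
print(magnitude_sq)  # 1.0 (= c^2), independent of v
```

Repeating the computation with any other sub-light speed leaves `magnitude_sq` at c², which is the sense in which the magnitude of the 4-velocity is a Lorentz scalar.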
https://en.wikipedia.org/wiki/Extinction%20cross
The extinction cross is an optical phenomenon that is seen when trying to extinguish a laser beam or non-planar white light using crossed polarizers. Ideally, crossed (90° rotated) polarizers block all light, since light which is polarized along the polarization axis of the first polarizer is perpendicular to the polarization axis of the second. When the beam is not perfectly collimated, however, a characteristic fringing pattern is produced. See also Polarization (waves) Further reading Mineralogy notes 6 See "6.3.5. Review of Uniaxial Optical Properties" Nikon MicroscopyU See Figure 1a Polarization (waves) Optical phenomena
https://en.wikipedia.org/wiki/Parsons%20code
The Parsons code, formally named the Parsons code for melodic contours, is a simple notation used to identify a piece of music through melodic motion – movements of the pitch up and down. Denys Parsons developed this system for his 1975 book The Directory of Tunes and Musical Themes. Representing a melody in this manner makes it easier to index or search for pieces, particularly when the notes' values are unknown. Parsons covered around 15,000 classical, popular and folk pieces in his dictionary. In the process he found out that *UU is the most popular opening contour, used in 23% of all the themes, something that applies to all the genres. The book was also published in Germany in 2002 and reissued by Piatkus in 2008 as the Directory of Classical Themes. An earlier method of classifying and indexing melody was devised by Harold Barlow and Sam Morgenstern in A Dictionary of Musical Themes (1950). The code The first note of a melody is denoted with an asterisk (*), although some Parsons code users omit the first note. All succeeding notes are denoted with one of three letters to indicate the relationship of its pitch to the previous note: * = first tone as reference, u = "up", for when the note is higher than the previous note, d = "down", for when the note is lower than the previous note, r = "repeat", for when the note has the same pitch as the previous note. Some examples "Twinkle Twinkle Little Star": * "Silent Night": * "Aura Lea" ("Love Me Tender"): * "White Christmas": * First verse in Madonna's "Like a Virgin": * See also List of music software
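The u/d/r encoding described above is mechanical enough to state as a small function. The sketch below is an illustration, not code from Parsons's book: the function name and the use of MIDI note numbers as input are choices made here, and any strictly ordered pitch representation would do.

```python
def parsons_code(pitches):
    """Return the Parsons code for a sequence of pitches.

    Pitches are MIDI note numbers (any comparable values work);
    the first note is always encoded as '*'.
    """
    code = "*"
    for prev, curr in zip(pitches, pitches[1:]):
        if curr > prev:
            code += "u"   # up: higher than the previous note
        elif curr < prev:
            code += "d"   # down: lower than the previous note
        else:
            code += "r"   # repeat: same pitch as the previous note
    return code

# "Twinkle Twinkle Little Star": C C G G A A G F F E E D D C
twinkle = [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60]
print(parsons_code(twinkle))  # *rururddrdrdrd
```

Because only the contour is kept, transposing the melody (shifting every number by the same amount) leaves the code unchanged, which is exactly what makes the notation useful for indexing tunes whose key or exact intervals are unknown.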
https://en.wikipedia.org/wiki/Maurandya%20scandens
Maurandya scandens, also known as trailing snapdragon and snapdragon vine, is a climbing herbaceous perennial native to Mexico, with snapdragon-like flowers and untoothed leaves. It is grown as an ornamental plant in many parts of the world, and has commonly escaped from cultivation to become naturalized. Other names for this plant include creeping snapdragon, vining snapdragon, creeping gloxinia and chickabiddy. Description The perennial plant grows up to 2–3 meters tall or long. The alternate, lanceolate to arrow-shaped leaves are entire, lobed or coarsely toothed, with the lobes and teeth often fine-pointed, and sit on petioles 8 to 42 millimeters long. The glabrous leaf blades are 11 to 62 millimeters long and 4 to 45 millimeters wide. The shoot axes often form adventitious roots. It has been confused with Lophospermum scandens, which has longer flowers and larger, toothed leaves. It resembles Maurandya barclayana, which has blue-violet flowers and hairy rather than hairless sepals. It is semi-deciduous in the colder areas. Flowers and reproduction The hermaphrodite, tubular flowers appear axillary and solitary, and come in many different colours including rose pink, violet, indigo blue or white, with a double perianth. The five-parted flowers feature a wide throat and sit on long, glabrous pedicels 30 to 85 millimeters long. The small, ovate-lanceolate calyx lobes are 10 to 15 millimeters long. They are 2 to 4 millimeters wide at the base and are glabrous to sparsely covered with glandular hairs. The corolla, slightly hairy on the outside, is two-lipped, with short, rounded to notched, spreading lobes. The four short, didynamous stamens are included within the corolla. The superior, two-chambered ovary is usually glabrous, and the glabrous, included, relatively short style is 13 to 16 millimeters long. It flowers profusely between spring and summer, and irregularly in the cool months. The asymmetrical, irregularly ovoid and many-seeded, cartilaginous seed capsules are 10 to 12 millimeters
https://en.wikipedia.org/wiki/International%20Plant%20Protection%20Convention
The International Plant Protection Convention (IPPC) is a 1951 multilateral treaty overseen by the United Nations Food and Agriculture Organization that aims to secure coordinated, effective action to prevent and to control the introduction and spread of pests of plants and plant products. The Convention extends beyond the protection of cultivated plants to the protection of natural flora and plant products. It also takes into consideration both direct and indirect damage by pests, so it includes weeds. The IPPC promulgates International Standards for Phytosanitary Measures (ISPMs). The Convention created a governing body consisting of each party, known as the Commission on Phytosanitary Measures, which oversees the implementation of the convention. As of August 2017, the convention has 183 parties, comprising 180 United Nations member states plus the Cook Islands, Niue, and the European Union. The convention is recognized by the World Trade Organization's (WTO) Agreement on the Application of Sanitary and Phytosanitary Measures (the SPS Agreement) as the only international standard-setting body for plant health. Goals While the IPPC's primary focus is on plants and plant products moving in international trade, the convention also covers research materials, biological control organisms, germplasm banks, containment facilities, food aid, emergency aid and anything else that can act as a vector for the spread of plant pests – for example, containers, packaging materials, soil, vehicles, vessels and machinery. The IPPC was created by member countries of the Food and Agriculture Organization (UN FAO). The IPPC places emphasis on three core areas: international standard setting, information exchange and capacity development for the implementation of the IPPC and associated international phytosanitary standards. The Secretariat of the IPPC is housed at FAO headquarters in Rome, Italy, and is responsible for the coordination of core activities under the IPPC work program
https://en.wikipedia.org/wiki/Z%C3%A9%20Povinho
Zé Povinho is a cartoon character of a Portuguese everyman, created in 1875 by Rafael Bordalo Pinheiro. He became first a symbol of the Portuguese working-class people, and eventually the unofficial personification of Portugal.
https://en.wikipedia.org/wiki/Cognitive%20ethology
Cognitive ethology is a branch of ethology concerned with the influence of conscious awareness and intention on the behaviour of an animal. Donald Griffin, a zoology professor in the United States, laid the foundations for research into the cognitive awareness of animals within their habitats. The fusion of cognitive science and classical ethology into cognitive ethology "emphasizes observing animals under more-or-less natural conditions, with the objective of understanding the evolution, adaptation (function), causation, and development of the species-specific behavioral repertoire" (Niko Tinbergen 1963). According to Jamieson & Bekoff (1993), "Tinbergen's four questions about the evolution, adaptation, causation and development of behavior can be applied to the cognitive and mental abilities of animals." Allen & Bekoff (1997, chapter 5) attempt to show how cognitive ethology can take on the central questions of cognitive science, taking as their starting point the four questions described by Barbara Von Eckardt in her 1993 book What is Cognitive Science?, generalizing the four questions and adding a fifth. Kingstone, Smilek & Eastwood (2008) suggested that cognitive ethology should include human behavior. They proposed that researchers should first study how people behave in their natural, real-world environments and then move to the lab. Anthropocentric claims for the ways non-human animals interact in their social and non-social worlds are often used to influence decisions on how the non-human animals can or should be used by humans. Relation to laboratory experimental psychology Traditionally, cognitive ethologists have questioned research methods that isolate animals in unnatural surroundings and present them with a limited set of artificial stimuli, arguing that such techniques favor the study of artificial issues that are not relevant to an understanding of the natural behavior of animals. However, many modern researchers favor a judicious combinat
https://en.wikipedia.org/wiki/National%20Animal%20Resource%20Facility%20for%20Biomedical%20Research
The National Animal Resource Facility for Biomedical Research is an Indian biomedical research facility and vivarium under the Indian Council of Medical Research. The new 33rd flagship institute of the ICMR was founded in 2015 at Genome Valley in Hyderabad, India. The center is a state-of-the-art animal house and animal sciences facility located near Turkapally, Shamirpet, spread over 102 acres of land. The institute proposes to breed specific-pathogen-free large and small animals such as mice, rats, hamsters, rabbits, guinea pigs, mini pigs, canines, swine, equines, sheep, and goats, as well as various species of non-human primates, such as rhesus, bonnet monkey, cynomolgus monkey, pig-tailed monkey, owl monkey and squirrel monkey among others, needed for research purposes. By the tenth year of its functioning, the institute proposes to be self-sustainable. The project was conceived as far back as 2001. It was delayed due to financial and technical reasons, but gained impetus in 2015. Background On 18 November 2015 the union Government of India approved a long-pending proposal envisaging a Rs 338.58 crore world-class facility for breeding beagle dogs, horses, and monkeys, besides other animals, on a large scale indigenously to meet the needs of the country's pharma firms for drug testing and clinical research. Subsequently, the National Center for Laboratory Animal Sciences at the National Institute of Nutrition, Hyderabad is being integrated to form the National Animal Resource Facility for Biomedical Research. History "The need for the NARF-BR has been consistently felt as the existing institutes like Central Drug Research Institute, Lucknow and National Institute of Nutrition, Hyderabad are working on small animals, mostly rodents. They cannot meet the demand and requirement of the biomedical sector, which has no option but to depend on other countries like Indonesia, Singapore and Malaysia for testing their products," said a senior official from the Ministry of Health and F
https://en.wikipedia.org/wiki/Bi-directional%20hypothesis%20of%20language%20and%20action
The bi-directional hypothesis of language and action proposes that the sensorimotor and language comprehension areas of the brain exert reciprocal influence over one another. This hypothesis argues that areas of the brain involved in movement and sensation, as well as movement itself, influence cognitive processes such as language comprehension. In addition, the reverse effect is argued, where it is proposed that language comprehension influences movement and sensation. Proponents of the bi-directional hypothesis of language and action conduct and interpret linguistic, cognitive, and movement studies within the framework of embodied cognition and embodied language processing. Embodied language developed from embodied cognition, and proposes that sensorimotor systems are not only involved in the comprehension of language, but that they are necessary for understanding the semantic meaning of words. Development of the bi-directional hypothesis The theory that sensory and motor processes are coupled to cognitive processes stems from action-oriented models of cognition. These theories, such as the embodied and situated cognitive theories, propose that cognitive processes are rooted in areas of the brain involved in movement planning and execution, as well as areas responsible for processing sensory input, termed sensorimotor areas or areas of action and perception. According to action-oriented models, higher cognitive processes evolved from sensorimotor brain regions, thereby necessitating sensorimotor areas for cognition and language comprehension. With this organization, it was then hypothesized that action and cognitive processes exert influence on one another in a bi-directional manner: action and perception influence language comprehension, and language comprehension influences sensorimotor processes. Although studied in a unidirectional manner for many years, the bi-directional hypothesis was first described and tested in detail by Aravena et al. These authors
https://en.wikipedia.org/wiki/Washington%20Redskins%20name%20opinion%20polls
During the years of increasing awareness of the Washington Redskins name controversy, public opinion polls were part of the discussion about whether Native Americans found the term redskin insulting. Other polls gauged how the general public viewed the controversy. Two national political polls, the first in 2004 and another in 2016, were particularly influential. When a respondent identified themselves as Native American, these polls asked, "The professional football team in Washington calls itself the Washington Redskins. As a Native American, do you find that name offensive or doesn't it bother you?". In both polls, 90% responded that they were not bothered, 9% that they were offended, and 1% gave no response. These polls were widely cited by teams, fans, and mainstream media as evidence that there was no need to change the name of the Washington football team or the names and mascots of other teams. But academics noted that standard polling methods cannot accurately measure the opinions of a small, yet culturally and socially diverse population such as Native Americans. More detailed and focused academic studies found that most Native Americans found the term offensive, particularly those with more identification and involvement with their Native cultures. Native American organizations that represented a significant percentage of tribal citizens and that opposed Native mascots criticized these polls on technical and other grounds, including that their widespread use represented white privilege and the erasure of authentic Native voices. In 2013, the National Congress of American Indians (NCAI) said that the misrepresentation of Native opinion by polling had impeded progress for decades. More than a half century passed between the 1968 resolution by the NCAI condemning the name and the February 2, 2022, announcement that the team would be renamed the Washington Commanders. Limitations on polling Polls to assess the opinions of Native Americans are unusual
https://en.wikipedia.org/wiki/Caprazamycin
Caprazamycins are chemical compounds isolated from Streptomyces which have some antibiotic activity.
https://en.wikipedia.org/wiki/Macrocnemus
Macrocnemus is an extinct genus of archosauromorph reptile known from the Middle Triassic (Late Anisian to Ladinian) of Europe and China. Macrocnemus is a member of the family Tanystropheidae and includes three species. Macrocnemus bassanii, the first species to be named and described, is known from the Besano Formation and adjacent paleontological sites in the Italian and Swiss Alps. Macrocnemus fuyuanensis, on the other hand, is known from the Zhuganpo Formation in southern China. A third species, Macrocnemus obristi, is known from the Prosanto Formation of Switzerland and is characterized by gracile limbs. The name Macrocnemus is Greek for "long tibia". Description Macrocnemus is known from multiple specimens, most belonging to M. bassanii. It is a small reptile measuring long. Macrocnemus possessed at least 52 or 53 caudal vertebrae. Like many other early archosauromorphs, Macrocnemus had a small and low head on the end of a thin neck containing vertebrae with low neural spines and long cervical ribs. Many archosauromorphs with these features have been grouped within the order Protorosauria, although it is debatable whether this order is valid. Features that are common to most "protorosaurs" like Macrocnemus include the ankle having a hooked fifth metatarsal attached to elongated limb elements, with tarsal elements with well-ossified proximal and distal ends. Unlike in Tanystropheus, the proximal phalanx of digit V of Macrocnemus is shorter than those of the other digits. Species M. bassanii Macrocnemus bassanii is the most well-known and numerous species of Macrocnemus. Although the holotype specimen of this species was destroyed during World War II, a cast of the specimen (MSNM 14624, or alternatively PIMUZ T 2473) survived. Numerous complete or partial specimens of M. bassanii housed at PIMUZ (Paläontologisches Institut und Museum der Universität Zürich) include A III/208, T 1534, T 2470, T 2472, T 2474 through T 2477, T 2809, T 2812 through T 2816, T 4822, an
https://en.wikipedia.org/wiki/Destroying%20angel
The name destroying angel applies to several similar, closely related species of deadly all-white mushrooms in the genus Amanita. They are Amanita virosa in Europe and A. bisporigera and A. ocreata in eastern and western North America, respectively. Another European species of Amanita referred to as the destroying angel, Amanita verna - also referred to as the 'Fool's mushroom' - was first described in France in 1780. Destroying angels are among the most toxic known mushrooms; both they and the closely related death caps (A. phalloides) contain amatoxins. Description Destroying angels are characterized by having a white stalk and gills. The cap can be pure white, or white at the edge and yellowish, pinkish, or tan at the center. It has a partial veil, or ring (annulus) circling the upper stalk, and the gills are "free", not attached to the stalk. Perhaps the most telltale of the features is the presence of a volva, or universal veil, so called because it is a membrane that encapsulates the entire mushroom, rather like an egg, when it is very young. This structure breaks as the young mushroom expands, leaving parts that can be found at the base of the stalk as a boot or cuplike structure, and there may be patches of removable material on the cap surface. This combination of features, all found together in the same mushroom, is the hallmark of the family. While other families may have any one or two of these features, none has them all. The cap is usually about across; the stipe is usually long and about thick. They are found singly or in small groups. Destroying angels can be mistaken for edible fungi such as the button mushroom, meadow mushroom, or the horse mushroom. Young destroying angels that are still enclosed in their universal veil can be mistaken for puffballs, but slicing them in half longitudinally will reveal internal mushroom structures. This is the basis for the common recommendation to slice in half all puffball-like mushrooms picked when mush
https://en.wikipedia.org/wiki/Quiescent%20centre
The quiescent centre is a group of cells, up to 1,000 in number, in the form of a hemisphere, with the flat face toward the root tip of vascular plants. It is a region in the apical meristem of a root where cell division proceeds very slowly or not at all, but the cells are capable of resuming meristematic activity when the tissue surrounding them is damaged. Cells of root apical meristems do not all divide at the same rate. Determinations of relative rates of DNA synthesis show that primary roots of Zea, Vicia and Allium have quiescent centres to the meristems, in which the cells divide rarely or never in the course of normal root growth (Clowes, 1958). Such a quiescent centre includes the cells at the apices of the histogens of both stele and cortex. Its presence can be deduced from the anatomy of the apex in Zea (Clowes, 1958), but not in the other species which lack discrete histogens. History In 1953, during the course of analysing the organization and function of the root apices, Frederick Albert Lionel Clowes (born 10 September 1921), at the School of Botany (now Department of Plant Sciences), University of Oxford, proposed the term ‘cytogenerative centre’ to denote ‘the region of an apical meristem from which all future cells are derived’. This term had been suggested to him by Mr Harold K. Pusey, a lecturer in embryology at the Department of Zoology and Comparative Anatomy at the same university. The 1953 paper of Clowes reported results of his experiments on Fagus sylvatica and Vicia faba, in which small oblique and wedge-shaped excisions were made at the tip of the primary root, at the most distal level of the root body, near the boundary with the root cap. The results of these experiments were striking and showed that: the root which grew on following the excision was normal at the undamaged meristem side; the nonexcised meristem portion contributed to the regeneration of the excised portion; the regenerated part of the root had abnormal patterning and
https://en.wikipedia.org/wiki/Universality%20probability
Universality probability is an abstruse probability measure in computational complexity theory that concerns universal Turing machines. Background A Turing machine is a basic model of computation. Some Turing machines might be specific to doing particular calculations. For example, a Turing machine might take input which comprises two numbers and then produce output which is the product of their multiplication. Another Turing machine might take input which is a list of numbers and then give output which is those numbers sorted in order. A Turing machine which has the ability to simulate any other Turing machine is called universal - in other words, a Turing machine (TM) is said to be a universal Turing machine (or UTM) if, given any other TM, there is some input (or "header") such that the first TM given that input "header" will forever after behave like the second TM. An interesting mathematical and philosophical question then arises. If a universal Turing machine is given random input (for a suitable definition of random), how probable is it that it remains universal forever? Definition Given a prefix-free Turing machine, its universality probability is the probability that it remains universal even when every input of it (as a binary string) is prefixed by a random binary string. More formally, it is the probability measure of reals (infinite binary sequences) which have the property that every initial segment of them preserves the universality of the given Turing machine. This notion was introduced by the computer scientist Chris Wallace and was first explicitly discussed in print in an article by Dowe (and a subsequent article). However, relevant discussions also appear in an earlier article by Wallace and Dowe. Universality probabilities of prefix-free UTMs are non-zero Although the universality probability of a UTM was originally suspected to be zero, relatively simple proofs exist that the supremum of the set of universality probabil
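The "header" idea behind universality can be sketched schematically. The toy interpreter below is not a Turing machine in the formal sense — it leans on Python's own evaluator, and the header format (a line of Python source naming a one-argument function `step`) is an assumption made purely for illustration — but it shows the shape of the definition: one fixed machine that, given a suitable prefix, behaves like any other machine on the rest of its input.

```python
def universal_machine(tape):
    """Schematic 'universal' interpreter (illustration only, not a real TM).

    The header is everything up to the first newline, taken to be the
    Python source of a one-argument function named `step`; the remainder
    of the tape is handed to that simulated machine.
    """
    header, _, rest = tape.partition("\n")
    namespace = {}
    exec(header, namespace)          # load the simulated machine from the header
    return namespace["step"](rest)   # behave like it on the remaining input

# Prefixing a description of a "doubling machine" makes the universal
# machine behave like that machine on the rest of the input:
print(universal_machine("def step(x): return x + x\nab"))  # abab
```

Universality probability then asks, informally: if the header were instead drawn at random bit by bit, how likely is it that the machine's ability to be programmed in this way survives every such prefix.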
https://en.wikipedia.org/wiki/Feferman%E2%80%93Vaught%20theorem
The Feferman–Vaught theorem in model theory is a theorem by Solomon Feferman and Robert Lawson Vaught that shows how to reduce, in an algorithmic way, the first-order theory of a product of first-order structures to the first-order theory of elements of the structure. The theorem is considered one of the standard results in model theory. The theorem extends the previous result of Andrzej Mostowski on direct products of theories. It generalizes (to formulas with arbitrary quantifiers) the property in universal algebra that equalities (identities) carry over to direct products of algebraic structures (which is a consequence of one direction of Birkhoff's theorem). Direct product of structures Consider a first-order logic signature L. The definition of product structures takes a family of L-structures A_i for i in some index set I and defines the product structure A = ∏_{i∈I} A_i, which is also an L-structure, with all functions and relations defined pointwise. The definition generalizes the direct product in universal algebra to relational first-order structures, which contain not only function symbols but also relation symbols. If R is a relation symbol with n arguments in L and a¹, …, aⁿ are elements of the cartesian product, we define the interpretation of R in A by letting R(a¹, …, aⁿ) hold if and only if R(a¹_i, …, aⁿ_i) holds in A_i for every i ∈ I. When R is a functional relation, this definition reduces to the definition of direct product in universal algebra. Statement of the theorem for direct products For a first-order logic formula φ in signature L with free variables, and for an interpretation a₁, …, aₙ of the variables, we define the set of indices i ∈ I for which φ holds in A_i. Given a first-order formula with free variables, there is an algorithm to compute its equivalent game normal form, which is a finite disjunction of mutually contradictory formulas. The Feferman–Vaught theorem gives an algorithm that takes a first-order formula and constructs a formula that reduces the condition that holds in the product to the condition that holds in the interpretation of sets of in
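The pointwise definition of relations in a direct product can be made concrete with finite structures. In the sketch below the encoding (structures as dicts, a single binary relation written "le") is a representation chosen here for illustration, not notation from the theorem itself: a tuple stands in the product relation exactly when its coordinates stand in the relation in every factor.

```python
from itertools import product

# Two toy structures over a signature with one binary relation "le",
# each given as its element list plus the set of related pairs.
A1 = {"elements": [0, 1], "le": {(0, 0), (0, 1), (1, 1)}}
A2 = {"elements": [0, 1, 2],
      "le": {(a, b) for a in range(3) for b in range(3) if a <= b}}

def product_structure(factors):
    """Direct product: elements are tuples, and the relation holds of a
    pair of tuples iff it holds coordinatewise in every factor."""
    elements = list(product(*(s["elements"] for s in factors)))
    le = {(x, y) for x in elements for y in elements
          if all((xi, yi) in s["le"] for s, xi, yi in zip(factors, x, y))}
    return {"elements": elements, "le": le}

P = product_structure([A1, A2])
print(((0, 1), (1, 2)) in P["le"])  # True: 0 le 1 in A1 and 1 le 2 in A2
print(((1, 0), (0, 1)) in P["le"])  # False: 1 le 0 fails in A1
```

This is exactly the sense in which an atomic formula holds in the product iff its "set of indices" is all of I; the theorem's content is that the truth of arbitrary first-order formulas in the product is likewise determined by such index sets.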
https://en.wikipedia.org/wiki/Low-temperature%20technology%20timeline
The following is a timeline of low-temperature technology and cryogenic technology (refrigeration down to close to absolute zero, i.e. –273.15 °C, –459.67 °F or 0 K). It also lists important milestones in thermometry, thermodynamics, statistical physics and calorimetry, that were crucial in development of low temperature systems. Prior to the 19th century – Zimri-Lim, ruler of Mari in Syria commanded the construction of one of the first ice houses near the Euphrates. – The yakhchal (meaning "ice pit" in Persian) is an ancient Persian type of refrigerator. The structure was formed from a mortar resistant to heat transmission, in the shape of a dome. Snow and ice was stored beneath the ground, effectively allowing access to ice even in hot months and allowing for prolonged food preservation. Often a badgir was coupled with the yakhchal in order to slow the heat loss. Modern refrigerators are still called yakhchal in Persian. - Hero of Alexandria knew of the principle that certain substances, notably air, expand and contract and described a demonstration in which a closed tube partially filled with air had its end in a container of water. The expansion and contraction of the air caused the position of the water/air interface to move along the tube. This was the first established principle of gas behaviour vs temperature, and principle of first thermometers later on. The idea could predate him even more (Empedocles of Agrigentum in his 460 B.C. book On Nature). 1396 AD - Ice storage warehouses called "Dong-bing-go-tango" (meaning "east ice storage warehouse" in Korean) and Seo-bing-go ("west ice storage warehouse") were built in Han-Yang (currently Seoul, Korea). The buildings housed ice that was collected from the frozen Han River in January (by lunar calendar). The warehouse was well-insulated, providing the royal families with ice into the summer months. These warehouses were closed in 1898 AD but the buildings are still intact in Seoul. 1593 – Galileo Gali
https://en.wikipedia.org/wiki/MRC-5
MRC-5 (Medical Research Council cell strain 5) is a diploid cell culture line composed of fibroblasts, originally developed from the lung tissue of a 14-week-old aborted Caucasian male fetus. The cell line was isolated by J.P. Jacobs and colleagues in September 1966 from the seventh population doubling of the original strain, and MRC-5 cells themselves are known to reach senescence in around 45 population doublings. Applications MRC-5 cells are currently used to produce several vaccines including for hepatitis A, varicella and polio. Culture and society During the COVID-19 pandemic, anti-vaccination and anti-abortion activists believed that MRC-5 was an ingredient of the Oxford–AstraZeneca COVID-19 vaccine, citing a study from the University of Bristol. David Matthews, a co-author for this study, clarified that MRC-5 was solely used for testing purposes to determine "how the Oxford vaccine behaves when it is inside a genetically normal human cell." The manufacturing of the vaccine used the HEK 293 fetal cell line, the kidney cells of an aborted or spontaneously miscarried female fetus, though the cells are filtered out of the final product. See also Use of fetal tissue in vaccine development WI-38
https://en.wikipedia.org/wiki/Harry%20Frederick%20Recher
Emeritus Professor Harry Frederick Recher RZS (NSW) AM (born 27 March 1938, New York City) is an Australian ecologist, ornithologist and advocate for conservation. Recher grew up in the United States of America. He studied at the State University of New York College of Forestry and received his B.S. in 1959 from Syracuse University. At Stanford University, ecologist Paul Ehrlich supervised his PhD on migratory shorebirds, which was awarded in 1964. Ehrlich became a lifelong friend and mentor to Recher, sharing his commitment to a strong sense of the social responsibility of science. Recher held an NIH postdoctoral fellowship at the University of Pennsylvania and Princeton University. In his early career, Recher worked with leading American ecologists Eugene Odum and Robert MacArthur. He moved to Australia in 1967. From 1968 he worked for 20 years at the Australian Museum as a Principal Research Scientist, focussing on conservation issues and the biology of forest and woodland birds. In 1988 he moved to the University of New England. He was also a member of the National Parks & Wildlife Service (NPWS) Scientific Advisory Committee. Recher was co-editor and author of three books, A natural legacy: ecology in Australia (1979), Birds of eucalypt forests and woodland: ecology, conservation, management (1985) and Woodlands of Australia, all of which were awarded the Whitley Medal by the Royal Zoological Society of New South Wales. A Natural Legacy, co-edited with Irina Dunn and Dan Lunney and illustrated with David Milledge's hand-drawings of the principles of community ecology and succession, was an early Australian ecology textbook that influenced a generation in an era of resurgent environmentalism. Recher is heralded for his long-term field studies, especially of bird communities. In the 1980s, Recher and his colleagues applied these studies to identify the conservation requirements for native birds and animals in their specific habitats. In 2003 the statutory managem
https://en.wikipedia.org/wiki/Inversive%20geometry
In geometry, inversive geometry is the study of inversion, a transformation of the Euclidean plane that maps circles or lines to other circles or lines and that preserves the angles between crossing curves. Many difficult problems in geometry become much more tractable when an inversion is applied. Inversion seems to have been discovered by a number of people contemporaneously, including Steiner (1824), Quetelet (1825), Bellavitis (1836), Stubbs and Ingram (1842–43) and Kelvin (1845). The concept of inversion can be generalized to higher-dimensional spaces. Inversion in a circle Inverse of a point To invert a number in arithmetic usually means to take its reciprocal. A closely related idea in geometry is that of "inverting" a point. In the plane, the inverse of a point P with respect to a reference circle (Ø) with center O and radius r is a point P′, lying on the ray from O through P, such that OP · OP′ = r². This is called circle inversion or plane inversion. The inversion taking any point P (other than O) to its image P′ also takes P′ back to P, so the result of applying the same inversion twice is the identity transformation on all the points of the plane other than O (self-inversion). To make inversion an involution it is necessary to introduce a point at infinity, a single point placed on all the lines, and extend the inversion, by definition, to interchange the center O and this point at infinity. It follows from the definition that the inversion of any point inside the reference circle must lie outside it, and vice versa, with the center and the point at infinity changing positions, whilst any point on the circle is unaffected (is invariant under inversion). In summary, the nearer a point is to the center, the further away is its transformation, and vice versa. Compass and straightedge construction Point outside circle To construct the inverse P′ of a point P outside a circle Ø: Draw the segment from O (center of circle Ø) to P. Let M be the midpoint of OP. (Not shown)
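The defining relation OP · OP′ = r² translates directly into coordinates: P′ = O + (r²/|OP|²)(P − O). A minimal sketch in Python (function and variable names are illustrative, not from the source):

```python
def invert(p, center, r):
    """Inverse of point p with respect to the circle with the given center
    and radius r, via OP * OP' = r^2 along the ray from O through P."""
    px, py = p
    cx, cy = center
    dx, dy = px - cx, py - cy
    d2 = dx * dx + dy * dy  # |OP|^2; the center itself has no finite inverse
    if d2 == 0:
        raise ValueError("the center maps to the point at infinity")
    scale = r * r / d2
    return (cx + dx * scale, cy + dy * scale)

# A point at distance 2 from the center of a unit circle maps to distance 1/2,
# and applying the inversion twice returns the original point (self-inversion).
p_image = invert((2.0, 0.0), (0.0, 0.0), 1.0)   # (0.5, 0.0)
p_back = invert(p_image, (0.0, 0.0), 1.0)       # (2.0, 0.0)
```

Note how the code makes the self-inversion property visible: inverting twice with the same circle is the identity on every point other than the center.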
https://en.wikipedia.org/wiki/Consecutive%20case%20series
A consecutive case series is a clinical study that includes all eligible patients identified by the researchers during the study registration period. The patients are treated in the order in which they are identified. This type of study usually does not have a control group. For example, Sugrue et al. (2016) used a consecutive case series design to determine trends in hand surgery research.
https://en.wikipedia.org/wiki/Grassmann%20graph
In graph theory, Grassmann graphs are a special class of simple graphs defined from systems of subspaces. The vertices of the Grassmann graph J_q(n, k) are the k-dimensional subspaces of an n-dimensional vector space over a finite field of order q; two vertices are adjacent when their intersection is (k − 1)-dimensional. Many of the parameters of Grassmann graphs are q-analogs of the parameters of Johnson graphs, and Grassmann graphs have several of the same graph properties as Johnson graphs. Graph-theoretic properties J_q(n, k) is isomorphic to J_q(n, n − k). For all d, the intersection of any pair of vertices at distance d is (k − d)-dimensional. The clique number of J_q(n, k) is given by an expression in terms of its least and greatest eigenvalues: Automorphism group There is a distance-transitive subgroup of Aut(J_q(n, k)) isomorphic to the projective linear group. In fact, unless or , ; otherwise or respectively. Intersection array As a consequence of being distance-transitive, J_q(n, k) is also distance-regular. Letting d denote its diameter, the intersection array of J_q(n, k) is given by where: for all . for all . Spectrum The characteristic polynomial of J_q(n, k) is given by . See also Grassmannian Johnson graph
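One concrete instance of the q-analog relationship: the number of vertices of J_q(n, k), i.e. the number of k-dimensional subspaces of an n-dimensional space over a field of order q, is the Gaussian binomial coefficient, the q-analog of the binomial coefficient that counts the vertices of the Johnson graph. A small sketch using the standard product formula (function name is illustrative):

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of an n-dimensional vector space
    over a field of order q: product over i of (q^(n-i) - 1) / (q^(k-i) - 1)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (k - i) - 1
    return num // den  # the quotient is always an integer

# Vertex count of J_2(4, 2): 2-dimensional subspaces of GF(2)^4.
print(gaussian_binomial(4, 2, 2))  # 35
```

For q = 2, n = 4, k = 2 this gives (15 · 7)/(3 · 1) = 35, whereas the corresponding Johnson graph J(4, 2) has only C(4, 2) = 6 vertices.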
https://en.wikipedia.org/wiki/Lemniscate
In algebraic geometry, a lemniscate is any of several figure-eight or ∞-shaped curves. The word comes from the Latin meaning "decorated with ribbons", from the Greek meaning "ribbon", or which alternatively may refer to the wool from which the ribbons were made. Curves that have been called a lemniscate include three quartic plane curves: the hippopede or lemniscate of Booth, the lemniscate of Bernoulli, and the lemniscate of Gerono. The study of lemniscates (and in particular the hippopede) dates to ancient Greek mathematics, but the term "lemniscate" for curves of this type comes from the work of Jacob Bernoulli in the late 17th century. History and examples Lemniscate of Booth The consideration of curves with a figure-eight shape can be traced back to Proclus, a Greek Neoplatonist philosopher and mathematician who lived in the 5th century AD. Proclus considered the cross-sections of a torus by a plane parallel to the axis of the torus. As he observed, for most such sections the cross section consists of either one or two ovals; however, when the plane is tangent to the inner surface of the torus, the cross-section takes on a figure-eight shape, which Proclus called a horse fetter (a device for holding two feet of a horse together), or "hippopede" in Greek. The name "lemniscate of Booth" for this curve dates to its study by the 19th-century mathematician James Booth. The lemniscate may be defined as an algebraic curve, the zero set of the quartic polynomial when the parameter d is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of d one instead obtains the oval of Booth. Lemniscate of Bernoulli In 1680, Cassini studied a family of curves, now called the Cassini oval, defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci, is a constant. Under very particular circumstances (when the half-distance between the points is
https://en.wikipedia.org/wiki/List%20of%20comets%20with%20no%20meaningful%20orbit
This is a list of comets designated with X/ prefix. The majority of these comets were discovered before the invention of the telescope in 1610, and as such there was nobody to plot the positions of the comets to a high enough precision to generate any meaningful orbit. Later comets, observed in the 17th century or later, either did not have enough observations, sometimes as few as one or two, or the comet disintegrated or moved out of a favorable location in the sky before it was possible to make more observations of it.
https://en.wikipedia.org/wiki/Sieved%20Jacobi%20polynomials
In mathematics, sieved Jacobi polynomials are a family of sieved orthogonal polynomials, introduced by . Their recurrence relations are a modified (or "sieved") version of the recurrence relations for Jacobi polynomials.
https://en.wikipedia.org/wiki/Bleacher%20Creature%20%28mascot%29
The Bleacher Creature was the official mascot for the Atlanta Braves Major League Baseball team during the late 1970s and early 1980s. It featured green shaggy fur with a Braves cap and logo on top. The word Braves was written across its chest in big red letters. It had a permanent toothless smile. The mascot usually roamed the stands from time to time during home games and was intended more for the entertainment of younger fans. Creation The mascot started in 1976 and was originally worn by Alan Stensland, then a student at Georgia Tech. Stensland was working as an usher at Atlanta–Fulton County Stadium when he was approached to wear the costume. The outfit required someone who was 5'8" to 5'10" tall, and Alan met the height and shoe size requirements. Alan recalls having one of his costume's eyes removed by a youngster on his first night out. They also attempted to bust his kneecaps on bat night. During the 1977 season, the mascot made some 250 appearances at games, parties, and parades. Stensland was only 18 at the time he first donned the costume. The most intense problem he had was the heat. With the added humidity, a really "funky smell" permeated the inside of the costume. Once Stensland graduated, he left the Braves organization. The mascot role was then taken over by Dennis Coffey, a friend of Alan and a student at M.D. Collins High School in College Park, Georgia. Coffey had worked as an usher during the 1977 Braves and Atlanta Falcons seasons at Atlanta–Fulton County Stadium and served as an assistant to Stensland. Dennis performed as the Braves Bleacher Creature from 1978 to 1981, when the mascot was retired. During that time, the Bleacher Creature was present at all Atlanta home games, and numerous home games for the Savannah Braves and Greenwood Braves. Dennis appeared as the Bleacher Creature in various parades, schools, hospitals, little league events, mall openings, etc. Coffey graduated from M.D. Collins High and went on to also attend
https://en.wikipedia.org/wiki/Divergent%20geometric%20series
In mathematics, an infinite geometric series of the form a + ar + ar² + ar³ + ⋯ is divergent if and only if |r| ≥ 1. Methods for summation of divergent series are sometimes useful, and usually evaluate divergent geometric series to a sum that agrees with the formula for the convergent case, a/(1 − r). This is true of any summation method that possesses the properties of regularity, linearity, and stability. Examples In increasing order of difficulty to sum: 1 − 1 + 1 − 1 + · · ·, whose common ratio is −1 1 − 2 + 4 − 8 + · · ·, whose common ratio is −2 1 + 2 + 4 + 8 + · · ·, whose common ratio is 2 1 + 1 + 1 + 1 + · · ·, whose common ratio is 1. Motivation for study It is useful to figure out which summation methods produce the geometric series formula for which common ratios. One application for this information is the so-called Borel–Okada principle: If a regular summation method sums Σzⁿ to 1/(1 − z) for all z in a subset S of the complex plane, given certain restrictions on S, then the method also gives the analytic continuation of any other function on the intersection of S with the Mittag-Leffler star for f. Summability by region Open unit disk Ordinary summation succeeds only for common ratios |z| < 1. Closed unit disk Cesàro summation Abel summation Larger disks Euler summation Half-plane The series is Borel summable for every z with real part < 1. Any such series is also summable by the generalized Euler method (E, a) for appropriate a. Shadowed plane Certain moment constant methods besides Borel summation can sum the geometric series on the entire Mittag-Leffler star of the function 1/(1 − z), that is, for all z except the ray z ≥ 1. Everywhere Notes
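As a concrete numerical illustration (not part of the original text), Cesàro summation averages the partial sums of a series; for Grandi's series 1 − 1 + 1 − 1 + ⋯ these averages converge to 1/2 = 1/(1 − (−1)), matching the geometric series formula for common ratio −1:

```python
from itertools import accumulate

def cesaro_means(terms):
    """Averages of the partial sums; if these averages converge,
    their limit is the Cesàro sum of the series."""
    partial = list(accumulate(terms))
    return [sum(partial[: i + 1]) / (i + 1) for i in range(len(partial))]

# Grandi's series: the partial sums oscillate 1, 0, 1, 0, ...
# but their running averages converge to 1/2 = 1/(1 - (-1)).
grandi = [(-1) ** n for n in range(1000)]
print(cesaro_means(grandi)[-1])  # 0.5
```

The same averaging does not tame 1 − 2 + 4 − 8 + ⋯ (ratio −2), which is why that series is listed as harder to sum and needs a stronger method such as Euler or Borel summation.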
https://en.wikipedia.org/wiki/51st%20meridian%20west
The meridian 51° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, the Atlantic Ocean, South America, the Southern Ocean, and Antarctica to the South Pole. The 51st meridian west forms a great circle with the 129th meridian east. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 51st meridian west passes through: {| class="wikitable plainrowheaders" ! scope="col" width="120" | Co-ordinates ! scope="col" | Country, territory or sea ! scope="col" | Notes |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Arctic Ocean | style="background:#b0e0e6;" | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Lincoln Sea | style="background:#b0e0e6;" | |- | ! scope="row" | |Wulff Land |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Sherard Osborn Fjord | style="background:#b0e0e6;" | |-valign="top" | ! scope="row" | | Passing through several fjords, the Nuussuaq Peninsula and Alluttoq Island |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Disko Bay | style="background:#b0e0e6;" | |- | ! scope="row" | | Passing through several fjords |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | |-valign="top" | ! scope="row" | | Amapá Pará — from , passing through several islands in the mouth of the Amazon River Mato Grosso — from Goiás — from Mato Grosso do Sul — from Minas Gerais — from São Paulo — from Paraná — from Santa Catarina — from Rio Grande do Sul — from , passing through Lagoa dos Patos |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Southern Ocean | style="background:#b0e0e6;" | |-valign="top" | ! scope="row" | Ant
https://en.wikipedia.org/wiki/Branch%20and%20bound
Branch and bound (BB, B&B, or BnB) is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. The algorithm depends on efficient estimation of the lower and upper bounds of regions/branches of the search space. If no bounds are available, the algorithm degenerates to an exhaustive search. The method was first proposed by Ailsa Land and Alison Doig whilst carrying out research at the London School of Economics sponsored by British Petroleum in 1960 for discrete programming, and has become the most commonly used tool for solving NP-hard optimization problems. The name "branch and bound" first occurred in the work of Little et al. on the traveling salesman problem. Overview The goal of a branch-and-bound algorithm is to find a value x that maximizes or minimizes the value of a real-valued function f(x), called an objective function, among some set S of admissible, or candidate solutions. The set S is called the search space, or feasible region. The rest of this section assumes that minimization of f(x) is desired; this assumption comes without loss of generality, since one can find the maximum value of f(x) by finding the minimum of g(x) = −f(x). A B&B algorithm operates according to
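As an illustrative sketch (the example problem and bounding rule are chosen here, not taken from the source), branch and bound for the 0/1 knapsack problem can branch on whether each item is taken, and prune a subtree using the optimistic bound "current value plus the total value of all remaining items":

```python
def knapsack_bb(items, capacity):
    """Branch and bound for the 0/1 knapsack. items: list of (value, weight).
    A subtree is pruned when even the optimistic bound (take every remaining
    item, ignoring weight) cannot beat the best solution found so far."""
    best = 0

    def search(i, value, room):
        nonlocal best
        best = max(best, value)          # record the best complete-or-partial pick
        if i == len(items):
            return
        # Optimistic bound for this branch: add all remaining values.
        if value + sum(v for v, _ in items[i:]) <= best:
            return                       # prune: cannot improve on best
        v, w = items[i]
        if w <= room:
            search(i + 1, value + v, room - w)  # branch 1: take item i
        search(i + 1, value, room)              # branch 2: skip item i

    search(0, 0, capacity)
    return best

print(knapsack_bb([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

A tighter bound (e.g. the fractional-knapsack relaxation) prunes more aggressively; with no bound at all the recursion degenerates to the exhaustive search mentioned above.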
https://en.wikipedia.org/wiki/Dynamic%20timing%20verification
Dynamic timing verification refers to verifying that an ASIC design is fast enough to run without errors at the targeted clock rate. This is accomplished by simulating the design files used to synthesize the integrated circuit (IC) design. This is in contrast to static timing analysis, which has a similar goal but does not require simulating the real functionality of the IC. Hobbyists often perform a type of dynamic timing verification when they overclock the CPUs in their computers in order to find the fastest clock rate at which they can run the CPU without errors. This is a type of dynamic timing verification that is performed after the silicon is manufactured. In the field of ASIC design, this timing verification is preferably performed before manufacturing the IC, in order to make sure that the IC works under the required conditions before mass production.
https://en.wikipedia.org/wiki/AMD%20Turbo%20Core
AMD Turbo Core, a.k.a. AMD Core Performance Boost (CPB), is a dynamic frequency scaling technology implemented by AMD that allows certain of its processors to dynamically adjust their operating frequency, allowing for increased performance when needed while maintaining lower power and thermal parameters during normal operation. AMD Turbo Core technology has been implemented beginning with the Phenom II X6 microprocessors based on the AMD K10 microarchitecture. AMD Turbo Core is available with some AMD A-Series accelerated processing units. AMD Turbo Core is similar to Intel Turbo Boost, another dynamic processor frequency adjustment technology used to increase performance, as well as to AMD PowerNow!, which is used to dynamically adjust laptop processors' operating frequencies in order to decrease power consumption (saving battery life), reduce heat, and lower noise. AMD PowerNow! is used to decrease processor frequency, whereas AMD Turbo Core is used to increase processor frequency. Background To decide a processor's clock speed, the processor is stress-tested to determine the maximum speed at which it can run before the maximum amount of power allowed is reached, which is called thermal design power or TDP. It has been reported that customers would complain that the processors rarely consumed the rated TDP, which meant that most consumers do not come close to the power consumed during maximum stress testing. A parameter called average CPU power (ACP) is used to address this issue. ACP defines the average power expected to be consumed with regular use, whereas TDP gives the maximum power consumed. Power consumed is an important factor when considering thermal limits and determining CPU power dissipation. AMD Turbo Core and similar dynamic processor frequency adjustment technologies take advantage of average power consumed being less than the maximum design limits, allowing frequency (and the
https://en.wikipedia.org/wiki/Lusitropy
Lusitropy or lucitropy is the rate of myocardial relaxation. The increase in cytosolic calcium of cardiomyocytes via increased uptake leads to increased myocardial contractility (positive inotropic effect), but the myocardial relaxation, or lusitropy, decreases. This should not be confused, however, with catecholamine-induced calcium uptake into the sarcoplasmic reticulum, which increases lusitropy. Positive Increased catecholamine levels promote positive lusitropy, enabling the heart to relax more rapidly. This effect is mediated by the phosphorylation of phospholamban and troponin I via a cAMP-dependent pathway. Catecholamine-induced calcium influx into the sarcoplasmic reticulum increases both inotropy and lusitropy. In other words, a quicker reduction in cytosolic calcium levels (because the calcium enters the sarcoplasmic reticulum) causes an increased rate of relaxation (a positive lusitropy); however, this also enables a greater degree of calcium efflux, back into the cytosol, when the next action potential arrives, thereby increasing inotropy as well. However, unlike the previously mentioned mechanism, a calcium uptake from the extracellular fluid into the cytosol without any catecholamine stimulation simply results in a sustained rise in calcium concentration in the cytosol. This only serves to increase inotropy but doesn't allow total relaxation of the cardiac myocytes between contractions, decreasing lusitropy. Negative Relaxation of the heart is negatively impacted by the following factors: Calcium overload – too much intracellular calcium Reduced rate of calcium removal from the myocyte through pumps, if calcium is not removed from the cell quickly enough: a. Plasma membrane calcium ATPase (Ca ATPase): this primary active transporter pumps calcium out of the myocyte between beats. b. Sodium–calcium (Na/Ca) exchanger: this secondary active transporter pumps calcium out of the cell between beats. Impaired Sarco-Endoplasmic Reticulum Calcium ATPase (SERCA)
https://en.wikipedia.org/wiki/Enterprise%20bookmarking
Enterprise bookmarking is a method for Web 2.0 users to tag, organize, store, and search bookmarks of both web pages on the Internet and data resources stored in a distributed database or fileserver. This is done collectively and collaboratively in a process by which users add tags (metadata) and knowledge tags. In early versions of the software, these tags are applied as non-hierarchical keywords, or terms assigned by a user to a web page, and are collected in tag clouds. Examples of this software are Connectbeam and Dogear. New versions of the software, such as Jumper 2.0 and Knowledge Plaza, expand tag metadata in the form of knowledge tags that provide additional information about the data; they are applied to structured and semi-structured data and are collected in tag profiles. History Enterprise bookmarking is derived from social bookmarking, which got its modern start with the launch of the website del.icio.us in 2003. The first major announcement of an enterprise bookmarking platform was the IBM Dogear project, developed in summer 2006. Version 1.0 of the Dogear software was announced at Lotusphere 2007, and shipped later that year on June 27 as part of IBM Lotus Connections. The second significant commercial release was Cogenz in September 2007. Since these early releases, enterprise bookmarking platforms have diverged considerably. The most significant new release was the Jumper 2.0 platform, with expanded and customizable knowledge tagging fields. Differences Versus social bookmarking In a social bookmarking system, individuals create personal collections of bookmarks and share their bookmarks with others. These centrally stored collections of Internet resources can be accessed by other users to find useful resources. Often these lists are publicly accessible, so that other people with similar interests can view the links by category or by the tags themselves. Most social bookmarking sites allow users to search for bookmarks which are associated with gi
https://en.wikipedia.org/wiki/Han%20unification
Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán). Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them as allographs (different glyphs representing the same "grapheme", or orthographic unit); hence "Han unification", with the resulting character repertoire sometimes contracted to Unihan. Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A). Rationale and controversy The Unicode Standard details the principles of Han unification. The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process. One rationale was the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 characters. Version 1 of Unicode was designed to fit into 16 bits and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits allowing many more CJK characters (97,680 are assigned, with room for more). An article hosted by IBM attempts to illustrate part of the motivation for Han unification: In fact, the three ideographs for "one" (一, 壹, or 壱) are encoded separately in Unicode, as they are not considered national variants. The first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be
https://en.wikipedia.org/wiki/Bonferroni%20correction
In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem. Background The method is named for its use of the Bonferroni inequalities. An extension of the method to confidence intervals was proposed by Olive Jean Dunn. Statistical hypothesis testing is based on rejecting the null hypothesis if the likelihood of the observed data under the null hypothesis is low. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases. The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of α/m, where α is the desired overall alpha level and m is the number of hypotheses. For example, if a trial is testing m hypotheses with a desired overall level α, then the Bonferroni correction would test each individual hypothesis at α/m. Likewise, when constructing multiple confidence intervals the same phenomenon appears. Definition Let H₁, …, H_m be a family of hypotheses and p₁, …, p_m their corresponding p-values. Let m be the total number of null hypotheses, and let m₀ be the number of true null hypotheses (which is presumably unknown to the researcher). The family-wise error rate (FWER) is the probability of rejecting at least one true H_i, that is, of making at least one type I error. The Bonferroni correction rejects the null hypothesis for each p_i ≤ α/m, thereby controlling the FWER at ≤ α. Proof of this control follows from Boole's inequality, as follows: This control does not require any assumptions about dependence among the p-values or about how many of the null hypotheses are true. Extensions Generalization Rather than testing each hypothesis at the α/m level, the hypotheses may be tested at any other combination of levels that add up to α, provided that the level of each test is decided before looking at the data. For example, for two hypothesis tests, an overall α of 0.05 could be maintai
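The correction itself is a one-line comparison of each p-value against α/m. A minimal sketch in Python (function name and example p-values are illustrative):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H_i when p_i <= alpha/m, controlling the FWER at <= alpha."""
    m = len(p_values)
    threshold = alpha / m
    return [p <= threshold for p in p_values]

# Four tests at overall alpha = 0.05: each p-value is compared against 0.0125.
print(bonferroni([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```

Note that p-values between α/m and α (here 0.04 and 0.03) are no longer significant after correction; this conservatism is the price paid for controlling the family-wise error rate without any assumptions about dependence among the tests.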
https://en.wikipedia.org/wiki/Quotient%20of%20a%20formal%20language
In mathematics and computer science, the right quotient (or simply quotient) of a language L₁ with respect to a language L₂ is the language consisting of strings w such that wx is in L₁ for some string x in L₂. Formally: L₁ / L₂ = { w | ∃x ∈ L₂ : wx ∈ L₁ }. In other words, we take all the strings in L₁ that have a suffix in L₂, and remove this suffix. Similarly, the left quotient of L₁ with respect to L₂ is the language consisting of strings w such that xw is in L₁ for some string x in L₂. Formally: L₂ \ L₁ = { w | ∃x ∈ L₂ : xw ∈ L₁ }. In other words, we take all the strings in L₁ that have a prefix in L₂, and remove this prefix. Note that the operands of \ are in reverse order: the first operand is L₂ and L₁ is second. Example Consider and Now, if we insert a divider into an element of , the part on the right is in only if the divider is placed adjacent to a b (in which case i ≤ n and j = n) or adjacent to a c (in which case i = 0 and j ≤ n). The part on the left, therefore, will be either or ; and can be written as Properties Some common closure properties of the quotient operation include: The quotient of a regular language with any other language is regular. The quotient of a context-free language with a regular language is context-free. The quotient of two context-free languages can be any recursively enumerable language. The quotient of two recursively enumerable languages is recursively enumerable. These closure properties hold for both left and right quotients. See also Brzozowski derivative
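For finite languages represented as sets of strings, both quotients can be computed directly from the definitions. A small Python sketch (function names are illustrative):

```python
def right_quotient(l1, l2):
    """Right quotient L1 / L2: all w such that w + x is in L1 for some x in L2.
    Implemented by stripping each matching suffix x from each string in L1."""
    return {s[: len(s) - len(x)] for s in l1 for x in l2 if s.endswith(x)}

def left_quotient(l2, l1):
    """Left quotient L2 \\ L1: all w such that x + w is in L1 for some x in L2.
    Note the reversed operand order, matching the notation in the text."""
    return {s[len(x):] for s in l1 for x in l2 if s.startswith(x)}

# Stripping the suffix "bb" from strings in L1 (only "aabb" has it):
print(right_quotient({"aabb", "ab"}, {"bb"}))   # {'aa'}
# Stripping the prefix "a" (both strings have it):
print(left_quotient({"a"}, {"aabb", "ab"}))     # {'abb', 'b'}
```

The empty string behaves as expected: since every string ends (and starts) with "", including "" in L₂ leaves every string of L₁ in the quotient.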
https://en.wikipedia.org/wiki/Ligand%20field%20theory
Ligand field theory (LFT) describes the bonding, orbital arrangement, and other characteristics of coordination complexes. It represents an application of molecular orbital theory to transition metal complexes. A transition metal ion has nine valence atomic orbitals, consisting of five nd, one (n+1)s, and three (n+1)p orbitals. These orbitals are of appropriate energy to form bonding interactions with ligands. The LFT analysis is highly dependent on the geometry of the complex, but most explanations begin by describing octahedral complexes, where six ligands coordinate to the metal. Other complexes can be described by reference to crystal field theory. Inverted ligand field theory (ILFT) elaborates on LFT by breaking assumptions made about relative metal and ligand orbital energies. History Ligand field theory resulted from combining the principles laid out in molecular orbital theory and crystal field theory, which describes the loss of degeneracy of metal d orbitals in transition metal complexes. John Stanley Griffith and Leslie Orgel championed ligand field theory as a more accurate description of such complexes, although the theory originated in the 1930s with the work on magnetism of John Hasbrouck Van Vleck. Griffith and Orgel used the electrostatic principles established in crystal field theory to describe transition metal ions in solution and used molecular orbital theory to explain the differences in metal–ligand interactions, thereby explaining such observations as crystal field stabilization and the visible spectra of transition metal complexes. In their paper, they proposed that the chief cause of color differences in transition metal complexes in solution is the incomplete d orbital subshells. That is, the unoccupied d orbitals of transition metals participate in bonding, which influences the colors they absorb in solution. In ligand field theory, the various d orbitals are affected differently when surrounded by a field of neighboring ligands and are raise
https://en.wikipedia.org/wiki/Eastern%20forest%E2%80%93boreal%20transition
The eastern forest–boreal transition is a temperate broadleaf and mixed forests ecoregion of North America, mostly in eastern Canada. It is a transitional zone between the predominantly coniferous boreal forest and the mostly deciduous broadleaf forest region further south. Location and climate The ecoregion includes most of the southern Canadian Shield in Ontario and Quebec north and west of the Saint Lawrence River lowlands. The portion in Northeastern Ontario includes the eastern shores of Lake Superior, Greater Sudbury, North Bay, Ontario, Lake Nipissing, the Clay Belt and Temagami. Areas in Central Ontario include Muskoka, Parry Sound, Algonquin Park, and Haliburton. The Quebec portion takes in Lake Timiskaming, the southern Laurentian Mountains, Quebec City, the Saguenay River, and Saguenay, Quebec. There is a separate section of the ecoregion in the Adirondack Mountains in upper New York State, United States. However, the higher elevations of the Laurentian Mountains and the northern Appalachian Mountains in Canada constitute the Eastern Canadian forests ecoregion. The region has a humid continental climate consisting of warm summers and cold, snowy winters, and is warmer towards the south. Flora The flora in this ecoregion varies considerably based on soil conditions and elevation. These mixed forests are distinct from the deciduous forests south of the Canadian Shield and the cooler boreal forests to the north. Conifer swamp Conifer swamps occur in areas that are seasonally flooded. Trees can be very dense or sparse; mats of sphagnum moss cover the ground. Black spruce (Picea mariana) and tamarack (Larix laricina) are the predominant tree species. Northern white cedar (Thuja occidentalis) grows where the soil is not saturated year-round. Speckled alder (Alnus incana) grows around the edges of these swamps, and red spruce (Picea rubens) and white pine (Pinus strobus) grow on higher, drier ground. Lowland conifer forests Lowland conifer forests
https://en.wikipedia.org/wiki/Kavli%20Institute%20for%20Theoretical%20Physics
The Kavli Institute for Theoretical Physics (KITP) is a research institute of the University of California, Santa Barbara. KITP is one of the most renowned institutes for theoretical physics in the world, and brings theorists in physics and related fields together to work on topics at the forefront of theoretical science. The National Science Foundation has been the principal supporter of the institute since it was founded as the Institute for Theoretical Physics in 1979. In a 2007 article in the Proceedings of the National Academy of Sciences, KITP was given the highest impact index in a comparison of nonbiomedical research organizations across the U.S. About In the early 2000s, the institute, formerly known as the Institute for Theoretical Physics, or ITP, was named for the Norwegian-American physicist and businessman Fred Kavli, in recognition of his donation of $7.5 million to the institute. Kohn Hall, which houses KITP, is located just beyond the Henley Gate at the East Entrance of the UCSB campus. The building was designed by the Driehaus Prize winner and New Classical architect Michael Graves, and a new wing designed by Graves was added in 2003–2004. Members The directors of the KITP since its beginning have been: Walter Kohn, 1979–1984 (Nobel Prize in Chemistry, 1998) Robert Schrieffer, 1984–1989 (Nobel Prize for Physics, 1972) James S. Langer, 1989–1995 (Oliver Buckley Prize (APS), 1997) James Hartle, 1995–1997 (Einstein Prize (APS), 2009) David Gross, 1997–2012 (Nobel Prize in Physics, 2004) Lars Bildsten, 2012–present (Helen B. Warner Prize (AAS), 1999; Dannie Heineman Prize for Astrophysics (AAS & American Institute of Physics), 2017) The Director, Deputy Director Mark Bowick, and Permanent Members of the KITP (Leon Balents, Lars Bildsten, David Gross, and Boris Shraiman) are also on the faculty of the UC Santa Barbara Physics Department. Former Permanent Members include Joseph Polchinski and Physics Nobel laureate Frank Wilczek. See also
https://en.wikipedia.org/wiki/Strength%20of%20glass
Glass typically has a tensile strength of . However, the theoretical upper bound on its strength is orders of magnitude higher: . This high value is due to the strong chemical Si–O bonds of silicon dioxide. Imperfections of the glass, such as bubbles, and in particular surface flaws, such as scratches, have a great effect on the strength of glass and decrease it even more than for other brittle materials. The chemical composition of the glass also impacts its tensile strength. The processes of thermal and chemical toughening can increase the tensile strength of glass. Glass has a compressive strength of . Strength of glass fiber Glass fibers have a much higher tensile strength than regular glass (200–500 times stronger). This is due to the reduced occurrence of flaws in glass fibers and to their small cross-sectional area, which constrains the maximum defect size (see size effect on structural strength). Strength of fiberglass Fiberglass's strength depends on the type. S-glass has a strength of while E-glass and C-glass have a strength of . Hardness Glass has a hardness of 6.5 on the Mohs scale of mineral hardness.
https://en.wikipedia.org/wiki/Vegv%C3%ADsir
A vegvísir (Icelandic for "wayfinder") is an Icelandic magical stave intended to help the bearer find their way through rough weather. The symbol is attested in the Huld Manuscript, collected in Iceland by Geir Vigfusson in Akureyri in 1860, and does not have any earlier attestations. A leaf of the manuscript provides an image of the vegvísir, gives its name, and, in prose, declares that "if this sign is carried, one will never lose one's way in storms or bad weather, even when the way is not known". It has been claimed that it also features in the Galdrabók, a magical grimoire, although this latter claim is denied and contested by Jackson Crawford. Stephen E. Flowers lists the vegvísir in his translation of the Galdrabók, but in a later publication cites it to “Isländische Zauberzeichen und Zauberbücher” by Ólafur Davíðsson rather than the Galdrabók. Daniel McCoy claims it appears only in the Huld manuscript. Tomáš Vlasatý claims that it appears not only in the Huld manuscript but also in two other Icelandic grimoires, both titled Galdrakver (designated Lbs 2917 a 4to and Lbs 4627 8vo), and that it has Christian roots. The vegvísir is often assumed to be a Viking symbol. There is, however, no evidence of this, and the Huld Manuscript, where it is mentioned, was collected eight centuries after the end of the Viking Age. Etymology Vegvísir is a compound word formed from the two Icelandic words vegur and vísir. Vegur means 'way, road, path', and vísir is an inflected form of vísa, 'to show, to let know, to guide'. Vegur is derived from an Old Norse word, in turn from Proto-Germanic and ultimately Proto-Indo-European roots. Vísir is derived from an Old Norse verb meaning 'to show, point out, indicate', or from a Proto-Germanic root meaning 'to visit'. Vegur ('way') + vísir ('pointer') derives its meaning from the same root as the English wise: it points someone the right way. See also Helm of Awe Notes Bibliography Flowers, Stephen (1989). The Galdrabók: An Icelandic Grimoire. Samuel Weiser, Inc. Justin Foster Huld Manuscript of Galdrastafir Witchcraft Magic Symbols and
https://en.wikipedia.org/wiki/Siffernotskrift
Siffernotskrift or sifferskrift is a form of numbered musical notation in which numerals are given that correspond to musical notes on a given instrument. The system was devised and used by the Swedish clergyman, psalmist, and music educator Johan Dillner (1785–1862) in the hymnal he wrote in 1830 for the psalmodicon, a one-string, bowed string instrument. Unlike the Galin-Paris-Chevé system of numbered notation, octave shifts are described using a combination of under/overlining and a shift in the position of the numbers. The first phrase of the hymn Vor Gud han er saa fast en Borg would become the following in Asian (GPC-derived) numbered notation: | | 5 7 | 6 ...
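The core idea of numbered notation — numerals standing for scale degrees of a key, so the same digit string can be realized on any instrument or in any key — can be sketched in code. This is an illustrative simplification: the degree-to-pitch mapping, the choice of C major, and the use of 0 for a rest are my assumptions, not details from Dillner's hymnal.

```python
# Sketch of numbered ("siffer") notation: numerals 1-7 name the degrees
# of a major scale. The realization below assumes C major by default.
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets of degrees 1..7
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def realize(digits, tonic="C"):
    """Turn a string of degree numerals (0 = rest) into note names."""
    root = NOTE_NAMES.index(tonic)
    notes = []
    for ch in digits:
        if ch == "0":
            notes.append("rest")
        elif ch.isdigit():
            notes.append(NOTE_NAMES[(root + MAJOR_STEPS[int(ch) - 1]) % 12])
    return notes

print(realize("5 7 6"))   # the fragment quoted above, read in C major
```

Because only degree numbers are stored, transposition is just a change of the `tonic` argument — the same property that made the system practical for a one-string instrument.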
https://en.wikipedia.org/wiki/Sacred%20Harp
Sacred Harp singing is a tradition of sacred choral music that originated in New England and was later perpetuated and carried on in the American South. The name is derived from The Sacred Harp, a ubiquitous and historically important tunebook printed in shape notes. The work was first published in 1844 and has reappeared in multiple editions ever since. Sacred Harp music represents one branch of an older tradition of American music that developed over the period 1770 to 1820 from roots in New England, with a significant, related development under the influence of "revival" services around the 1840s. This music was included in, and became profoundly associated with, books using the shape note style of notation popular in America in the 18th and early 19th centuries. Sacred Harp music is performed a cappella (voice only, without instruments) and originated as Protestant music. The music and its notation The name of the tradition comes from the title of the shape-note book from which the music is sung, The Sacred Harp. This book exists today in various editions, discussed below. In shape-note music, notes are printed in special shapes that help the reader identify them on the musical scale. There are two prevalent systems, one using four shapes, and one using seven. In the four-shape system used in The Sacred Harp, each of the four shapes is connected to a particular syllable, fa, sol, la, or mi, and these syllables are employed in singing the notes, just as in the more familiar system that uses do, re, mi, etc. (see solfège). The four-shape system is able to cover the full musical scale because each syllable-shape combination other than mi is assigned to two distinct notes of the scale. For example, the C major scale would be notated and sung as follows: fa, sol, la, fa, sol, la, mi, fa. The shape for fa is a triangle, sol an oval, la a rectangle, and mi a diamond. In Sacred Harp singing, pitch is not absolute. The shapes and notes designate degrees of the scale, not particular pitches. Thus f
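The four-shape scheme can be stated compactly in code. The mapping below follows the standard fasola solmization of the major scale (fa, sol, la, fa, sol, la, mi); the function names and data layout are my own illustration, not anything from The Sacred Harp itself.

```python
# Four-shape ("fasola") solmization: degrees 1..7 of the major scale.
# Every syllable/shape pair except mi covers two distinct scale degrees.
FASOLA = {
    1: ("fa", "triangle"), 2: ("sol", "oval"), 3: ("la", "rectangle"),
    4: ("fa", "triangle"), 5: ("sol", "oval"), 6: ("la", "rectangle"),
    7: ("mi", "diamond"),
}

def sing(degrees):
    """Return the syllables sung for a sequence of scale degrees."""
    return [FASOLA[d][0] for d in degrees]

# An ascending major scale; degree 8 is degree 1 an octave up, hence fa:
scale = sing([1, 2, 3, 4, 5, 6, 7]) + ["fa"]
print(scale)   # fa sol la fa sol la mi fa
```

Note that the mapping is from scale degrees, not absolute pitches, which mirrors the point above that pitch in Sacred Harp singing is not absolute.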
https://en.wikipedia.org/wiki/Weird%20Nature
Weird Nature is a 2002 British documentary television series produced by John Downer Productions for the BBC and Discovery Channel. The series features strange behavior in nature—specifically, the animal world. The series now airs on the Science Channel and Animal Planet. The series took three years to make and a new filming technique was used to show animal movements in 3D. Each episode, however, tended to end with a piece about how humans are probably the oddest species of all. For example, in the end of the episode about locomotion, the narrator states how unusual it is for a mammal to be bipedal. In the episode about defences, the narrator explains that humans have no real natural defences, save for their big brains. Episodes Series 1 (2002)
https://en.wikipedia.org/wiki/List%20of%20operator%20splitting%20topics
This is a list of operator splitting topics.

General

Alternating direction implicit method — finite difference method for parabolic, hyperbolic, and elliptic partial differential equations
GRADELA — simple gradient elasticity model
Matrix splitting — general method of splitting a matrix operator into a sum or difference of matrices
Paul Tseng — resolved question on convergence of matrix splitting algorithms
PISO algorithm — pressure-velocity calculation for Navier-Stokes equations
Projection method (fluid dynamics) — computational fluid dynamics method
Reactive transport modeling in porous media — modeling of chemical reactions and fluid flow through the Earth's crust
Richard S. Varga — developed matrix splitting
Strang splitting — specific numerical method for solving differential equations using operator splitting
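As an illustration of the Strang splitting entry above, the sketch below advances u' = (A + B)u using two non-commuting sub-flows that are each solvable exactly (a rotation and an anisotropic decay — my own toy operators, chosen arbitrarily), and exhibits the method's second-order accuracy: halving the step size cuts the error by roughly a factor of four.

```python
import math

# Exact sub-flows for u' = A u and u' = B u, where A = [[0, 1], [-1, 0]]
# (rotation) and B = diag(-0.5, -0.1) (decay). A and B do not commute,
# so splitting incurs an error that depends on the step size dt.
def flow_A(u, t):
    c, s = math.cos(t), math.sin(t)
    return (c * u[0] + s * u[1], -s * u[0] + c * u[1])

def flow_B(u, t):
    return (math.exp(-0.5 * t) * u[0], math.exp(-0.1 * t) * u[1])

def strang_step(u, dt):
    # Strang splitting: half step of A, full step of B, half step of A.
    u = flow_A(u, dt / 2)
    u = flow_B(u, dt)
    return flow_A(u, dt / 2)

def integrate(u, T, n):
    dt = T / n
    for _ in range(n):
        u = strang_step(u, dt)
    return u

u0, T = (1.0, 0.0), 1.0
ref = integrate(u0, T, 1 << 14)                 # fine-step reference solution
err = lambda n: math.dist(integrate(u0, T, n), ref)
print(err(16) / err(32))                         # close to 4: second order
```

The symmetric A-B-A arrangement is what lifts the scheme from first order (plain Lie splitting) to second order.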
https://en.wikipedia.org/wiki/Centered%20hexagonal%20number
In mathematics and combinatorics, a centered hexagonal number, or hex number, is a centered figurate number that represents a hexagon with a dot in the center and all other dots surrounding the center dot in a hexagonal lattice. The first four centered hexagonal numbers are 1, 7, 19 and 37, built up by successively adding rings of +6, +12 and +18 dots around the central dot. Centered hexagonal numbers should not be confused with cornered hexagonal numbers, which are figurate numbers in which the associated hexagons share a vertex. The sequence of centered hexagonal numbers starts out as follows: 1, 7, 19, 37, 61, 91, 127, 169, 217, 271, 331, 397, 469, 547, 631, 721, 817, 919. Formula The nth centered hexagonal number is given by the formula n³ − (n − 1)³ = 3n(n − 1) + 1. Expressing the formula as 1 + 6(n(n − 1)/2) shows that the centered hexagonal number for n is 1 more than 6 times the (n − 1)th triangular number. In the opposite direction, the index n corresponding to the centered hexagonal number x can be calculated using the formula n = (3 + √(12x − 3))/6. This can be used as a test for whether a number is centered hexagonal: it will be if and only if the above expression is an integer. Recurrence and generating function The centered hexagonal numbers satisfy the recurrence relation a(n) = a(n − 1) + 6(n − 1), with a(1) = 1. From this the generating function can be computed; it is x(x² + 4x + 1)/(1 − x)³. Properties In base 10 one can notice that the centered hexagonal numbers' rightmost (least significant) digits follow the pattern 1–7–9–7–1 (repeating with period 5). This follows from the last digits of the triangular numbers, which repeat 0–1–3–1–0 when taken modulo 5. In base 6 the rightmost digit is always 1: 1, 11, 31, 101, 141, 231, 331, 441, ... (all in base 6). This follows from
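The closed form 3n(n − 1) + 1 and the integer test on (3 + √(12x − 3))/6 translate directly into code (a short sketch; the function names are mine):

```python
from math import isqrt

def centered_hex(n):
    """n-th centered hexagonal number: n^3 - (n-1)^3 = 3n(n-1) + 1."""
    return 3 * n * (n - 1) + 1

def is_centered_hex(x):
    """x is centered hexagonal iff (3 + sqrt(12x - 3)) / 6 is an integer."""
    d = 12 * x - 3
    r = isqrt(d)
    return r * r == d and (3 + r) % 6 == 0

print([centered_hex(n) for n in range(1, 7)])    # [1, 7, 19, 37, 61, 91]
print(is_centered_hex(91), is_centered_hex(90))  # True False
```

Using `isqrt` keeps the membership test exact for arbitrarily large integers, avoiding floating-point square roots.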
https://en.wikipedia.org/wiki/Wafer%20backgrinding
Wafer backgrinding is a semiconductor device fabrication step during which wafer thickness is reduced to allow stacking and high-density packaging of integrated circuits (ICs). ICs are produced on semiconductor wafers that undergo a multitude of processing steps. The silicon wafers predominantly used today have diameters of 200 and 300 mm. They are roughly 750 μm thick to ensure a minimum of mechanical stability and to avoid warping during high-temperature processing steps. Smartcards, USB memory sticks, smartphones, handheld music players, and other ultra-compact electronic products would not be feasible in their present form without minimizing the size of their various components along all dimensions. The backsides of the wafers are thus ground prior to wafer dicing (separation of the individual microchips). Wafers thinned down to 75 to 50 μm are common today. Prior to grinding, wafers are commonly laminated with UV-curable back-grinding tape, which protects the wafer surface from damage during back-grinding and prevents wafer surface contamination caused by infiltration of grinding fluid and/or debris. The wafers are also washed with deionized water throughout the process, which helps prevent contamination. The process is also known as "backlap", "backfinish" or "wafer thinning". See also Back-illuminated sensor
https://en.wikipedia.org/wiki/Optogenetics
Optogenetics is a biological technique to control the activity of neurons or other cell types with light. This is achieved by expression of light-sensitive ion channels, pumps or enzymes specifically in the target cells. On the level of individual cells, light-activated enzymes and transcription factors allow precise control of biochemical signaling pathways. In systems neuroscience, the ability to control the activity of a genetically defined set of neurons has been used to understand their contribution to decision making, learning, fear memory, mating, addiction, feeding, and locomotion. In a first medical application of optogenetic technology, vision was partially restored in a blind patient. Optogenetic techniques have also been introduced to map the functional connectivity of the brain. By altering the activity of genetically labelled neurons with light and using imaging and electrophysiology techniques to record the activity of other cells, researchers can identify the statistical dependencies between cells and brain regions. In a broader sense, optogenetics also includes methods to record cellular activity with genetically encoded indicators. In 2010, optogenetics was chosen as the "Method of the Year" across all fields of science and engineering by the interdisciplinary research journal Nature Methods. At the same time, optogenetics was highlighted in the article on "Breakthroughs of the Decade" in the academic research journal Science. History In 1979, Francis Crick suggested that controlling all cells of one type in the brain, while leaving the others more or less unaltered, would be a real challenge for neuroscience. Crick speculated that a technology using light might be useful to control neuronal activity with temporal and spatial precision, but at the time there was no technique to make neurons responsive to light. By the early 1990s, L. C. Katz and E. Callaway had shown that light could uncage glutamate. Heberle and Büldt in 1994 had already shown fun
https://en.wikipedia.org/wiki/Matroid%20rank
In the mathematical theory of matroids, the rank of a matroid is the maximum size of an independent set in the matroid. The rank of a subset S of elements of the matroid is, similarly, the maximum size of an independent subset of S, and the rank function of the matroid maps sets of elements to their ranks. The rank function is one of the fundamental concepts of matroid theory via which matroids may be axiomatized. Matroid rank functions form an important subclass of the submodular set functions. The rank functions of matroids defined from certain other types of mathematical object such as undirected graphs, matrices, and field extensions are important within the study of those objects. Examples In all examples, E is the base set of the matroid, and B is some subset of E. Let M be the free matroid, where the independent sets are all subsets of E. Then the rank function of M is simply: r(B) = |B|. Let M be a uniform matroid, where the independent sets are the subsets of E with at most k elements, for some integer k. Then the rank function of M is: r(B) = min(k, |B|). Let M be a partition matroid: the elements of E are partitioned into categories, each category c has capacity kc, and the independent sets are those containing at most kc elements of category c. Then the rank function of M is: r(B) = Σc min(kc, |Bc|), where Bc is the subset of B contained in category c. Let M be a graphic matroid, where the independent sets are all the acyclic edge-sets (forests) of some fixed undirected graph G. Then the rank function r(B) is the number of vertices of G minus the number of connected components of the subgraph with edge set B (counting single-vertex components). Properties and axiomatization The rank function of a matroid obeys the following properties. (R1) The value of the rank function is always a non-negative integer and the rank of the empty set is 0. (R2) For any two subsets A and B of E, r(A ∪ B) + r(A ∩ B) ≤ r(A) + r(B). That is, the rank is a submodular set function. (R3) For any set and element ,
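The partition-matroid rank formula r(B) = Σc min(kc, |Bc|) is easy to realize in code. The sketch below (illustrative names and a made-up two-category ground set, not a library API) also exhaustively spot-checks the submodular inequality on a small example:

```python
from itertools import combinations

# Partition matroid on E = {a,...,f}: two categories with capacities 1 and 2.
categories = {"c1": {"a", "b", "c"}, "c2": {"d", "e", "f"}}
capacity = {"c1": 1, "c2": 2}

def rank(B):
    """r(B) = sum over categories c of min(k_c, |B intersect E_c|)."""
    return sum(min(capacity[c], len(B & members))
               for c, members in categories.items())

E = set().union(*categories.values())
subsets = [set(s) for r in range(len(E) + 1)
           for s in combinations(sorted(E), r)]

# Submodularity: r(A | B) + r(A & B) <= r(A) + r(B) for all 64 x 64 pairs.
assert all(rank(A | B) + rank(A & B) <= rank(A) + rank(B)
           for A in subsets for B in subsets)
print(rank({"a", "b", "d"}))   # min(1, 2) + min(2, 1) = 2
```

The same exhaustive check could be pointed at any candidate rank function to test whether it satisfies the matroid axioms on a small ground set.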
https://en.wikipedia.org/wiki/Quaternionic%20analysis
In mathematics, quaternionic analysis is the study of functions with quaternions as the domain and/or range. Such functions can be called functions of a quaternion variable just as functions of a real variable or a complex variable are called. As with complex and real analysis, it is possible to study the concepts of analyticity, holomorphy, harmonicity and conformality in the context of quaternions. Unlike the complex numbers and like the reals, the four notions do not coincide. Properties The projections of a quaternion onto its scalar part or onto its vector part, as well as the modulus and versor functions, are examples that are basic to understanding quaternion structure. An important example of a function of a quaternion variable is f(q) = u q u⁻¹, for a fixed versor (unit quaternion) u, which rotates the vector part of q by twice the angle represented by u. The quaternion multiplicative inverse q ↦ q⁻¹ is another fundamental function, but, as with other number systems, inverting zero and related problems are generally excluded due to the nature of dividing by zero. Affine transformations of quaternions have the form f(q) = aq + b, where a and b are fixed quaternions. Linear fractional transformations of quaternions can be represented by elements of the matrix ring M(2, H) operating on the projective line over H. For instance, the mappings q ↦ uqv, where u and v are fixed versors, serve to produce the motions of elliptic space. Quaternion variable theory differs in some respects from complex variable theory. For example: The complex conjugate mapping of the complex plane is a central tool but requires the introduction of a non-arithmetic, non-analytic operation. Indeed, conjugation changes the orientation of plane figures, something that arithmetic functions do not change. In contrast to the complex conjugate, the quaternion conjugation can be expressed arithmetically, as q* = −(q + iqi + jqj + kqk)/2. This equation can be proven starting with the basis {1, i, j, k}: 1* = 1, i* = −i, j* = −j, k* = −k. Consequently, since the map q ↦ −(q + iqi + jqj + kqk)/2 is linear and agrees with conjugation on the basis, it equals conjugation everywhere. The success of complex analysis in providing a rich family of holomorphic functions for scientific work has engaged some work
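The arithmetic expression for quaternion conjugation, q* = −(q + iqi + jqj + kqk)/2, can be checked numerically. The sketch below uses a hand-rolled Hamilton product on (w, x, y, z) tuples (not a library API) and recovers the usual conjugate, which negates the vector part:

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def conj_arithmetic(q):
    """Conjugate via the identity q* = -(q + iqi + jqj + kqk) / 2."""
    i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
    total = [0.0] * 4
    for term in (q, qmul(qmul(i, q), i), qmul(qmul(j, q), j),
                 qmul(qmul(k, q), k)):
        total = [a + b for a, b in zip(total, term)]
    return tuple(-t / 2 for t in total)

q = (1.0, 2.0, -3.0, 4.0)
print(conj_arithmetic(q))   # (1.0, -2.0, 3.0, -4.0): the usual conjugate
```

This is exactly the point made above: unlike the complex conjugate, the quaternion conjugate is expressible purely through quaternion arithmetic.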
https://en.wikipedia.org/wiki/Focal%20adhesion%20targeting%20region
In structural and cell biology, the focal adhesion targeting domain is a conserved protein domain that was first identified in focal adhesion kinase (FAK), also known as PTK2 protein tyrosine kinase 2 (PTK2). Focal adhesions are multi-protein intracellular signalling complexes that link the cellular actin microfilament cytoskeleton, through the cell membrane via transmembrane integrin proteins, to the extracellular matrix. Focal adhesions form and dissipate as cells attach and detach from matrix during cell adhesion and cell migration. The FAK focal adhesion targeting (FAT) domain is a C-terminal region necessary and sufficient for localizing FAK to focal adhesions, allowing FAK to regulate cell adhesion and migration by localizing its protein kinase activity at the junction of internal cytoskeleton and external cell attachment points. The crystal structure of FAT shows it to form a four-helix bundle that binds specifically to Leucine-Aspartate (LD)-repeat motif peptides in the related focal adhesion proteins paxillin (PXN), leupaxin (LPXN) and TGFB1I1/Hic-5. FAT domains with a similar 4-helix bundle structure are also found in other proteins that localize to paxillin-containing focal adhesions and are involved in cell adhesion and migration, including the FAK-related protein kinase PTK2B/FAK2/PYK2, and alpha-catenin, vinculin, Programmed cell death protein 10 (PDCD10)/Cerebral Cavernous Malformation protein 3 (CCM3) and GIT1/GIT2.
https://en.wikipedia.org/wiki/Photodissociation
Photodissociation, photolysis, photodecomposition, or photofragmentation is a chemical reaction in which molecules of a chemical compound are broken down by photons. It is defined as the interaction of one or more photons with one target molecule. Photodissociation is not limited to visible light. Any photon with sufficient energy can affect the chemical bonds of a chemical compound. Since a photon's energy is inversely proportional to its wavelength, electromagnetic radiation with the energy of visible light or higher, such as ultraviolet light, X-rays, and gamma rays, can induce such reactions. Photolysis in photosynthesis Photolysis is part of the light-dependent reaction or light phase or photochemical phase or Hill reaction of photosynthesis. The general reaction of photosynthetic photolysis can be given in terms of photons as: H2A + 2 photons (light) → 2 e− + 2 H+ + A. The chemical nature of "A" depends on the type of organism. Purple sulfur bacteria oxidize hydrogen sulfide (H2S) to sulfur (S). In oxygenic photosynthesis, water (H2O) serves as a substrate for photolysis resulting in the generation of diatomic oxygen (O2). This is the process which returns oxygen to Earth's atmosphere. Photolysis of water occurs in the thylakoids of cyanobacteria and the chloroplasts of green algae and plants. Energy transfer models The conventional semi-classical model describes the photosynthetic energy transfer process as one in which excitation energy hops from light-capturing pigment molecules to reaction center molecules step-by-step down the molecular energy ladder. The effectiveness of photons of different wavelengths depends on the absorption spectra of the photosynthetic pigments in the organism. Chlorophylls absorb light in the violet-blue and red parts of the spectrum, while accessory pigments capture other wavelengths as well. The phycobilins of red algae absorb blue-green light which penetrates deeper into water than red light, enabling them to photosynthesize in deep waters. Each absorbed photon causes
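The inverse relation between photon energy and wavelength is the Planck relation E = hc/λ; a quick numerical sketch (the sample wavelengths are illustrative choices):

```python
H = 6.62607015e-34    # Planck constant, J*s (exact in the 2019 SI)
C = 299_792_458       # speed of light, m/s (exact)
EV = 1.602176634e-19  # joules per electronvolt (exact)

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / lambda, returned in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Energy rises as wavelength falls: UV photons carry several eV, which is
# comparable to chemical bond energies, while red photons carry less.
for nm in (700, 400, 250):
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
```

The product E·λ is the constant hc ≈ 1240 eV·nm, a convenient rule of thumb for judging which part of the spectrum can break a given bond.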
https://en.wikipedia.org/wiki/Lie%20point%20symmetry
Lie point symmetry is a concept in advanced mathematics. Towards the end of the nineteenth century, Sophus Lie introduced the notion of Lie group in order to study the solutions of ordinary differential equations (ODEs). He showed the following main property: the order of an ordinary differential equation can be reduced by one if it is invariant under a one-parameter Lie group of point transformations. This observation unified and extended the available integration techniques. Lie devoted the remainder of his mathematical career to developing these continuous groups, which now have an impact on many areas of mathematically based sciences. The applications of Lie groups to differential systems were mainly established by Lie and Emmy Noether, and then advocated by Élie Cartan. Roughly speaking, a Lie point symmetry of a system is a local group of transformations that maps every solution of the system to another solution of the same system. In other words, it maps the solution set of the system to itself. Elementary examples of Lie groups are translations, rotations and scalings. Lie symmetry theory is a well-known subject; it concerns continuous symmetries, as opposed to, for example, discrete symmetries. The literature for this theory can be found, among other places, in the notes below. Overview Types of symmetries Lie groups and hence their infinitesimal generators can be naturally "extended" to act on the space of independent variables, state variables (dependent variables) and derivatives of the state variables up to any finite order. There are many other kinds of symmetries. For example, contact transformations let the coefficients of the transformation's infinitesimal generator depend also on first derivatives of the coordinates. Lie-Bäcklund transformations let them involve derivatives up to an arbitrary order. The possibility of the existence of such symmetries was recognized by Noether. For Lie point symmetries, the coefficients of the infinitesimal generato
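The defining property — a symmetry maps every solution to another solution — can be checked concretely on a toy example of my own choosing (not taken from the literature): the ODE y' = y² admits the one-parameter scaling group (x, y) → (λx, y/λ). Transforming the explicit solution y(x) = 1/(1 − x) should again satisfy the equation:

```python
# For y' = y^2, the scaling (x, y) -> (lam * x, y / lam) maps solutions to
# solutions: if y solves the ODE, so does y_lam(x) = y(x / lam) / lam.
def y(x):
    return 1.0 / (1.0 - x)           # one solution of y' = y^2

def transformed(x, lam=2.0):
    return y(x / lam) / lam           # image of y under the scaling symmetry

def residual(f, x, h=1e-6):
    """|f'(x) - f(x)^2| via a central difference; ~0 iff f solves the ODE."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return abs(deriv - f(x) ** 2)

for x in (0.0, 0.3, 0.6):
    print(residual(transformed, x))   # tiny: the transformed curve also solves it
```

Here the check is numerical, but the same fact follows symbolically: transformed(x) equals 1/(2 − x), whose derivative is again its own square.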
https://en.wikipedia.org/wiki/%E2%88%921
In mathematics, −1 (negative one or minus one) is the additive inverse of 1, that is, the number that when added to 1 gives the additive identity element, 0. It is the negative integer greater than negative two (−2) and less than 0. Algebraic properties Multiplication Multiplying a number by −1 is equivalent to changing the sign of the number – that is, for any we have . This can be proved using the distributive law and the axiom that 1 is the multiplicative identity: . Here we have used the fact that any number times 0 equals 0, which follows by cancellation from the equation . In other words, , so is the additive inverse of , i.e. , as was to be shown. Square of −1 The square of −1, i.e. −1 multiplied by −1, equals 1. As a consequence, a product of two negative numbers is positive. For an algebraic proof of this result, start with the equation . The first equality follows from the above result, and the second follows from the definition of −1 as additive inverse of 1: it is precisely that number which when added to 1 gives 0. Now, using the distributive law, it can be seen that . The third equality follows from the fact that 1 is a multiplicative identity. But now adding 1 to both sides of this last equation implies . The above arguments hold in any ring, a concept of abstract algebra generalizing integers and real numbers. Square roots of −1 Although there are no real square roots of −1, the complex number satisfies , and as such can be considered as a square root of −1. The only other complex number whose square is −1 is − because there are exactly two square roots of any non‐zero complex number, which follows from the fundamental theorem of algebra. In the algebra of quaternions – where the fundamental theorem does not apply – which contains the complex numbers, the equation has infinitely many solutions. Exponentiation to negative integers Exponentiation of a non‐zero real number can be extended to negative integers. We make the defi
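These sign rules are easy to confirm mechanically; a minimal check, using Python's built-in complex type for the square roots of −1:

```python
# Multiplication by -1 flips sign, and (-1) * (-1) = 1.
for x in (3, -7, 0.5):
    assert -1 * x == -x
assert (-1) * (-1) == 1

# The two complex square roots of -1 are i and -i (written 1j in Python).
assert (1j) ** 2 == -1 and (-1j) ** 2 == -1

# Negative-integer exponents: x**-n == 1 / x**n for nonzero x.
x = 2.0
print(x ** -3, 1 / x ** 3)   # 0.125 0.125
```

The quaternion case mentioned above is the one place this picture breaks: there, infinitely many elements square to −1, so no such finite check exhausts them.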
https://en.wikipedia.org/wiki/Mathematics%20Education%20Research%20Journal
Mathematics Education Research Journal is a quarterly peer-reviewed scientific journal covering mathematics education. It was established in 1989 and is published by Springer Science+Business Media on behalf of the Mathematics Education Research Group of Australasia. The editor-in-chief is Peter Grootenboer (Griffith University). Abstracting and indexing The journal is abstracted and indexed in the Astrophysics Data System, EBSCO databases, Emerging Sources Citation Index, ERIC, ProQuest databases, and Scopus. See also List of scientific journals in mathematics education
https://en.wikipedia.org/wiki/47th%20meridian%20east
The meridian 47° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Europe, Asia, Africa, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole. The 47th meridian east forms a great circle with the 133rd meridian west. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 47th meridian east passes through: {| class="wikitable plainrowheaders" ! scope="col" width="115" | Co-ordinates ! scope="col" | Country, territory or sea ! scope="col" | Notes |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Arctic Ocean | style="background:#b0e0e6;" | |- | ! scope="row" | | Island of Alexandra Land, Franz Josef Land |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Barents Sea | style="background:#b0e0e6;" | Cambridge Channel |- | ! scope="row" | | Island of Zemlya Georga, Franz Josef Land |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Barents Sea | style="background:#b0e0e6;" | |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | For about 18km |- | ! scope="row" | | |- | ! scope="row" | | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Caspian Sea | style="background:#b0e0e6;" | Kizlyar Bay |- | ! scope="row" | | |- | ! scope="row" | | Passing through Nagorno-Karabakh |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | Passing just east of Riyadh |- | ! scope="row" | | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Indian Ocean | style="background:#b0e0e6;" | Gulf of Aden |- | ! scope="row" | | Somaliland |- | ! scope="row" | | |- | ! scope="row" | | |-valign="top" | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Indian Ocean | style="background:#b0e0e6;" | Passing just west of the Glorioso Islands atoll, |
https://en.wikipedia.org/wiki/Orangutan%E2%80%93human%20last%20common%20ancestor
The phylogenetic split of Hominidae into the subfamilies Homininae and Ponginae is dated to the middle Miocene, roughly 18 to 14 million years ago. This split is also referenced as the "orangutan–human last common ancestor" by Jeffrey H. Schwartz, professor of anthropology at the University of Pittsburgh School of Arts and Sciences, and John Grehan, director of science at the Buffalo Museum. Phylogeny The cladogram can be summarized as follows (approximate divergence dates in millions of years ago; † = extinct): Hominidae (18) splits into Ponginae (14) and the lineage leading to Homininae (13); within Homininae are Graecopithecini (†8), Dryopithecini (†7), Samburupithecus (†9), and crown Homininae (10), which splits into Hominini (7) and Gorillini, the latter comprising crown Gorillini and the extinct Chororapithecus. Hominoidea (commonly known as apes) are thought to have evolved in Africa by about 18 million years ago. Among the genera thought to be in the ape lineage leading up to the emergence of the great apes (Hominidae) about 13 million years ago are Proconsul, Rangwapithecus, Dendropithecus, Nacholapithecus, Equatorius, Afropithecus and Kenyapithecus, all from East Africa. During the early Miocene, Europe and Africa were connected by land bridges over the Tethys Sea. Apes showed up in Europe in the fossil record beginning 17 million years ago. Great apes show up in the fossil record in Europe and Asia beginning about 12 million years ago. The only living great ape in Asia is the orangutan. Various genera of dryopithecines have been identified and are classified as an extinct sister clade of the Homininae. Possible further members of this tribe, indicated within Ponginae in the cladistic tree above, or of Homininae or else third or more tribes yet unnamed, include the extinct Pierolapithecus, Hispanopithecus, Lufengpithecus and Khoratpithecus. Dryopithecines' nominate genus Dryopithecus was first uncovered in France, and it had a large frontal sinus w
https://en.wikipedia.org/wiki/Tobias%20Preis
Tobias Preis is Professor of Behavioral Science and Finance at Warwick Business School and a fellow of the Alan Turing Institute. He is a computational social scientist focussing on measuring and predicting human behavior with online data. At Warwick Business School he directs the Data Science Lab together with his colleague Suzy Moat. Preis holds visiting positions at Boston University and University College London. In 2011, he worked as a senior research fellow with H. Eugene Stanley at Boston University and with Dirk Helbing at ETH Zurich. In 2009, he was named a member of the Gutenberg Academy. In 2007, he founded Artemis Capital Asset Management GmbH, a proprietary trading firm which is based in Germany. He was awarded a Ph.D. in physics from the Johannes Gutenberg University of Mainz in Germany. Preis has quantified and modelled financial market fluctuations. In addition, he has made contributions to general-purpose computing on graphics processing units (GPGPU) in statistical physics and computational finance. Research In 2010, Preis headed a research team which provided evidence that search engine query data and stock market fluctuations are correlated. The team discovered a link between the number of Internet searches for company names and transaction volumes of the corresponding stocks on a weekly time scale. In a TEDx talk, Preis highlights the opportunities offered by studies of citizens' online behaviour to gain insights into social and economic decision making. In 2012, Preis used Google Trends data to demonstrate together with his colleagues Suzy Moat, H. Eugene Stanley and Steven R. Bishop that Internet users from countries with a higher per capita gross domestic product (GDP) are more likely to search for information about the future than information about the past. The findings, published in the journal Scientific Reports, suggest there may be a link between online behaviour and real-world economic indicators. Preis and colleagues examined Google
https://en.wikipedia.org/wiki/William%20B.%20Graham%20Prize%20for%20Health%20Services%20Research
The William B. Graham Prize for Health Services Research is an award acknowledging contributions to health care research. It is funded by the Baxter International Foundation, and awarded every year through the US-based Association of University Programs in Health Administration (AUPHA). The recipient is awarded $25,000, with another $25,000 given to a non-profit institution selected by him or her. Until 2005, the prize was named The Baxter International Foundation Prize for Health Services Research. It was renamed in 2006, after the death of long-time CEO of Baxter International, William B. Graham.

List of recipients

The Baxter International Foundation Prize

1986: Avedis Donabedian
1987: Brian Abel-Smith
1988: Joseph Newhouse and Robert H. Brook
1989: M. Eisenberg
1990: Rosemary A. Stevens
1991: Victor Fuchs
1992: John D. Thompson and Robert B. Fetter
1993: John Wennberg
1994: Alain Enthoven
1995: Stephen M. Shortell
1996: Kerr L. White
1997: David Mechanic
1998: Harold S. Luft
1999: Ronald Andersen and Odin W. Anderson
2000: Karen Davis
2001: Robert G. Evans
2002: John M. Eisenberg (awarded posthumously)
2003: (unknown)
2004: Barbara Starfield
2005: David Sackett

William B. Graham Prize

2006: Linda Aiken
2007: Donald Berwick
2008: Michael Marmot
2009: Carolyn Clancy
2010: Uwe Reinhardt
2011: Edward H. Wagner
2012: Mark V. Pauly
2013: Dorothy P. Rice
2014: Stuart Altman
2015: Anthony Culyer and Alan Maynard
2016: John K. Iglehart
2017: David Blumenthal

See also

List of medicine awards
https://en.wikipedia.org/wiki/Australia%20Bioinformatics%20Resource
The Australia Bioinformatics Resource (EMBL-ABR) (formerly the Bioinformatics Resource Australia - EMBL (BRAEMBL)) was a significant initiative under Australia's associate membership of EMBL. Since 2019, all activities carried out under EMBL-ABR have rolled over into the Bioplatforms Australia (NCRIS-funded) Australian BioCommons, under new funding agreements and led by Associate Professor Andrew Lonie.

EMBL-ABR aimed to:

Increase Australia’s capacity to collect, integrate, analyse, exploit, share and archive the large heterogeneous data sets now part of modern life sciences research
Contribute to the development of and provide training in data, tools and platforms to enable Australia’s life science researchers to undertake research in the age of big data
Showcase Australian research and datasets at an international level
Enable engagement in international programs that create, deploy and develop best practice approaches to data management, software tools and methods, computational platforms and bioinformatics services

EMBL-ABR was supported by Bioplatforms Australia and the University of Melbourne. The EMBL-ABR Hub was hosted at the Victorian Life Sciences Computation Initiative (VLSCI) at the University of Melbourne. In July 2016, EMBL-ABR announced an agreement to collaborate with GOBLET to develop training programs for bioinformatics.
https://en.wikipedia.org/wiki/Partial%20pressure
In a mixture of gases, each constituent gas has a partial pressure, which is the notional pressure of that constituent gas as if it alone occupied the entire volume of the original mixture at the same temperature. The total pressure of an ideal gas mixture is the sum of the partial pressures of the gases in the mixture (Dalton's law). The partial pressure of a gas is a measure of the thermodynamic activity of the gas's molecules. Gases dissolve, diffuse, and react according to their partial pressures, not according to their concentrations in gas mixtures or liquids. This general property of gases is also true in chemical reactions of gases in biology. For example, the necessary amount of oxygen for human respiration, and the amount that is toxic, is set by the partial pressure of oxygen alone. This is true across a very wide range of different concentrations of oxygen present in various inhaled breathing gases or dissolved in blood; consequently, mixture ratios, like that of breathable 20% oxygen and 80% nitrogen, are determined by volume instead of by weight or mass. Furthermore, the partial pressures of oxygen and carbon dioxide are important parameters in tests of arterial blood gases. That said, these pressures can also be measured in, for example, cerebrospinal fluid.

Symbol

The symbol for pressure is usually P or p, which may use a subscript to identify the pressure, and gas species are also referred to by subscript. When combined, these subscripts are applied recursively. Examples:

P1 or p1 = pressure at time 1
PH2 or pH2 = partial pressure of hydrogen
PvO2 or pvO2 = venous partial pressure of oxygen

Dalton's law of partial pressures

Dalton's law expresses the fact that the total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the individual gases in the mixture. This equality arises from the fact that in an ideal gas, the molecules are so far apart that they do not interact with each other. Most actual real-world gases come very close to
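Dalton's law as described above can be sketched numerically. The following is a minimal illustration in Python, not part of the article itself; the mole-fraction values used for dry air are approximate figures chosen for the example.

```python
# Sketch of Dalton's law for an ideal gas mixture: the total pressure is
# the sum of the partial pressures, and each partial pressure equals the
# constituent's mole fraction times the total pressure.

def partial_pressure(mole_fraction: float, p_total: float) -> float:
    """Partial pressure of one constituent gas (same unit as p_total)."""
    return mole_fraction * p_total

def total_pressure(partials: dict) -> float:
    """Total pressure of the mixture: the sum of the partial pressures."""
    return sum(partials.values())

P_ATM = 101.325  # kPa, one standard atmosphere at sea level

# Approximate mole fractions of dry air (illustrative values).
air = {"N2": 0.7809, "O2": 0.2095, "Ar": 0.0093, "CO2": 0.0004}

partials = {gas: partial_pressure(x, P_ATM) for gas, x in air.items()}
# The partial pressure of oxygen at sea level comes out near 21.2 kPa,
# and summing the partial pressures recovers the total pressure.
```

The example also mirrors the physiological point in the text: it is the ~21 kPa oxygen partial pressure, not the 20.95% concentration as such, that governs respiration.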