https://en.wikipedia.org/wiki/Multidrop%20bus
A multidrop bus (MDB) is a computer bus in which all components are connected to the same electrical circuit. A process of arbitration determines which device sends information at any point. The other devices listen for the data they are intended to receive. Multidrop buses have the advantage of simplicity and extensibility, but their differing electrical characteristics make them relatively unsuitable for high-frequency or high-bandwidth applications.

In computing

Since 2000, multidrop standards such as PCI and Parallel ATA have increasingly been replaced by point-to-point systems such as PCI Express and SATA. Modern SDRAM chips exemplify the problem of electrical impedance discontinuity. Fully Buffered DIMM is an alternative approach to connecting multiple DRAM modules to a memory controller.

For vending machines

MDB/ICP

MDB/ICP (formerly known as MDB) is a multidrop bus computer networking protocol used within the vending machine industry, currently published by the American National Automatic Merchandising Association.

ccTalk

The ccTalk multidrop bus protocol uses a TTL-level asynchronous serial protocol. It uses address randomization to allow multiple similar devices on the bus; after randomization the devices can be distinguished by their serial numbers. ccTalk was developed by Coin Controls but is used by multiple vendors.

See also

Bus network topology
EIA-485
1-Wire
Open collector

External links

IBM Journal of Research and Development
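The address-randomization idea described for ccTalk can be sketched as a small simulation. This is an illustrative model only, not the actual ccTalk wire protocol; the function name, address range, and collision-handling loop are all assumptions made for the sketch.

```python
import random

def assign_addresses(serial_numbers, address_space=range(2, 256), seed=None):
    """Toy model of ccTalk-style address randomization: each device picks a
    random bus address; devices that collide re-randomize, and remain
    distinguishable throughout by their unique serial numbers."""
    rng = random.Random(seed)
    addresses = {}          # address -> serial number
    pending = list(serial_numbers)
    while pending:
        choices = {}
        for sn in pending:
            # each pending device independently picks a random address
            choices.setdefault(rng.choice(list(address_space)), []).append(sn)
        pending = []
        for addr, sns in choices.items():
            if addr not in addresses and len(sns) == 1:
                addresses[addr] = sns[0]   # unique pick: address claimed
            else:
                pending.extend(sns)        # collision: these try again

    return addresses

table = assign_addresses(["SN-001", "SN-002", "SN-003"], seed=42)
```

Every device ends up with a distinct address, and the serial number keeps each one identifiable even before the randomization settles.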
https://en.wikipedia.org/wiki/Franz%20Taurinus
Franz Adolph Taurinus (15 November 1794 – 13 February 1874) was a German mathematician known for his work on non-Euclidean geometry.

Life

Franz Taurinus was the son of Julius Ephraim Taurinus, a court official of the Count of Erbach-Schönberg, and Luise Juliane Schweikart. He studied law in Heidelberg, Gießen and Göttingen. He lived as a private scholar in Cologne.

Hyperbolic geometry

Taurinus corresponded with his uncle Ferdinand Karl Schweikart (1780–1859), a law professor in Königsberg, about mathematics among other things. Schweikart examined a model (after Giovanni Girolamo Saccheri and Johann Heinrich Lambert) in which the parallel postulate is not satisfied, and in which the sum of the three angles of a triangle is less than two right angles (what is now called hyperbolic geometry). While Schweikart never published his work (which he called "astral geometry"), he sent a short summary of its main principles by letter to Carl Friedrich Gauß.

Motivated by the work of Schweikart, Taurinus examined the model of geometry on a "sphere" of imaginary radius, which he called "logarithmic-spherical" (now called hyperbolic geometry). He published his "theory of parallel lines" in 1825 and "Geometriae prima elementa" in 1826. For instance, in his "Geometriae prima elementa" on p. 66, Taurinus defined the hyperbolic law of cosines

cos(a√−1) = cos(b√−1) cos(c√−1) + sin(b√−1) sin(c√−1) cos A.

When solved for cos A and expressed using hyperbolic functions, it has the form

cos A = (cosh b cosh c − cosh a) / (sinh b sinh c).

Taurinus described his logarithmic-spherical geometry as the "third system" besides Euclidean geometry and spherical geometry, and pointed out that infinitely many systems exist depending on an arbitrary constant. While he noticed that no contradictions can be found in his logarithmic-spherical geometry, he remained convinced of the special role of Euclidean geometry. According to Paul Stäckel and Friedrich Engel, as well as Zacharias, Taurinus must be given credit as a founder of non-Euclidean trigonometry (together with Gauss), but his contributions cannot be co
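The hyperbolic law of cosines can be checked numerically. The sketch below (with the arbitrary curvature constant set to 1, an assumption for the sketch; the function names are ours) computes the third side of a hyperbolic triangle from two sides and the included angle, then confirms the defining property of Taurinus's "logarithmic-spherical" geometry: the angle sum falls short of two right angles.

```python
import math

def hyp_side(b, c, A):
    """Side a opposite angle A in a hyperbolic triangle (curvature -1),
    from cosh a = cosh b cosh c - sinh b sinh c cos A."""
    return math.acosh(math.cosh(b) * math.cosh(c)
                      - math.sinh(b) * math.sinh(c) * math.cos(A))

def hyp_angle(a, b, c):
    """Angle opposite side a, by solving the same law for cos A."""
    cosA = (math.cosh(b) * math.cosh(c) - math.cosh(a)) / (
        math.sinh(b) * math.sinh(c))
    return math.acos(cosA)

b, c, A = 1.0, 1.5, math.pi / 3
a = hyp_side(b, c, A)       # third side from the law of cosines
B = hyp_angle(b, a, c)      # remaining angles by the solved form
C = hyp_angle(c, a, b)
angle_sum = A + B + C       # strictly less than pi in hyperbolic geometry
```

Round-tripping a side through both forms recovers the original angle, and the angle sum is below π, as Schweikart's "astral geometry" requires.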
https://en.wikipedia.org/wiki/Carothers%20equation
In step-growth polymerization, the Carothers equation (or Carothers' equation) gives the degree of polymerization, X̄n, for a given fractional monomer conversion, p. There are several versions of this equation, proposed by Wallace Carothers, who invented nylon in 1935.

Linear polymers: two monomers in equimolar quantities

The simplest case refers to the formation of a strictly linear polymer by the reaction (usually by condensation) of two monomers in equimolar quantities. An example is the synthesis of nylon-6,6, whose formula is [−NH−(CH₂)₆−NH−CO−(CH₂)₄−CO−]ₙ, from one mole of hexamethylenediamine, H₂N(CH₂)₆NH₂, and one mole of adipic acid, HOOC(CH₂)₄COOH. For this case

X̄n = 1 / (1 − p).

In this equation X̄n is the number-average value of the degree of polymerization, equal to the average number of monomer units in a polymer molecule; for the example of nylon-6,6, X̄n = 2n (n diamine units and n diacid units). p is the extent of reaction (or conversion to polymer), defined by

p = (N₀ − N) / N₀,

where N₀ is the number of molecules present initially as monomer and N is the number of molecules present after time t. The total N includes all degrees of polymerization: monomers, oligomers and polymers. This equation shows that a high monomer conversion is required to achieve a high degree of polymerization. For example, a monomer conversion, p, of 98% is required for X̄n = 50, and p = 99% is required for X̄n = 100.

Linear polymers: one monomer in excess

If one monomer is present in stoichiometric excess, then the equation becomes

X̄n = (1 + r) / (1 + r − 2rp),

where r is the stoichiometric ratio of reactants; the excess reactant is conventionally the denominator, so that r < 1. If neither monomer is in excess, then r = 1 and the equation reduces to the equimolar case above. The effect of the excess reactant is to reduce the degree of polymerization for a given value of p. In the limit of complete conversion of the limiting-reagent monomer, p → 1 and

X̄n = (1 + r) / (1 − r).

Thus for a 1% excess of one monomer, r = 0.99 and the limiting degree of polymerization is 199, compared to infinity for the equimolar case. An excess of one reactant can be used to c
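The worked figures in the text are easy to verify numerically. A minimal sketch (the function name is ours):

```python
def carothers(p, r=1.0):
    """Number-average degree of polymerization X_n for step-growth
    polymerization at extent of reaction p, with stoichiometric ratio
    r <= 1. Setting r = 1 recovers the equimolar form 1 / (1 - p)."""
    return (1 + r) / (1 + r - 2 * r * p)

# The figures quoted in the text:
x50 = carothers(0.98)            # 98% conversion, equimolar
x100 = carothers(0.99)           # 99% conversion, equimolar
x_limit = carothers(1.0, r=0.99) # 1% excess, complete conversion
```

At r = 1 the general form reduces to 1/(1 − p), and at p → 1 with r = 0.99 it caps out at (1 + r)/(1 − r) = 199, matching the text.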
https://en.wikipedia.org/wiki/High-level%20design
High-level design (HLD) explains the architecture that will be used to develop a system. The architecture diagram provides an overview of the entire system, identifying the main components to be developed for the product and their interfaces. The HLD can use non-technical to mildly technical terms that should be understandable to the administrators of the system. In contrast, low-level design further exposes the detailed logical design of each of these elements for use by engineers and programmers. HLD documentation should cover the planned implementation of both software and hardware.

Purpose

Preliminary design: In the preliminary stages of system development, the need is to size the project and to identify those parts which might be risky or time-consuming.
Design overview: As the project proceeds, the need is to provide an overview of how the various sub-systems and components of the system fit together.

In both cases, the high-level design should be a complete view of the entire system, breaking it down into smaller parts that are more easily understood. To minimize the maintenance overhead as construction proceeds and the lower-level design is done, it is best to elaborate the high-level design only to the degree needed to satisfy these needs.

High-level design document

A high-level design document (HLDD) adds the necessary details to the current project description to represent a suitable model for building. This document includes a high-level architecture diagram depicting the structure of the system, such as the hardware, database architecture, application architecture (layers), application flow (navigation), security architecture and technology architecture.

Design overview

A high-level design provides an overview of a system, product, service, or process. Such an overview helps supporting components be compatible with others. The highest-level design should briefly describe all platforms, systems, products, services, and processes
https://en.wikipedia.org/wiki/Botanical%20Research%20Institute%20of%20Texas
The Botanical Research Institute of Texas (BRIT) is a botanical research institute located in Fort Worth, Texas, United States. It was established in 1987 to house the herbarium and botanical library collections of Lloyd H. Shinners from Southern Methodist University but has subsequently expanded substantially. BRIT focuses on plant taxonomy, conservation and knowledge sharing for both scientists and the general public.

History

The Botanical Research Institute of Texas was founded in 1987 around the herbarium and library from Southern Methodist University that had been substantially expanded by their final curator, Lloyd Herbert Shinners. It was located in a re-purposed warehouse in the main business and commercial area of Fort Worth. In spring 2011, BRIT moved into new buildings adjacent to the Fort Worth Botanic Garden, which dates from 1934. The buildings were designed by Hugh Hardy of H3 Hardy Collaboration Architecture and have a LEED-NC platinum rating from the U.S. Green Building Council for efficiency and sustainable design. The Institute is a 501(c)(3) private, non-profit organization governed by a board of trustees.

The institute now consists of a two-story Archives Block that houses the herbarium in a climate-controlled space, with the remaining space used for research and a library. Part of the roof is covered with solar panels. A second building, the Think Block, is used for education programs, exhibits and administrative offices. This building has natural lighting through floor-to-ceiling glass on the north façade. The building design allows for future expansion. A green roof of native Texan prairie plants was installed between 2007 and 2012.

BRIT Collections and Programs

Although BRIT covers the local Texan flora, it also includes plants world-wide. It contains a world-class collection of preserved plant specimens and books and also publishes about plants. Its staff provide outreach activities as well as research and conservation about plants.

Herbarium

The Phile
https://en.wikipedia.org/wiki/The%20Apprentice%20%28Libby%20novel%29
The Apprentice is a novel by Lewis Libby, former Chief of Staff to United States Vice President Dick Cheney, first published in hardback in 1996, reprinted in trade paperback in 2002, and reissued in mass market paperback in 2005 after Libby's indictment in the CIA leak grand jury investigation. It is set in northern Japan in winter 1903, and centers on a group of travelers stranded at a remote inn due to a smallpox epidemic. It has been described as "a thriller ... that includes references to bestiality, pedophilia and rape." It is the first and only novel that Libby has written.

Publication history

After being published in hardback by Graywolf Press (St. Paul, Minnesota) in August 1996 (now out of print), it was published as a trade paperback by St. Martin's Thomas Dunne Books in February 2002, and then reissued as a mass market paperback reprint of 25,000 copies by St. Martin's Griffin imprint in December 2005, after Libby's indictment that October as a result of the CIA leak grand jury investigation.

First edition publicity

In 2002, during an interview on Larry King Live promoting the novel's first publication in paperback, King asked Libby: "Are you a novelist working part-time for the vice president?" Libby told King, "Well, I've never quite figured that out. ... I'm a great fan of the Vice President. I think he's one of the smartest, most honorable people I've ever met. So, I'd like to consider myself fully on his team, but there's always a novel kicking around in the back somewhere." After hearing a brief plot summary, King wondered why Libby had set the novel in Japan, and Libby responded:

At that time, Libby also appeared on The Diane Rehm Show on National Public Radio to talk about the novel. Libby said that he had chosen to set the novel in Japan in 1903 because it was a pivotal time in the country's history that had intrigued him.

Plot summary

According to the description of the book by St. Martin's Press:

Reprint publicity

Following his indictment on
https://en.wikipedia.org/wiki/Fishkeeping
Fishkeeping is a popular hobby, practiced by aquarists, concerned with keeping fish in a home aquarium or garden pond. There is also a piscicultural fishkeeping industry, serving as a branch of agriculture.

Origins of fishkeeping

Fish have been raised as food in pools and ponds for thousands of years. Brightly colored or tame specimens of fish in these pools have sometimes been valued as pets rather than food. Many cultures, ancient and modern, have kept fish for both functional and decorative purposes. Ancient Sumerians kept wild-caught fish in ponds before preparing them for meals. Depictions of the sacred fish of Oxyrhynchus kept in captivity in rectangular temple pools have been found in ancient Egyptian art. Similarly, Asia has experienced a long history of stocking rice paddies with freshwater fish suitable for eating, including various types of catfish and cyprinids. Selective breeding of carp into today's popular and completely domesticated koi and fancy goldfish began over 2,000 years ago in Japan and China, respectively. The Chinese brought goldfish indoors during the Song Dynasty to enjoy them in large ceramic vessels. In medieval Europe, carp pools were a standard feature of estates and monasteries, providing an alternative to meat on feast days when meat could not be eaten for religious reasons.

Marine fish have been similarly valued for centuries. Wealthy Romans kept lampreys and other fish in salt water pools. Tertullian reports that Asinius Celer paid 8,000 sesterces for a particularly fine mullet. Cicero reports that the advocate Quintus Hortensius wept when a favored specimen died. Rather cynically, Cicero referred to these ancient fishkeepers as the Piscinarii, the "fish-pond owners" or "fish breeders", for example when saying that "the rich (I mean your friends the fish-breeders) did not disguise their jealousy of me". The first person to breed a tropical fish in Europe was Pierre Carbonnier, who founded one of the oldest public aquaria in Par
https://en.wikipedia.org/wiki/Durrell%20Institute%20of%20Conservation%20and%20Ecology
The Durrell Institute of Conservation and Ecology (DICE) is a subdivision and research centre of the School of Anthropology and Conservation at the University of Kent, founded in 1989 and named in honour of the British naturalist Gerald Durrell. It was the first institute in the United Kingdom to award undergraduate and postgraduate degrees and diplomas in the fields of conservation biology, ecotourism, and biodiversity management. It comprises 22 academic staff (six Professors, seven Readers and nine Lecturers and Senior Lecturers), as well as an advisory board of 14 conservationists from government, business and the NGO sector.

History

DICE's graduate degree programme began in 1991 with a class of seven international students. Since then it has trained over 1,200 people from 101 countries, including 322 people from lower- and middle-income countries in Africa, Asia, Oceania and South America. The founder of DICE is Professor Ian Swingland, who retired from the University of Kent in 1999; the first Director was Dr. Mike Walkey, who retired in 2002.

Awards

In 2019 DICE was awarded a Queen's Anniversary Prize for "pioneering education, capacity building and research in global nature conservation to protect species and ecosystems and benefit people".

Alumni

Notable alumni include:

Bahar Dutt, Indian television journalist and environmental editor
Sanjay Gubbi, Indian conservation biologist
Rachel Ikemeh, Nigerian conservationist
Winnie Kiiru, Kenyan biologist and elephant conservationist
Patricia Medici, Brazilian conservation biologist
Jeanneney Rabearivony, Malagasy ecologist and herpetologist
Rajeev Raghavan, Indian conservation biologist
Alexandra Zimmermann, wildlife conservationist
https://en.wikipedia.org/wiki/Massospondylus
Massospondylus (from Greek massōn, "longer", and spondylos, "vertebra") is a genus of sauropodomorph dinosaur from the Early Jurassic (Hettangian to Pliensbachian ages, ca. 200–183 million years ago). It was described by Sir Richard Owen in 1854 from remains discovered in South Africa, and is thus one of the first dinosaurs to have been named. Fossils have since been found at other locations in South Africa, Lesotho, and Zimbabwe. Material from Arizona's Kayenta Formation, India, and Argentina has been assigned to the genus at various times, but the Arizonan and Argentinian material is now assigned to other genera. The type species is M. carinatus; seven other species have been named during the past 150 years, but only M. kaalae is still considered valid.

Early sauropodomorph systematics have undergone numerous revisions during the last several years, and many scientists disagree about where exactly Massospondylus lies on the dinosaur evolutionary tree. The family name Massospondylidae was once coined for the genus, but because knowledge of early sauropodomorph relationships is in a state of flux, it is unclear which other dinosaurs—if any—belong in a natural grouping of massospondylids; several 2007 papers support the family's validity.

Although Massospondylus was long depicted as quadrupedal, a 2007 study found it to be bipedal. It was probably a plant eater (herbivore), although it is speculated that the early sauropodomorphs may have been omnivorous. The genus was long, and had a long neck and tail and a small head and slender body. On each of its forefeet, it bore a sharp thumb claw that was used in defense or feeding. Recent studies indicate that Massospondylus grew steadily throughout its lifespan, possessed air sacs similar to those of birds, and may have cared for its young.

History of discovery

The first fossils of Massospondylus were described by paleontologist Sir Richard Owen in 1854. Originally, Owen did not recognize the finds as those of a dinos
https://en.wikipedia.org/wiki/Peroxisomal%20targeting%20signal
In biochemical protein targeting, a peroxisomal targeting signal (PTS) is a region of a peroxisomal protein that receptors recognize and bind to. It is responsible for specifying that proteins containing this motif are localised to the peroxisome.

Overview

All peroxisomal proteins are synthesized in the cytoplasm and must be directed to the peroxisome. The first step in this process is the binding of the protein to a receptor. The receptor then directs the complex to the peroxisome. Receptors recognize and bind to a region of the peroxisomal protein called a peroxisomal targeting signal, or PTS.

Peroxisomes consist of a matrix surrounded by a specific membrane. Most peroxisomal matrix proteins contain a short sequence, usually three amino acids at the extreme carboxy tail of the protein, that serves as the PTS. The prototypic sequence (many variations exist) is serine-lysine-leucine (-SKL in the one-letter amino acid code). This motif, and its variations, is known as the PTS1, and the receptor is termed the PTS1 receptor. The PTS1 receptor was found to be encoded by the PEX5 gene. PEX5 imports folded proteins into the peroxisome, shuttling between the peroxisome and cytosol. PEX5 interacts with a large number of other proteins, including Pex8p, 10p, 12p, 13p and 14p.

A few peroxisomal matrix proteins have a different, and less conserved, sequence at their amino termini. This PTS2 signal is recognized by the PTS2 receptor, encoded by the PEX7 gene. "PEX" refers to a group of genes that were identified as being important for peroxisomal synthesis. The numerical designations, such as PEX5, generally refer to the order in which the genes were first discovered.

A distinct motif is used for proteins destined for the peroxisomal membrane, called the "mPTS" motif, which is more poorly defined and may consist of discontinuous subdomains. One of these usually is a cluster of basic amino acids (arginines and lysines) within a loop of protein (i.e., between membrane spans)
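A minimal sketch of PTS1 detection: checking whether a protein sequence ends in a PTS1-like tripeptide. The prototypic signal is -SKL; the broader consensus used here, [SAC][KRH][LM], is a common simplification adopted for this sketch, not an authoritative definition, and real predictors weigh much more sequence context.

```python
import re

# Simplified PTS1 consensus (an assumption for this sketch): a small residue,
# then a basic residue, then a hydrophobic residue, at the extreme C-terminus.
PTS1_PATTERN = re.compile(r"[SAC][KRH][LM]$")

def has_pts1(sequence):
    """Return True if the sequence ends in a PTS1-like tripeptide.

    The $ anchor matters: the signal must sit at the extreme carboxy
    tail, so an internal SKL does not count."""
    return bool(PTS1_PATTERN.search(sequence))
```

For example, a hypothetical sequence ending in -SKL matches, while the same tripeptide buried one residue from the end does not.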
https://en.wikipedia.org/wiki/P%E2%80%93n%20junction%20isolation
p–n junction isolation is a method used to electrically isolate electronic components, such as transistors, on an integrated circuit (IC) by surrounding the components with reverse-biased p–n junctions.

Introduction

By surrounding a transistor, resistor, capacitor or other component on an IC with semiconductor material doped with a species opposite to that of the substrate dopant, and connecting this surrounding material to a voltage which reverse-biases the p–n junction that forms, it is possible to create an electrically isolated "well" around the component.

Operation

Assume that the semiconductor wafer is p-type material, and that a ring of n-type material is placed around a transistor and beneath it. If the p-type material within the n-type ring is now connected to the negative terminal of the power supply and the n-type ring is connected to the positive terminal, the 'holes' in the p-type region are pulled away from the p–n junction, causing the width of the nonconducting depletion region to increase. Similarly, because the n-type region is connected to the positive terminal, the electrons are also pulled away from the junction. This effectively increases the potential barrier and greatly increases the electrical resistance against the flow of charge carriers, so there will be no (or minimal) electric current across the junction.

At the middle of the junction of the p–n material, a depletion region is created to stand off the reverse voltage. The width of the depletion region grows with higher voltage, and the electric field grows as the reverse voltage increases. When the electric field increases beyond a critical level, the junction breaks down and current begins to flow by avalanche breakdown. Therefore, care must be taken that circuit voltages do not exceed the breakdown voltage, or electrical isolation ceases.

History

In an article entitled "Microelectronics", published in Scientifi
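The growth of the depletion width with reverse voltage can be illustrated with the standard one-sided abrupt-junction estimate, W = sqrt(2·ε·V / (q·N)). The formula and the silicon constants below are textbook values brought in for this sketch, not taken from the article, and the doping level is a hypothetical example.

```python
import math

Q = 1.602e-19              # elementary charge, C
EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon, F/m

def depletion_width(total_voltage, doping_per_m3):
    """One-sided abrupt-junction depletion width in metres.

    total_voltage is the built-in plus applied reverse voltage;
    doping_per_m3 is the doping of the lightly doped side."""
    return math.sqrt(2 * EPS_SI * total_voltage / (Q * doping_per_m3))

w_1v = depletion_width(1.0, 1e21)    # ~1 V across a 1e15 cm^-3 side
w_25v = depletion_width(25.0, 1e21)  # deeper reverse bias: wider region
```

Because W scales as the square root of the voltage, raising the reverse bias from 1 V to 25 V widens the isolating depletion region fivefold, which is why deeper reverse bias strengthens the isolation until avalanche breakdown intervenes.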
https://en.wikipedia.org/wiki/Multileaf%20collimator
A multileaf collimator (MLC) is a collimator or beam-limiting device made of individual "leaves" of a high-atomic-number material, usually tungsten, that can move independently in and out of the path of a radiotherapy beam in order to shape it and vary its intensity. MLCs are used in external beam radiotherapy to provide conformal shaping of beams; specifically, conformal radiotherapy and intensity-modulated radiation therapy (IMRT) can be delivered using MLCs. The MLC has improved rapidly since its inception and the first use of leaves to shape structures in 1965 to its modern-day operation and use. MLCs are now widely used and have become an integral part of any radiotherapy department.

MLCs were primarily used for conformal radiotherapy, allowing the cost-effective implementation of conformal treatment with significant time savings, and have also been adapted for use in IMRT treatments. For conformal radiotherapy the MLC allows conformal shaping of the beam to match the borders of the target tumour. For intensity-modulated treatments the leaves of an MLC can be moved across the field to create IMRT distributions (MLCs really provide fluence modulation rather than intensity modulation).

The MLC is an important tool for radiation therapy dose delivery. It was originally used as a surrogate for alloy-block field shaping and is now widely used for IMRT. As with any tool used in radiotherapy, the MLC must undergo commissioning and quality assurance, and additional commissioning measurements are completed to model an MLC for treatment planning. Various MLCs are provided by different vendors, each with unique design features as determined by its design specifications, and these differences are quite significant.
https://en.wikipedia.org/wiki/Stuart%20Newman
Stuart Alan Newman (born April 4, 1945 in New York City) is a professor of cell biology and anatomy at New York Medical College in Valhalla, NY, United States. His research centers on three program areas: cellular and molecular mechanisms of vertebrate limb development, physical mechanisms of morphogenesis, and mechanisms of morphological evolution. He also writes about social and cultural aspects of biological research and technology.

Career

Stuart Newman graduated from Jamaica High School in Queens, New York. He received an A.B. from Columbia College of Columbia University in 1965, and a Ph.D. in chemical physics from the University of Chicago in 1970, where he worked with the theoretical chemist Stuart A. Rice. He was a postdoctoral fellow in the Department of Theoretical Biology, University of Chicago, and the School of Biological Sciences, University of Sussex, UK; before joining New York Medical College he was an instructor in anatomy at the University of Pennsylvania and an assistant professor of biological sciences at the State University of New York at Albany. He has been a visiting professor at the Pasteur Institute, Paris; the Commissariat à l'Energie Atomique-Saclay; the Indian Institute of Science, Bangalore; and the University of Tokyo, Komaba; and was a Fogarty Senior International Fellow at Monash University, Australia. He is a member of the External Faculty of the Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria, and in 2015 was appointed editor-in-chief of the institute's journal Biological Theory. He is a director of the Indigenous Peoples Council on Biocolonialism, Nixon, NV, and was a founding member of the Council for Responsible Genetics, Cambridge, MA, and of the editorial board of the Journal of Biosciences (Bangalore).

Newman's work in developmental biology includes a proposed mechanism for patterning of the vertebrate limb skeleton based on the self-organization of embryonic tissues. He has also char
https://en.wikipedia.org/wiki/Origination%20of%20Organismal%20Form
Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology is an anthology published in 2003, edited by Gerd B. Müller and Stuart A. Newman. The book is the outcome of the 4th Altenberg Workshop in Theoretical Biology on "Origins of Organismal Form: Beyond the Gene Paradigm", hosted in 1999 at the Konrad Lorenz Institute for Evolution and Cognition Research. It has been cited over 200 times and has had a major influence on extended evolutionary synthesis research.

Description of the book

The book explores the multiple factors that may have been responsible for the origination of biological form in multicellular life. These biological forms include limbs, segmented structures, and different body symmetries. It explores why the basic body plans of nearly all multicellular life arose in the relatively short time span of the Cambrian Explosion. The authors focus on physical factors (structuralism) other than changes in an organism's genome that may have caused multicellular life to form new structures. These physical factors include differential adhesion of cells and feedback oscillations between cells. The book also presents recent experimental results that examine how the same embryonic tissues or tumor cells can be coaxed into forming dramatically different structures under different environmental conditions. One of the goals of the book is to stimulate research that may lead to a more comprehensive theory of evolution. It is frequently cited as foundational to the development of the extended evolutionary synthesis.

List of contributions

Origination of Organismal Form: The Forgotten Cause in Evolutionary Theory, Gerd B. Müller and Stuart A. Newman
The Cambrian "Explosion" of Metazoans, Simon Conway Morris
Convergence and Homoplasy in the Evolution of Organismal Form, Pat Willmer
Homology: The Evolution of Morphological Organization, Gerd B. Müller
Only Details Determine, Roy J. Britten
The Reactive Genome, Scott F. Gilbert
Tis
https://en.wikipedia.org/wiki/Amazon%20Mechanical%20Turk
Amazon Mechanical Turk (MTurk) is a crowdsourcing website with which businesses can hire remotely located "crowdworkers" to perform discrete on-demand tasks that computers are currently unable to do as economically. It is operated under Amazon Web Services and owned by Amazon. Employers (known as requesters) post jobs known as Human Intelligence Tasks (HITs), such as identifying specific content in an image or video, writing product descriptions, or answering survey questions. Workers, colloquially known as Turkers or crowdworkers, browse among existing jobs and complete them in exchange for a fee set by the employer. To place jobs, requesters use an open application programming interface (API) or the more limited MTurk Requester site. Requesters could register from 49 approved countries.

History

The service was conceived by Venky Harinarayan in a U.S. patent disclosure in 2001. Amazon coined the term "artificial artificial intelligence" for processes that outsource some parts of a computer program to humans, for those tasks carried out much faster by humans than computers. Jeff Bezos is claimed to have been responsible for proposing the development of Amazon's Mechanical Turk to realize this process.

The name Mechanical Turk was inspired by "The Turk", an 18th-century chess-playing automaton made by Wolfgang von Kempelen that toured Europe and beat both Napoleon Bonaparte and Benjamin Franklin. It was later revealed that this "machine" was not an automaton but a human chess master hidden in the cabinet beneath the board, controlling the movements of a humanoid dummy. Analogously, the Mechanical Turk online service uses remote human labor hidden behind a computer interface to help employers perform tasks that are not possible using a true machine.

MTurk launched publicly on November 2, 2005. Its user base grew quickly. In early- to mid-November 2005, there were tens of thousands of jobs, all uploaded to the system by Amazon itself for some of its internal tas
https://en.wikipedia.org/wiki/Macroglobulinemia
Macroglobulinemia is the presence of increased levels of macroglobulins in the circulating blood. It is a plasma cell dyscrasia, resembling leukemia, with cells of lymphocytic, plasmacytic, or intermediate morphology that secrete a monoclonal immunoglobulin M component. There is diffuse infiltration by the malignant cells of the bone marrow and also, in many cases, of the spleen, liver, or lymph nodes. The circulating macroglobulin can produce symptoms of hyperviscosity syndrome: weakness, fatigue, bleeding disorders, and visual disturbances. Peak incidence of macroglobulinemia is in the sixth and seventh decades of life. (Dorland, 28th ed)

See also

Waldenström macroglobulinemia
Hematopoietic ulcer
https://en.wikipedia.org/wiki/VMOS
A VMOS (vertical metal-oxide-semiconductor, or V-groove MOS) transistor is a type of metal–oxide–semiconductor field-effect transistor (MOSFET). "VMOS" also describes the V-groove shape cut vertically into the substrate material. The "V" shape of the MOSFET's gate allows the device to deliver a higher amount of current from the source to the drain. The shape of the depletion region creates a wider channel, allowing more current to flow through it.

During operation in blocking mode, the highest electric field occurs at the N+/p+ junction. The sharp corner at the bottom of the groove enhances the electric field at the edge of the channel in the depletion region, thus reducing the breakdown voltage of the device. This electric field launches electrons into the gate oxide, and the trapped electrons shift the threshold voltage of the MOSFET. For this reason, the V-groove architecture is no longer used in commercial devices. The device was used as a power device until more suitable geometries, like the UMOS (or trench-gate MOS), were introduced to lower the maximum electric field at the top of the V shape, leading to higher maximum voltages than in the case of the VMOS.

History

The first MOSFET (without a V-groove) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The V-groove construction was pioneered by Jun-ichi Nishizawa in 1969, initially for the static induction transistor (SIT), a type of junction field-effect transistor (JFET). The VMOS was invented by Hitachi in 1969, when they introduced the first vertical power MOSFET in Japan. T. J. Rodgers, while a student at Stanford University, filed a US patent for a VMOS in 1973. Siliconix commercially introduced a VMOS in 1975. The VMOS later developed into what became known as the vertical DMOS (VDMOS).

In 1978, American Microsystems (AMI) released the S2811. It was the first integrated circuit chip specifically designed as a d
https://en.wikipedia.org/wiki/Pasquale%20del%20Pezzo
Pasquale del Pezzo, Duke of Caianello and Marquis of Campodisola (2 May 1859 – 20 June 1936), was an Italian mathematician. He was born in Berlin (where his father was a representative of the Neapolitan king) on 2 May 1859. He died in Naples on 20 June 1936. His wife was the Swedish writer Anne Charlotte Leffler, sister of the great mathematician Gösta Mittag-Leffler (1846–1927). At the University of Naples, he received a law degree in 1880 and a mathematics degree in 1882. He became a pre-eminent professor at that university, teaching projective geometry, and remained there in roles including rector and faculty president. He was mayor of Naples from 1914 to 1917. From 1919 he was a Senator of the Kingdom of Italy until his death. He is remembered particularly for first describing what became known as a del Pezzo surface.
https://en.wikipedia.org/wiki/Software%20assurance
Software assurance (SwA) is a critical process in software development that ensures the reliability, safety, and security of software products. It involves a variety of activities, including requirements analysis, design reviews, code inspections, testing, and formal verification. One crucial component of software assurance is secure coding practices, which follow industry-accepted standards and best practices, such as those outlined by the Software Engineering Institute (SEI) in their CERT Secure Coding Standards (SCS). Another vital aspect of software assurance is testing, which should be conducted at various stages of the software development process and can include functional testing, performance testing, and security testing. Testing helps to identify any defects or vulnerabilities in software products before they are released. Furthermore, software assurance involves organizational and management practices like risk management and quality management to ensure that software products meet the needs and expectations of stakeholders. Software assurance aims to ensure that software is free from vulnerabilities and functions as intended, conforming to all requirements and standards governing the software development process.[3] Additionally, software assurance aims to produce software-intensive systems that are more secure. To achieve this, a preventive dynamic and static analysis of potential vulnerabilities is required, and a holistic, system-level understanding is recommended. Architectural risk analysis plays an essential role in any software security program, as design flaws account for 50% of security problems, and they cannot be found by staring at code alone. By following industry-accepted standards and best practices, incorporating testing and management practices, and conducting architectural risk analysis, software assurance can minimize the risk of system failures and security breaches, making it a critical aspect of software development. Initiatives
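As a small illustration of the testing practices described above, the sketch below pairs a functional test (valid input maps to the expected value) with security-minded negative tests (malformed or out-of-range input is rejected). The function and its checks are hypothetical examples for illustration, not taken from any particular standard.

```python
def parse_port(value):
    """Parse a TCP port number from untrusted text (hypothetical example)."""
    port = int(value)           # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return port

# functional test: valid input maps to the expected value
assert parse_port("8080") == 8080
# negative tests: bad input must be rejected, never silently accepted
for bad in ["", "abc", "-1", "70000"]:
    try:
        parse_port(bad)
        raise AssertionError("accepted bad input: " + bad)
    except ValueError:
        pass
print("all checks passed")
```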
https://en.wikipedia.org/wiki/Linear%20network%20coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as its resilience to attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network. It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard and even undecidable. Encoding and decoding In a linear network coding problem, a group of nodes are involved in moving the data from source nodes to sink nodes. Each node generates new packets which are linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically of size 2^8. More formally, each node with indegree S generates a message X from the linear combination of received messages M_1, ..., M_S by the formula X = g_1·M_1 + g_2·M_2 + ... + g_S·M_S, where the values g_i are coefficients selected from the finite field. Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value X along with the coefficients g_1, ..., g_S used to produce it. Sink nodes receive these network coded messages, and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix. In reduced row echelon form, decoded packets correspond to the rows whose coefficient part is a unit vector; the payload of such a row is one of the original messages. Background A network is represented by a directed graph G = (V, E). V is the set of nodes or vertices, E is the set of directed links (or edges), and
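A minimal sketch of the encode/decode cycle described above, over GF(2) so that field arithmetic reduces to XOR (real deployments typically use a larger field such as GF(2^8)). The packet contents and coefficient vectors are illustrative assumptions.

```python
def xor_rows(a, b):
    # addition in GF(2) is bitwise XOR
    return [x ^ y for x, y in zip(a, b)]

def encode(packets, coeff_vectors):
    """Each coded packet is a GF(2) linear combination of the originals,
    shipped together with its coefficient vector as a header."""
    coded = []
    for coeffs in coeff_vectors:
        payload = [0] * len(packets[0])
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor_rows(payload, p)
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Recover the k originals by Gaussian elimination over GF(2)
    on the [coefficients | payload] matrix collected at a sink."""
    rows = [list(c) + list(p) for c, p in coded]
    for col in range(k):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = xor_rows(rows[r], rows[col])
    # reduced row echelon form: row i is [unit vector | original message i]
    return [rows[i][k:] for i in range(k)]

# three original packets of four bits each (illustrative data)
originals = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
# coefficient vectors chosen to form an invertible matrix over GF(2)
received = encode(originals, [[1, 1, 0], [0, 1, 1], [1, 1, 1]])
print(decode(received, k=3) == originals)  # True
```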
https://en.wikipedia.org/wiki/Semi-differentiability
In calculus, a branch of mathematics, the notions of one-sided differentiability and semi-differentiability of a real-valued function f of a real variable are weaker than differentiability. Specifically, the function f is said to be right differentiable at a point a if, roughly speaking, a derivative can be defined as the function's argument x moves to a from the right, and left differentiable at a if the derivative can be defined as x moves to a from the left. One-dimensional case In mathematics, a left derivative and a right derivative are derivatives (rates of change of a function) defined for movement in one direction only (left or right; that is, to lower or higher values) by the argument of a function. Definitions Let f denote a real-valued function defined on a subset I of the real numbers. If a is a limit point of I ∩ [a, ∞) and the one-sided limit ∂+f(a) = lim_{x→a+} (f(x) − f(a))/(x − a) exists as a real number, then f is called right differentiable at a and the limit ∂+f(a) is called the right derivative of f at a. If a is a limit point of I ∩ (−∞, a] and the one-sided limit ∂–f(a) = lim_{x→a−} (f(x) − f(a))/(x − a) exists as a real number, then f is called left differentiable at a and the limit ∂–f(a) is called the left derivative of f at a. If a is a limit point of I ∩ [a, ∞) and of I ∩ (−∞, a], and if f is left and right differentiable at a, then f is called semi-differentiable at a. If the left and right derivatives are equal, then they have the same value as the usual ("bidirectional") derivative. One can also define a symmetric derivative, which equals the arithmetic mean of the left and right derivatives (when they both exist), so the symmetric derivative may exist when the usual derivative does not. Remarks and examples A function is differentiable at an interior point a of its domain if and only if it is semi-differentiable at a and the left derivative is equal to the right derivative. An example of a semi-differentiable function, which is not differentiable, is the absolute value function f(x) = |x|, at a = 0. We find easily that ∂–f(0) = −1 and ∂+f(0) = 1. If a function is semi-differentiable at a
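The one-sided limits defining the left and right derivatives can be approximated with one-sided difference quotients. For f(x) = |x| at a = 0 the quotients are exactly −1 and +1 for any step size, matching the example above; the helper names are illustrative.

```python
def right_derivative(f, a, h=1e-6):
    # forward difference: x approaches a from the right
    return (f(a + h) - f(a)) / h

def left_derivative(f, a, h=1e-6):
    # backward difference: x approaches a from the left
    return (f(a) - f(a - h)) / h

f = abs  # f(x) = |x|, semi-differentiable but not differentiable at 0
print(left_derivative(f, 0.0))   # -1.0
print(right_derivative(f, 0.0))  # 1.0
```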
https://en.wikipedia.org/wiki/SpamBayes
SpamBayes is a Bayesian spam filter written in Python which uses techniques laid out by Paul Graham in his essay "A Plan for Spam". It has subsequently been improved by Gary Robinson and Tim Peters, among others. The most notable difference between a conventional Bayesian filter and the filter used by SpamBayes is that there are three classifications rather than two: spam, non-spam (called ham in SpamBayes), and unsure. The user trains a message as being either ham or spam; when filtering a message, the spam filters generate one score for ham and another for spam. If the spam score is high and the ham score is low, the message will be classified as spam. If the spam score is low and the ham score is high, the message will be classified as ham. If the scores are both high or both low, the message will be classified as unsure. This approach leads to a low number of false positives and false negatives, but it may result in a number of unsures which need a human decision. Web filtering Some work has gone into applying SpamBayes to filter internet content via a proxy web server.
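The three-way decision described above can be sketched as follows. The threshold values are illustrative assumptions, not SpamBayes' actual defaults, and the sketch omits how the two scores are derived from token probabilities.

```python
def classify(spam_score, ham_score, hi=0.9, lo=0.1):
    """Three-way classification: spam, ham, or unsure.
    Thresholds hi/lo are hypothetical values for illustration."""
    if spam_score > hi and ham_score < lo:
        return "spam"
    if ham_score > hi and spam_score < lo:
        return "ham"
    # both scores high, or both low: defer to a human decision
    return "unsure"

print(classify(0.95, 0.02))  # spam
print(classify(0.03, 0.97))  # ham
print(classify(0.95, 0.92))  # unsure (both high)
```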
https://en.wikipedia.org/wiki/Teenage%20Mutant%20Ninja%20Turtles%20%28NES%20video%20game%29
Teenage Mutant Ninja Turtles, known as in Japan and Teenage Mutant Hero Turtles in Europe, is a 1989 side-scrolling action-platform game for the Nintendo Entertainment System released by Konami. It was published under Konami's Ultra Games imprint in North America and the equivalent PALCOM brand in Europe and Australia. Alongside the arcade game (also developed by Konami), it was one of the first video games based on the 1987 Teenage Mutant Ninja Turtles animated series, being released after the show's second season. The game sold more than cartridges worldwide. Plot Shredder kidnaps April and gains the Life Transformer Gun, a weapon capable of returning Splinter to his human form. In order to save April, the turtles (Leo, Mikey, Donny and Raph) take to the streets of New York to confront the Foot Clan. While traversing the sewers, the turtles encounter Bebop, a mutated pig, and Rocksteady, a mutant rhino. Though the turtles defeat Bebop, Rocksteady escapes with April O’Neil. The turtles then chase Rocksteady to an abandoned warehouse, fight him, and rescue April. After disabling bombs in the Hudson River dam, Shredder captures Splinter, so the turtles give chase in the Party Wagon. In hot pursuit, the turtles scour the city and eventually find that Splinter is held captive by the robotic Mecaturtle on a skyscraper rooftop. After the turtles save Splinter, Shredder escapes in a helicopter. The turtles give chase, tracking him to JFK airport, where they encounter Big Mouser. After defeating Big Mouser, the turtles head to Shredder's secret Foot Clan base in the South Bronx via the Turtle Blimp. Once there, they locate and battle the Technodrome underground. The turtles descend into the Technodrome’s reactor and ultimately defeat Shredder. With the Life Transformer Gun, the turtles help Splinter return to his human form. With a tough mission accomplished, the turtles and April celebrate with a pizza. Gameplay Teenage Mutant Ninja Turtles is a s
https://en.wikipedia.org/wiki/Commonly%20used%20gamma-emitting%20isotopes
Radionuclides which emit gamma radiation are valuable in a range of different industrial, scientific and medical technologies. This article lists some common gamma-emitting radionuclides of technological importance, and their properties. Fission products Many artificial radionuclides of technological importance are produced as fission products within nuclear reactors. A fission product is a nucleus with approximately half the mass of a uranium or plutonium nucleus which is left over after such a nucleus has been "split" in a nuclear fission reaction. Caesium-137 is one such radionuclide. It has a half-life of 30 years, and decays by beta decay without gamma ray emission to a metastable state of barium-137 (barium-137m). Barium-137m has a half-life of 2.6 minutes and is responsible for all of the gamma ray emission in this decay sequence. The ground state of barium-137 is stable. The photon energy (energy of a single gamma ray) of the barium-137m emission is about 662 keV. These gamma rays can be used, for example, in radiotherapy such as for the treatment of cancer, in food irradiation, or in industrial gauges or sensors. Caesium-137 is not widely used for industrial radiography, as other nuclides, such as cobalt-60 or iridium-192, offer higher radiation output for a given volume. Iodine-131 is another important gamma-emitting radionuclide produced as a fission product. With a short half-life of 8 days, this radioisotope is not of practical use in radioactive sources in industrial radiography or sensing. However, since iodine is a component of biological molecules such as thyroid hormones, iodine-131 is of great importance in nuclear medicine, and in medical and biological research as a radioactive tracer. Lanthanum-140 is a decay product of barium-140, a common fission product. It is a potent gamma emitter. It was used in high quantities during the Manhattan Project for the RaLa Experiments. Activation products Some radionuclides, such as cobalt-60 and iridium-192, are made by the neutron irradiation
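The half-lives quoted above imply, via the standard exponential decay law N(t)/N0 = 2^(−t/T½), how much of a source remains after a given time. A minimal sketch using the caesium-137 half-life from the text:

```python
def remaining_fraction(t, half_life):
    """Standard decay law: fraction of nuclei left after time t,
    with t and half_life in the same units."""
    return 2 ** (-t / half_life)

# caesium-137, half-life about 30 years
print(remaining_fraction(30, 30))  # 0.5 after one half-life
print(remaining_fraction(90, 30))  # 0.125 after three half-lives
```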
https://en.wikipedia.org/wiki/Total%20refraction
Total refraction occurs when an incident wave on an interface between two media with opposite refractive index signs is completely transmitted. There is then no reflected wave. This can occur only when one of the two materials has a negative refractive index. Composite metamaterials with this unusual property were fabricated for the first time in 2002. This phenomenon is conditioned by the wave impedance matching between the two media.
https://en.wikipedia.org/wiki/Gibbs%20measure
In mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is a generalization of the canonical ensemble to infinite systems. The canonical ensemble gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as P(X = x) = exp(−βE(x)) / Z(β). Here, E is a function from the space of states to the real numbers; in physics applications, E(x) is interpreted as the energy of the configuration x. The parameter β is a free parameter; in physics, it is the inverse temperature. The normalizing constant Z(β) = Σ_x exp(−βE(x)) is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems. A measure is a Gibbs measure if the conditional probabilities it induces on each finite subsystem satisfy a consistency condition: if all degrees of freedom outside the finite subsystem are frozen, the canonical ensemble for the subsystem subject to these boundary conditions matches the probabilities in the Gibbs measure conditional on the frozen degrees of freedom. The Hammersley–Clifford theorem implies that any probability measure that satisfies a Markov property is a Gibbs measure for an appropriate choice of (locally defined) energy function. Therefore, the Gibbs measure applies to widespread
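For a finite system, the canonical-ensemble probabilities P(x) = exp(−βE(x))/Z(β) can be computed directly. A minimal sketch with illustrative energies:

```python
import math

def gibbs_probabilities(energies, beta):
    """Canonical-ensemble probabilities P(x) = exp(-beta*E(x)) / Z(beta)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)              # the partition function Z(beta)
    return [w / z for w in weights]

# toy three-state system; the energy values are illustrative
probs = gibbs_probabilities([0.0, 1.0, 2.0], beta=1.0)
print(round(sum(probs), 10))  # 1.0 — the measure normalizes
# lower-energy states are more probable at positive beta
print(probs[0] > probs[1] > probs[2])  # True
```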
https://en.wikipedia.org/wiki/RAM%20parity
RAM parity checking is the storing of a redundant parity bit representing the parity (odd or even) of a small amount of computer data (typically one byte) stored in random-access memory, and the subsequent comparison of the stored and the computed parity to detect whether a data error has occurred. The parity bit was originally stored in additional individual memory chips; with the introduction of plug-in DIMM, SIMM, etc. modules, they became available in non-parity and parity (with an extra bit per byte, storing 9 bits for every 8 bits of actual data) versions. History Early computers sometimes required the use of parity RAM, and parity-checking could not be disabled. A parity error typically caused the machine to halt, with loss of unsaved data; this is usually a better option than saving corrupt data. Logic parity RAM, also known as fake parity RAM, is non-parity RAM that can be used in computers that require parity RAM. Logic parity RAM recalculates an always-valid parity bit each time a byte is read from memory, instead of storing the parity bit when the memory is written to; the calculated parity bit, which will not reveal if the data has been corrupted (hence the name "fake parity"), is presented to the parity-checking logic. It is a means of using cheaper 8-bit RAM in a system designed to use only 9-bit parity RAM. Memory errors In the 1970s-80s, RAM reliability was often less-than-perfect; in particular, the 4116 DRAMs which were an industry standard from 1975 to 1983 had a considerable failure rate as they used triple voltages (-5, +5, and +12) which resulted in high operating temperatures. By the mid-1980s, these had given way to single voltage DRAM such as the 4164 and 41256 with the result of improved reliability. However, RAM did not achieve modern standards of reliability until the 1990s. Since then errors have become less visible as simple parity RAM has fallen out of use; either they are invisible as they are not detected, or they are corrected
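The per-byte scheme described above (one redundant bit, nine stored bits per eight data bits) can be sketched as follows; the function names are illustrative.

```python
def even_parity_bit(byte):
    """Parity bit chosen so the 9 stored bits have an even number of 1s."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    # recompute parity on read; a mismatch flags a memory error
    return even_parity_bit(byte) == stored_parity

b = 0b10110100            # data byte with four 1 bits
p = even_parity_bit(b)    # 0: the count is already even
flipped = b ^ 0b00000100  # a single-bit error
print(check(b, p), check(flipped, p))  # True False
```

Note that single-bit parity detects any odd number of flipped bits but cannot locate or correct the error, and misses two-bit errors entirely.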
https://en.wikipedia.org/wiki/Burr%E2%80%93Erd%C5%91s%20conjecture
In mathematics, the Burr–Erdős conjecture was a problem concerning the Ramsey number of sparse graphs. The conjecture is named after Stefan Burr and Paul Erdős, and is one of many conjectures named after Erdős; it states that the Ramsey number of graphs in any sparse family of graphs should grow linearly in the number of vertices of the graph. The conjecture was proven by Choongbum Lee. Thus it is now a theorem. Definitions If G is an undirected graph, then the degeneracy of G is the minimum number p such that every subgraph of G contains a vertex of degree p or smaller. A graph with degeneracy p is called p-degenerate. Equivalently, a p-degenerate graph is a graph that can be reduced to the empty graph by repeatedly removing a vertex of degree p or smaller. It follows from Ramsey's theorem that for any graph G there exists a least integer , the Ramsey number of G, such that any complete graph on at least vertices whose edges are coloured red or blue contains a monochromatic copy of G. For instance, the Ramsey number of a triangle is 6: no matter how the edges of a complete graph on six vertices are colored red or blue, there is always either a red triangle or a blue triangle. The conjecture In 1973, Stefan Burr and Paul Erdős made the following conjecture: For every integer p there exists a constant cp so that any p-degenerate graph G on n vertices has Ramsey number at most cp n. That is, if an n-vertex graph G is p-degenerate, then a monochromatic copy of G should exist in every two-edge-colored complete graph on cp n vertices. Known results Before the full conjecture was proved, it was first settled in some special cases. It was proven for bounded-degree graphs by ; their proof led to a very high value of cp, and improvements to this constant were made by and . More generally, the conjecture is known to be true for p-arrangeable graphs, which includes graphs with bounded maximum degree, planar graphs and graphs that do not contain a subdivision of Kp.
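The equivalent definition above — a p-degenerate graph can be reduced to the empty graph by repeatedly removing a vertex of degree p or smaller — translates directly into a greedy algorithm for computing degeneracy. A minimal sketch, with the graph as an adjacency dictionary and illustrative data:

```python
def degeneracy(adj):
    """Degeneracy = max, over the removal sequence, of the minimum
    degree at the moment each vertex is removed."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        d = max(d, len(adj[v]))
        for u in adj.pop(v):                     # remove v from the graph
            adj[u].discard(v)
    return d

# a 4-cycle: every subgraph has a vertex of degree <= 2
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(degeneracy(cycle))  # 2

# a path (a tree) is 1-degenerate
path = {0: [1], 1: [0, 2], 2: [1]}
print(degeneracy(path))  # 1
```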
https://en.wikipedia.org/wiki/Seshadri%20constant
In algebraic geometry, a Seshadri constant is an invariant of an ample line bundle L at a point P on an algebraic variety. It was introduced by Demailly to measure a certain rate of growth of the tensor powers of L, in terms of the jets of the sections of the Lk. The aim was the study of the Fujita conjecture. The name is in honour of the Indian mathematician C. S. Seshadri. It is known that Nagata's conjecture on algebraic curves is equivalent to the assertion that for more than nine general points, the Seshadri constants of the projective plane are maximal. There is a general conjecture for algebraic surfaces, the Nagata–Biran conjecture. Definition Let X be a smooth projective variety, L an ample line bundle on it, x a point of X, and C = { all irreducible curves passing through x }. Then ε(L, x) = inf over C in C of (L · C) / mult_x C. Here, L · C denotes the intersection number of L and C, and mult_x C measures how many times C passes through x. Definition: One says that ε(L, x) is the Seshadri constant of L at the point x, a real number. When X is an abelian variety, it can be shown that ε(L, x) is independent of the point chosen, and it is written simply ε(L).
https://en.wikipedia.org/wiki/Nagata%E2%80%93Biran%20conjecture
In mathematics, the Nagata–Biran conjecture, named after Masayoshi Nagata and Paul Biran, is a generalisation of Nagata's conjecture on curves to arbitrary polarised surfaces. Statement Let X be a smooth algebraic surface and L be an ample line bundle on X of degree d. The Nagata–Biran conjecture states that for sufficiently large r the Seshadri constant satisfies
https://en.wikipedia.org/wiki/Fujita%20conjecture
In mathematics, Fujita's conjecture is a problem in the theories of algebraic geometry and complex manifolds that remains unsolved. It is named after Takao Fujita, who formulated it in 1985. Statement In complex geometry, the conjecture states that for a positive holomorphic line bundle L on a compact complex manifold M, the line bundle KM ⊗ L⊗m (where KM is a canonical line bundle of M) is spanned by sections when m ≥ n + 1; very ample when m ≥ n + 2, where n is the complex dimension of M. Note that for large m the line bundle KM ⊗ L⊗m is very ample by the standard Serre vanishing theorem (and its complex analytic variant). The Fujita conjecture provides an explicit bound on m, which is optimal for projective spaces. Known cases For surfaces the Fujita conjecture follows from Reider's theorem. For three-dimensional algebraic varieties, Ein and Lazarsfeld in 1993 proved the first part of the Fujita conjecture, i.e. that m ≥ 4 implies global generation. See also Ohsawa–Takegoshi L2 extension theorem
https://en.wikipedia.org/wiki/Core%20sample
A core sample is a cylindrical section of (usually) a naturally-occurring substance. Most core samples are obtained by drilling with special drills into the substance, such as sediment or rock, with a hollow steel tube, called a core drill. The hole made for the core sample is called the "core hole". A variety of core samplers exist to sample different media under different conditions; there is continuing development in the technology. In the coring process, the sample is pushed more or less intact into the tube. Removed from the tube in the laboratory, it is inspected and analyzed by different techniques and equipment depending on the type of data desired. Core samples can be taken to test the properties of manmade materials, such as concrete, ceramics, some metals and alloys, especially the softer ones. Core samples can also be taken of living things, including human beings, especially of a person's bones for microscopic examination to help diagnose diseases. Methods The composition of the subject materials can vary from almost liquid to the strongest materials found in nature or technology, and the location of the subject materials can vary from on the laboratory bench to over 10  km from the surface of the Earth in a borehole. The range of equipment and techniques applied to the task is correspondingly great. Core samples are most often taken with their long axis oriented roughly parallel to the axis of a borehole, or parallel to the gravity field for the gravity-driven tools. However it is also possible to take core samples from the wall of an existing borehole. Taking samples from an exposure, be it an overhanging rock face or on a different planet, is almost trivial. (The Mars Exploration Rovers carry a Rock Abrasion Tool, which is logically equivalent to the "rotary sidewall core" tool described below.) Some common techniques include: gravity coring, in which the core sampler is dropped into the sample, usually the bed of a water body, but essentially
https://en.wikipedia.org/wiki/Frost%20flower
A frost flower or ice flower is formed when thin layers of ice are extruded from long-stemmed plants in autumn or early winter. The thin layers of ice are often formed into exquisite patterns, curling into "petals" which resemble flowers. Types Frost flower formations are also referred to as frost faces, ice castles, ice blossoms, or crystallofolia. Types of frost flowers include needle ice, frost pillars, or frost columns, extruded from pores in the soil, and ice ribbons, rabbit frost, or rabbit ice, extruded from linear fissures in plant stems. The term "ice flower" is also used as synonym for ice ribbons, but it may be used to describe the unrelated phenomenon of window frost as well. Formation The formation of frost flowers is dependent on a freezing weather condition occurring when the ground is not already frozen. The sap in the stem of the plants will expand (water expands when frozen), causing long, thin cracks to form along the length of the stem. Water is then drawn through these cracks via capillary action and freezes upon contact with the air. As more water is drawn through the cracks it pushes the thin ice layers further from the stem, causing a thin "petal" to form. The petals of frost flowers are very delicate and will break when touched. They usually melt or sublime when exposed to sunlight and are usually visible in the early morning or in shaded areas. Examples of plants that often form frost flowers are white crownbeard (Verbesina virginica), commonly called frostweed, yellow ironweed (Verbesina alternifolia), dittany (Cunila origanoides), and Helianthemum canadense. See also Hair ice Needle ice Frostweed
https://en.wikipedia.org/wiki/KRP%20%28biochemistry%29
KRP stands for kinesin related proteins. bimC is a subfamily of KRPs whose function is to separate the duplicated centrosomes during mitosis. Role in mitotic repair Kinesin-13 MCAK (Mitotic Centromere-Associated Kinesin) is a KRP that is involved in resolving errors during mitosis involving kinetochore-microtubules. This process is associated with Aurora B Protein Kinase. When Aurora B's function is disrupted, MCAK's ability to locate centromeres, which play a critical role in the separation of chromosomes during mitosis, is suppressed. There are other environments in which MCAK's function is impaired without any impact on its associated kinase. For example, alpha-tubulin detyrosination has been demonstrated to impair MCAK's mitotic repair capabilities, suggesting a potential cause of chromosomal instability.
https://en.wikipedia.org/wiki/Private%20annuity%20trust
Prior to 2006, a private annuity trust (PAT) was an arrangement to enable the value of highly appreciated assets, such as real estate, collectables or an investment portfolio, to be realized without directly selling them and incurring substantial taxes from their sale. A PAT was used to defer United States federal capital gains tax on the sale of an asset, to provide a stream of income, and in effect to remove the asset from the owner's estate, thus reducing or eliminating estate taxes. With these advantages, a PAT provided an alternative to other methods of deferring capital gains taxes, such as the charitable remainder trust (CRT), installment sale, or tax-deferred 1031 exchange. In October 2006 the Internal Revenue Service (IRS) proposed a rule providing that the PAT was no longer a valid capital gains tax deferral method. Those persons who used the PAT before the IRS ruling were grandfathered in and would continue to receive tax deferral benefits. Prior to October 2006, PATs were attractive to sellers of highly appreciated real estate. A PAT allowed the owner of investment property to defer up to 100% of the taxes without ever having to buy another property. This was important because good quality investment properties are difficult to locate. The PAT also allowed the seller of a highly appreciated primary residence to defer up to 100% of the taxes as well. This is important because all gains on primary residences over $250,000 for a single person, and $500,000 for a married couple, will be taxed if a PAT is not used. A properly structured PAT involves first transferring the asset to the PAT in return for a lifetime income stream in the form of an annuity. The transfer of the asset is not a taxable transaction. A PAT is not issued by a commercial insurance company. Any time after the asset is placed into the PAT, the asset can be sold without taxation to the trust. There is no tax on the sale to the PAT because the PAT has actual
https://en.wikipedia.org/wiki/Computation%20in%20the%20limit
In computability theory, a function is called limit computable if it is the limit of a uniformly computable sequence of functions. The terms computable in the limit, limit recursive and recursively approximable are also used. One can think of limit computable functions as those admitting an eventually correct computable guessing procedure at their true value. A set is limit computable just when its characteristic function is limit computable. If the sequence is uniformly computable relative to D, then the function is limit computable in D. Formal definition A total function g(i) is limit computable if there is a total computable function ĝ(i, t) such that g(i) = lim_{t→∞} ĝ(i, t). The total function g(i) is limit computable in D if there is a total function ĝ(i, t) computable in D also satisfying g(i) = lim_{t→∞} ĝ(i, t). A set of natural numbers is defined to be computable in the limit if and only if its characteristic function is computable in the limit. In contrast, the set is computable if and only if it is computable in the limit by a function ĝ(i, t) and there is a second computable function that takes input i and returns a value of t large enough that ĝ(i, t) has stabilized. Limit lemma The limit lemma states that a set of natural numbers is limit computable if and only if the set is computable from 0′ (the Turing jump of the empty set). The relativized limit lemma states that a set is limit computable in D if and only if it is computable from D′. Moreover, the limit lemma (and its relativization) hold uniformly. Thus one can go from an index for the function ĝ to an index for g relative to 0′. One can also go from an index for g relative to 0′ to an index for some ĝ that has limit g. Proof As 0′ is a computably enumerable set, it must be computable in the limit itself, as the computable function can be defined whose limit as t goes to infinity is the characteristic function of 0′. It therefore suffices to show that limit computability is preserved by Turing reduction, as this will show that all sets computable from 0′ are limit compu
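The "eventually correct guessing procedure" idea can be illustrated with a stage-based approximation. The sketch below uses a toy decidable set (perfect squares, "enumerated" one per stage) purely for illustration; the interesting cases of the limit lemma concern genuinely non-computable sets such as the halting problem, where no computable bound on the stabilization stage exists.

```python
def approx(n, s):
    """Stage-s guess at membership of n in a toy c.e. set:
    pretend only i*i for i <= s has been enumerated so far."""
    return 1 if any(i * i == n for i in range(s + 1)) else 0

def limit(n, stages=100):
    # guesses change at most once (0 -> 1), so they stabilize;
    # 100 stages is more than enough for n < 100
    return approx(n, stages)

print([limit(n) for n in range(10)])  # [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
```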
https://en.wikipedia.org/wiki/Scalar%20theories%20of%20gravitation
Scalar theories of gravitation are field theories of gravitation in which the gravitational field is described using a scalar field, which is required to satisfy some field equation. Note: This article focuses on relativistic classical field theories of gravitation. The best known relativistic classical field theory of gravitation, general relativity, is a tensor theory, in which the gravitational interaction is described using a tensor field. Newtonian gravity The prototypical scalar theory of gravitation is Newtonian gravitation. In this theory, the gravitational interaction is completely described by the potential Φ, which is required to satisfy the Poisson equation (with the mass density acting as the source of the field). To wit: ∇²Φ = 4πGρ, where G is the gravitational constant and ρ is the mass density. This field theory formulation leads directly to the familiar law of universal gravitation, F = Gm₁m₂/r². Nordström's theories of gravitation The first attempts to present a relativistic (classical) field theory of gravitation were also scalar theories. Gunnar Nordström created two such theories. Nordström's first idea (1912) was to simply replace the Laplacian in the field equation of Newtonian gravity with the d'Alembertian operator □. This gives the field equation □Φ = 4πGρ. However, several theoretical difficulties with this theory quickly arose, and Nordström dropped it. A year later, Nordström tried again, presenting a field equation in which the source is the trace T of the stress–energy tensor. Solutions of Nordström's second theory are conformally flat Lorentzian spacetimes. That is, the metric tensor can be written as gμν = φ²ημν, where ημν is the Minkowski metric, and φ is a scalar which is a function of position. This suggestion signifies that the inertial mass should depend on the scalar field. Nordström's second theory satisfies the weak equivalence principle. However: The theory fails to predict any deflection of light passing near a massive body (contrary to obser
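As a numerical sanity check on the Poisson equation above: away from the source, ρ = 0, so the point-mass potential Φ = −GM/r should have vanishing Laplacian. A sketch using a central-difference Laplacian; units with GM = 1 and the sample point are illustrative assumptions.

```python
def phi(x, y, z, GM=1.0):
    """Newtonian point-mass potential, Phi = -GM/r."""
    return -GM / (x * x + y * y + z * z) ** 0.5

def laplacian(f, x, y, z, h=1e-3):
    # second-order central differences, one per coordinate
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) +
            (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) +
            (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h))) / h**2

# at a point away from the origin the mass density is zero,
# so Poisson's equation reduces to Laplace's: the Laplacian vanishes
print(abs(laplacian(phi, 1.0, 2.0, 2.0)) < 1e-5)  # True
```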
https://en.wikipedia.org/wiki/Morton%27s%20neuroma
Morton's neuroma is a benign neuroma of an intermetatarsal plantar nerve, most commonly of the second and third intermetatarsal spaces (between the second/third and third/fourth metatarsal heads; the first is of the big toe), which results in the entrapment of the affected nerve. The main symptoms are pain and/or numbness, sometimes relieved by ceasing to wear footwear with tight toe boxes and high heels (which have been linked to the condition). The condition is named after Thomas George Morton, though it was first correctly described by a chiropodist named Durlacher. Some sources claim that entrapment of the plantar nerve resulting from compression between the metatarsal heads, as originally proposed by Morton, is highly unlikely, because the plantar nerve is on the plantar side of the transverse metatarsal ligament and thus does not come into contact with the metatarsal heads. It is more likely that the transverse metatarsal ligament is the cause of the entrapment. Though the condition is labeled as a neuroma, many sources do not consider it a true tumor, but rather a perineural fibroma (fibrous tissue formation around nerve tissue). Signs and symptoms Symptoms include pain on weight bearing, frequently after only a short time. The nature of the pain varies widely among individuals. Some people experience shooting pain affecting the contiguous halves of two toes. Others describe a feeling akin to having a pebble in the shoe or walking on razor blades. Burning, numbness, and paresthesia may also be experienced. The symptoms progress over time, often beginning as a tingling sensation in the ball of the foot. Morton's neuroma lesions have been found using MRI in patients without symptoms. Diagnosis Negative signs include a lack of obvious deformities, erythema, signs of inflammation, or limitation of movement. Direct pressure between the metatarsal heads will replicate the symptoms, as will compression of the forefoot between the finger and thumb so as to comp
https://en.wikipedia.org/wiki/Topological%20order
In physics, topological order is a kind of order in the zero-temperature phase of matter (also known as quantum matter). Macroscopically, topological order is defined and described by robust ground state degeneracy and quantized non-Abelian geometric phases of degenerate ground states. Microscopically, topological orders correspond to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition. Various topologically ordered states have interesting properties, such as (1) topological degeneracy and fractional statistics or non-Abelian statistics that can be used to realize a topological quantum computer; (2) perfect conducting edge states that may have important device applications; (3) emergent gauge field and Fermi statistics that suggest a quantum information origin of elementary particles; (4) topological entanglement entropy that reveals the entanglement origin of topological order, etc. Topological order is important in the study of several physical systems such as spin liquids and the quantum Hall effect, along with potential applications to fault-tolerant quantum computation. Topological insulators and topological superconductors (beyond 1D) do not have topological order as defined above, their entanglements being only short-ranged. Background Matter composed of atoms can have different properties and appear in different forms, such as solid, liquid, superfluid, etc. These various forms of matter are often called states of matter or phases. According to condensed matter physics and the principle of emergence, the different properties of materials generally arise from the different ways in which the atoms are organized in the materials. Those different organizations of the atoms (or other particles) are formally called the orders in the materials. Atoms can organize in many ways which lead to many different orders and m
https://en.wikipedia.org/wiki/Ralph%20P.%20Boas%20Jr.
Ralph Philip Boas Jr. (August 8, 1912 – July 25, 1992) was a mathematician, teacher, and journal editor. He wrote over 200 papers, mainly in the fields of real and complex analysis. Biography He was born in Walla Walla, Washington, the son of an English professor at Whitman College, but moved frequently as a child; his younger sister, Marie Boas Hall, later to become a historian of science, was born in Springfield, Massachusetts, where his father had become a high school teacher. He was home-schooled until the age of eight, began his formal schooling in the sixth grade, and graduated from high school while still only 15. After a gap year auditing classes at Mount Holyoke College (where his father had become a professor) he entered Harvard, intending to major in chemistry and go into medicine, but ended up studying mathematics instead. His first mathematics publication was written as an undergraduate, after he discovered an incorrect proof in another paper. He got his A.B. degree in 1933, received a Sheldon Fellowship for a year of travel, and returned to Harvard for his doctoral studies in 1934. He earned his doctorate there in 1937, under the supervision of David Widder. After postdoctoral studies at Princeton University with Salomon Bochner, and then the University of Cambridge in England, he began a two-year instructorship at Duke University, where he met his future wife, Mary Layne, also a mathematics instructor at Duke. They were married in 1941, and when the United States entered World War II later that year, Boas moved to the Navy Pre-flight School in Chapel Hill, North Carolina. In 1942, he interviewed for a position in the Manhattan Project, at the Los Alamos National Laboratory, but ended up returning to Harvard to teach in a Navy instruction program there, while his wife taught at Tufts University. Beginning when he was an instructor at Duke University, Boas had become a prolific reviewer for Mathematical Reviews, and at the end of the war he took a
https://en.wikipedia.org/wiki/Quick%20ratio
In finance, the quick ratio, also known as the acid-test ratio, is a type of liquidity ratio which measures the ability of a company to use its near-cash or quick assets to extinguish or retire its current liabilities immediately. It is defined as the ratio between quickly available or liquid assets and current liabilities. Quick assets are current assets that can presumably be quickly converted to cash at close to their book values. A normal liquid ratio is considered to be 1:1. A company with a quick ratio of less than 1 cannot currently fully pay back its current liabilities. The quick ratio is similar to the current ratio but provides a more conservative assessment of the liquidity position of firms as it excludes inventory, which is not considered sufficiently liquid. Formula Quick ratio = Quick assets / Current liabilities, or specifically: Quick ratio = (Cash and cash equivalents + Marketable securities + Accounts receivable) / Current liabilities. It can also be expressed as: Quick ratio = (Current assets − Inventory − Prepayments) / Current liabilities. Ratio Ratios are tests of viability for business entities but do not give a complete picture of the business's health. If a business has large amounts in accounts receivable which are due for payment after a long period (say 120 days), and essential business expenses and accounts payable due for immediate payment, the quick ratio may look healthy when the business is actually about to run out of cash. In contrast, if the business has negotiated fast payment or cash from customers, and long terms from suppliers, it may have a very low quick ratio and yet be very healthy. More detailed analysis of all major payables and receivables, in line with market sentiment and with input data adjusted accordingly, will give more sensible outcomes and actionable insights. Generally, the acid-test ratio should be 1:1 or higher; however, this varies widely by industry. In general, the higher the ratio, the greater the company's liquidity (i.e., the better able to meet current obligations using liquid assets). See also Current ratio Financial Accounting
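The formula above can be sketched in code. The balance-sheet figures below are made up for illustration:

```python
# Illustrative sketch (all figures hypothetical): the quick ratio from a
# simplified balance sheet, excluding inventory and prepayments.
def quick_ratio(cash, marketable_securities, accounts_receivable,
                current_liabilities):
    quick_assets = cash + marketable_securities + accounts_receivable
    return quick_assets / current_liabilities

# Hypothetical firm: $40k cash, $10k securities, $30k receivables,
# against $100k of current liabilities.
ratio = quick_ratio(40_000, 10_000, 30_000, 100_000)
print(round(ratio, 2))  # 0.8 -> below 1:1, so quick assets alone
                        # cannot cover current liabilities
```

A result below 1 flags exactly the situation described in the article: the firm could not retire its current liabilities from quick assets alone.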
https://en.wikipedia.org/wiki/List%20of%20complex%20and%20algebraic%20surfaces
This is a list of named algebraic surfaces, compact complex surfaces, and families thereof, sorted according to their Kodaira dimension following Enriques–Kodaira classification. Kodaira dimension −∞ Rational surfaces Projective plane Quadric surfaces Cone (geometry) Cylinder Ellipsoid Hyperboloid Paraboloid Sphere Spheroid Rational cubic surfaces Cayley nodal cubic surface, a certain cubic surface with 4 nodes Cayley's ruled cubic surface Clebsch surface or Klein icosahedral surface Fermat cubic Monkey saddle Parabolic conoid Plücker's conoid Whitney umbrella Rational quartic surfaces Châtelet surfaces Dupin cyclides, inversions of a cylinder, torus, or double cone in a sphere Gabriel's horn Right circular conoid Roman surface or Steiner surface, a realization of the real projective plane in real affine space Tori, surfaces of revolution generated by a circle about a coplanar axis Other rational surfaces in space Boy's surface, a sextic realization of the real projective plane in real affine space Enneper surface, a nonic minimal surface Henneberg surface, a minimal surface of degree 15 Bour's minimal surface, a surface of degree 16 Richmond surfaces, a family of minimal surfaces of variable degree Other families of rational surfaces Coble surfaces Del Pezzo surfaces, surfaces with an ample anticanonical divisor Hirzebruch surfaces, rational ruled surfaces Segre surfaces, intersections of two quadrics in projective 4-space Unirational surfaces of characteristic 0 Veronese surface, the Veronese embedding of the projective plane into projective 5-space White surfaces, the blow-up of the projective plane at points by the linear system of degree- curves through those points Bordiga surfaces, the White surfaces determined by families of quartic curves Non-rational ruled surfaces Class VII surfaces Vanishing second Betti number: Hopf surfaces Inoue surfaces; several other families discovered by Inoue have also been called "
https://en.wikipedia.org/wiki/Academy%20of%20Nutrition%20and%20Dietetics
The Academy of Nutrition and Dietetics is a 501(c)(6) trade association in the United States. With over 112,000 members, the association claims to be the largest organization of food and nutrition professionals. It has registered dietitian nutritionists (RDNs), nutrition and dietetics technicians registered (NDTRs), and other dietetics professionals as members. Founded in 1917 as the American Dietetic Association, the organization officially changed its name to the Academy of Nutrition and Dietetics in 2012. According to the group's website, about 65% of its members are RDNs, and another 2% are NDTRs. The group's primary activities include providing testimony at hearings, lobbying the United States Congress and other governmental bodies, commenting on proposed regulations, and publishing statements on various topics pertaining to food and nutrition. The association is funded by a number of food multinationals, pharmaceutical companies, and food industry lobbying groups, such as the National Confectioners Association. The Academy has faced controversy regarding corporate influence related to its relationship with the food industry and funding from corporate groups such as McDonald's, Coca-Cola, Mars, and others. History The Academy of Nutrition and Dietetics was founded in 1917 in Cleveland, Ohio, by a group of women led by Lenna F. Cooper and the Academy's first president, Lulu G. Graves, for the purpose of helping the government conserve food and improve public health during World War I. It is now headquartered in Chicago, Illinois. The original mission of the Academy was in part to help make maximal use of America's food resources during wartime. In its first year, the Academy attracted 58 members. It remained a small organization, staying under the 1,000-member mark until the 1930s. As the group's scope expanded, so did its membership numbers. Between the 1930s and 1960s, membership grew to more than 60,000. Growth trajectory has since stabilized, and the Acade
https://en.wikipedia.org/wiki/Bit%20cell
A bit cell is the length of tape, the area of disc surface, or the part of an integrated circuit in which a single bit is recorded. The smaller the bit cells are, the greater the storage density of the medium. In magnetic storage, the magnetic flux or magnetization does not necessarily change at the boundaries of bit cells to indicate bit states. For example, the presence of a magnetic transition within a bit cell might record state 1, and the lack of such a transition might record state 0. Other encodings are also possible. See also Computer data storage
https://en.wikipedia.org/wiki/Operation%20CHAOS
Operation CHAOS or Operation MHCHAOS was a Central Intelligence Agency (CIA) domestic espionage project targeting American citizens, operating from 1967 to 1974. It was established by President Lyndon B. Johnson and expanded under President Richard Nixon, and its mission was to uncover possible foreign influence on domestic race, anti-war, and other protest movements. The operation was launched under Director of Central Intelligence (DCI) Richard Helms by chief of counter-intelligence James Jesus Angleton, and headed by Richard Ober. The "MH" designation signified that the program had a global area of operations. Background Under the National Security Act of 1947, the CIA was charged with the collection, correlation, and evaluation of intelligence. While the Act does not specifically prohibit collecting domestic intelligence, or restrict the agency to collecting only foreign intelligence, Executive Order 12333 of 1981 added prohibitions to limit CIA activities. The CIA began domestic recruiting operations in 1959 in the process of finding Cuban exiles who could be used in the campaign against Cuba and President Fidel Castro. As these operations expanded, the CIA formed a Domestic Operations Division in 1964. In 1965, President Lyndon Johnson requested that the CIA begin its own investigation into domestic dissent, independent of the FBI's ongoing COINTELPRO. The CIA developed numerous operations targeting American dissidents in the US. Many of these programs operated under the CIA's Office of Security, including: HTLINGUAL – Directed at letters passing between the United States and the then Soviet Union; the program involved the examination of correspondence to and from individuals or organizations placed on a watchlist. Project 2 – Directed at infiltration of foreign intelligence targets by agents posing as dissident sympathizers and which, like CHAOS, had placed agents within domestic radical organizations for the purposes of training and establishment of dissident credentials. Project MERRIMAC – Designed to infiltrate
https://en.wikipedia.org/wiki/Peppadew
Peppadew is a trademarked brand name of South African food company Peppadew International (Pty) Ltd. for a pickled version of the Juanita pepper. Peppadew International produces and markets a variety of food products under the Peppadew brand, including jalapeño peppers, Goldew peppers, pickled onions, hot sauces, pasta sauces and relishes, but is best known for its sweet piquanté pepper (a cultivar of Capsicum baccatum) grown in the Limpopo province of South Africa. History Peppadew International and the Peppadew brand was founded in 1995 after founder Johan Steenkamp discovered a sweet piquanté pepper in the Eastern Cape of South Africa. Upon discovery of the pepper, plant breeders' rights were applied for and obtained with the South African Department of Agriculture, Forestry and Fisheries in order to protect the species. Johan Steenkamp started cultivating and processing the peppers in the Tzaneen region of South Africa, where Peppadew International's factory is still based today. Steenkamp later sold his interest in the company in 2004. Peppadew International were the first to market this type of pepper. Although the pepper is sometimes described as a cross between a pepper and a tomato, this description is not botanically accurate, and refers only to the resemblance in color and size between red peppers and cherry tomatoes. In 2011 Bon Appetit published a recipe for Pimento Mac & Cheese calling for Peppadews, then had to run a follow-up piece telling readers how to find the peppers. In 2016 the Baltimore Sun reported that there was a black market for the pepper's seeds. Processing Peppadew brand peppers are grown into seedlings from hand-selected seeds for six to eight weeks. They are then transferred to contract farmers who then grow the peppers under the guidance of Peppadew International's agricultural team. The raw peppers are then harvested and sent to Peppadew International's processing facility where the peppers are de-seeded, treated and bottled in
https://en.wikipedia.org/wiki/Interlock%20%28engineering%29
An interlock is a feature that makes the state of two mechanisms or functions mutually dependent. It may consist of electrical, electronic, or mechanical devices or systems. In most applications, an interlock is used to help prevent damage to the machine or injury to the operator handling it. For example, elevators are equipped with an interlock that prevents the moving elevator from opening its doors and prevents the stationary elevator (with open doors) from moving. Interlocks may include sophisticated elements such as curtains of infrared beams, photodetectors, simple switches, and locks. An interlock can also be implemented with digital or analogue electronics, or as an interlocking program running on a computer. Trapped-key interlocking Trapped-key interlocking is a method of ensuring safety in industrial environments by forcing the operator through a predetermined sequence using a defined selection of keys, locks and switches. It is called trapped-key because it works by releasing and trapping keys in a predetermined sequence. After the control or power has been isolated, a key is released that can be used to grant access to individual or multiple doors. Consider a trapped-key transfer block, one part of a trapped-key interlocking system. To obtain the keys held in such a block, a primary key must first be inserted and turned. Once that key is turned, the operator may retrieve the remaining keys, which are used to open other doors; the primary key stays trapped, and will not turn back for release, until all the other keys have been returned to their places. Another example is an electric kiln. To prevent access to the inside of an electric kiln, a trapped-key system may be used to interlock a disconnecting switch and the kiln door. While the switch is turned on, the key is held by the interlock attached to the disconnecting
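The trapped-key sequence described above can be sketched as a small state machine. This is a toy model with invented names, not a real control system:

```python
# Toy model of a trapped-key sequence: isolating power releases a key,
# that key is required before the access door will open, and the key is
# trapped (power cannot be restored) while the door is open.
class TrappedKeyInterlock:
    def __init__(self):
        self.power_on = True
        self.key_released = False
        self.door_open = False

    def isolate_power(self):
        self.power_on = False
        self.key_released = True   # key is freed only once power is off

    def open_door(self):
        if not self.key_released:
            raise RuntimeError("door is locked: isolate power first")
        self.door_open = True
        self.key_released = False  # key is trapped while the door is open

    def restore_power(self):
        if self.door_open:
            raise RuntimeError("cannot energize: door still open")
        self.power_on = True

lock = TrappedKeyInterlock()
lock.isolate_power()
lock.open_door()       # allowed only after isolation
```

Attempting `open_door()` on a fresh instance, or `restore_power()` while the door is open, raises an error, which mirrors how the physical key sequence forces a safe order of operations.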
https://en.wikipedia.org/wiki/MacUpdate
MacUpdate is a Mac software download website founded in 1996. History In the Inc. 5000 list of private American companies with the fastest revenue growth, MacUpdate was listed 319th in 2008, 114th in 2009, and 233rd in 2010. MacUpdate has offered several "bundles" offering Mac software at a discounted price. The company offered an application called MacUpdate Desktop ($20/year with a 10-day trial) which automatically downloaded and installed updates to other applications installed on a user's Mac. MacUpdate Desktop has since been discontinued. In 2020, MacUpdate was acquired by Clario Tech Ltd., a London- and Kyiv-based cybersecurity company.
https://en.wikipedia.org/wiki/Standard%20normal%20table
In statistics, a standard normal table, also called the unit normal table or Z table, is a mathematical table for the values of Φ, the cumulative distribution function of the normal distribution. It is used to find the probability that a statistic is observed below, above, or between values on the standard normal distribution, and by extension, any normal distribution. Since probability tables cannot be printed for every normal distribution, as there are an infinite variety of normal distributions, it is common practice to convert a normal to a standard normal (known as a z-score) and then use the standard normal table to find probabilities. Normal and standard normal distribution Normal distributions are symmetrical, bell-shaped distributions that are useful in describing real-world data. The standard normal distribution, represented by Z, is the normal distribution having a mean of 0 and a standard deviation of 1. Conversion If X is a random variable from a normal distribution with mean μ and standard deviation σ, its Z-score may be calculated from X by subtracting μ and dividing by the standard deviation: Z = (X − μ) / σ. If X̄ is the mean of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the standard error is σ/√n, so Z = (X̄ − μ) / (σ/√n). If T is the total of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the expected total is nμ and the standard error is σ√n, so Z = (T − nμ) / (σ√n). Reading a Z table Formatting / layout Z tables are typically composed as follows: The label for rows contains the integer part and the first decimal place of Z. The label for columns contains the second decimal place of Z. The values within the table are the probabilities corresponding to the table type. These probabilities are calculations of the area under the normal curve from the starting point (0 for cumulative from mean, negative infinity for cumulative, and positive infinity for complementary cumulative) to Z. Example: To find Φ(0.69), one would look down the rows to find 0.6, then across the columns to find 0.09; the entry at that row and column is Φ(0.69) ≈ 0.7549.
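The conversion and lookup steps above can be reproduced in code, since Φ has the closed form Φ(z) = (1 + erf(z/√2))/2. The distribution in the example below is hypothetical:

```python
import math

# Sketch: reproducing a standard normal table lookup with
# Phi(z) = (1 + erf(z / sqrt(2))) / 2, after standardizing a raw score.
def phi(z):
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_score(x, mu, sigma):
    """Standardize a value drawn from N(mu, sigma^2)."""
    return (x - mu) / sigma

# Hypothetical example: X ~ N(100, 15^2); probability that X < 110.35.
z = z_score(110.35, 100, 15)   # 0.69
print(round(phi(z), 4))        # ~0.7549, matching the table entry
```

The computed value agrees with the tabulated Φ(0.69), which is exactly what the row-and-column lookup in a printed Z table delivers.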
https://en.wikipedia.org/wiki/Project%20MERRIMAC
Project MERRIMAC was a domestic espionage operation coordinated under the Office of Security of the CIA. It involved information gathering via infiltration and surveillance of Washington-based anti-war groups that might pose potential threats to the CIA. However, the data gathered also included general information on the infrastructure of targeted communities. Project MERRIMAC and its twin program, Project RESISTANCE, were both coordinated by the CIA Office of Security. In addition, the twin projects were branch operations that relayed civilian information to their parent program, Operation CHAOS. The Assassination Archives and Research Center believes that Project MERRIMAC began in February 1967. See also Operation CHAOS Project RESISTANCE COINTELPRO
https://en.wikipedia.org/wiki/Journal%20club
A journal club is a group of individuals who meet regularly to critically evaluate recent articles in the academic literature, such as the scientific literature, medical literature, or philosophy literature. Journal clubs are usually organized around a defined subject in basic or applied research. For example, the application of evidence-based medicine to some area of medical practice can be facilitated by a journal club. Typically, each participant can voice their view relating to several questions, such as the appropriateness of the research design, the statistics employed, the appropriateness of the controls that were used, etc. There might be an attempt to synthesize the results of several papers, even if some of these results might at first appear to contradict each other. Even if the results of the study are seen as valid, there might be a discussion of how useful the results are and whether they might lead to new research or to new applications. Journal clubs are sometimes used in the education of graduate or professional students. These clubs help students become more familiar with the advanced literature in their new field of study. In addition, they help students improve their skills in understanding and debating current topics of active interest in their field. This type of journal club may sometimes be taken for credit. Research laboratories may also organize journal clubs for all researchers in the lab to help them keep up with the literature produced by others who work in their field. Traditional journal club Traditionally, journal clubs have met weekly or monthly to discuss current research in a topic relevant to the field. An analysis of one hundred publications describing and evaluating journal clubs found that they are most effective if they have a clearly identified leader and have an established purpose that all articles can be linked to. Online journal clubs Prominent journals and scientific societies have beg
https://en.wikipedia.org/wiki/Radio%20over%20IP
Radio over Internet Protocol, or RoIP, is similar to Voice over IP (VoIP), but augments two-way radio communications rather than telephone calls. From the system point of view, it is essentially VoIP with push-to-talk. To the user it can be implemented like any other radio network. With RoIP, at least one node of a network is a radio (or a radio with an IP interface device) connected via IP to other nodes in the radio network. The other nodes can be two-way radios, but could also be dispatch consoles either traditional (hardware) or modern (software on a PC), POTS telephones, softphone applications running on a computer such as Skype phone, PDA, smartphone, or some other communications device accessible over IP. RoIP can be deployed over private networks as well as the public Internet. It is useful in land mobile radio systems used by public safety departments and fleets of utilities spread over a broad geographic area. Like other centralized radio systems such as trunked radio systems, issues of delay or latency and reliance on centralized infrastructure can be impediments to adoption by public safety agencies. RoIP is not a proprietary or protocol-limited construct but a basic concept that has been implemented in a number of ways. Several systems have been implemented in the amateur radio community such as Galaxy PTT Comms, AllStar Link, BroadNet, IRLP, and EchoLink that have demonstrated the utility of RoIP in a partly or entirely open-source environment. Many commercial radio systems vendors such as Motorola and Harris have adopted RoIP as part of their system designs. The motivation to deploy RoIP technology is usually driven by one of three factors: first, the need to span large geographic areas or operate in areas without sufficient coverage from radio towers; second, the desire to provide more reliable, or at least more repairable links in radio systems; and third, to support the use of many base station users, that is, voice communications from station
https://en.wikipedia.org/wiki/Index%20to%20Marine%20%26%20Lacustrine%20Geological%20Samples
The Index to Marine & Lacustrine Geological Samples is a collaboration between multiple institutions and agencies that operate geological sample repositories. The purpose of the database is to help researchers locate sea floor and lakebed cores, grabs, dredges, and drill samples in their collections. Sample material is available from participating institutions unless noted as unavailable. Data include basic collection and storage information. Lithology, texture, age, principal investigator, province, weathering/metamorphism, glass remarks, and descriptive comments are included for some samples. Links are provided to related data and information at the institutions and at NCEI. Data are coded by individual institutions, several of which receive funding from the US National Science Foundation. For more information see the NSF Division of Ocean Sciences Data and Sample Policy. The Index is endorsed by the Intergovernmental Oceanographic Commission, Committee on International Oceanographic Data and Information Exchange (IODE-XIV.2). The index is maintained by the National Centers for Environmental Information (NCEI), formerly the National Geophysical Data Center (NGDC), and collocated World Data Center for Geophysics, Boulder, Colorado. NCEI is part of the National Environmental Satellite, Data and Information Service of the National Oceanic & Atmospheric Administration, U. S. Department of Commerce. Searches and data downloads are available via a JSP and an ArcIMS interface. Data selections can be downloaded in tab-delimited or shapefile form, depending on the interface used. Both WMS and WFS interfaces are also available. The Index was created in 1977 in response to a meeting of Curators of Marine Geological Samples, sponsored by the U.S. National Science Foundation. The Curators' group continues to meet every 2–3 years. Dataset Digital Object Identifier DOI:10.7289/V5H41PB8 Web site The Index to Marine and Lacustrine Geological Samples Participating Ins
https://en.wikipedia.org/wiki/Pad%C3%A9%20approximant
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series. The Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons Padé approximants are used extensively in computer calculations. They have also been used as auxiliary functions in Diophantine approximation and transcendental number theory, though for sharp results ad hoc methods, in some sense inspired by the Padé theory, typically replace them. Since a Padé approximant is a rational function, an artificial singular point may occur as an approximation, but this can be avoided by Borel–Padé analysis. The reason the Padé approximant tends to be a better approximation than a truncated Taylor series is clear from the viewpoint of the multi-point summation method. Since there are many cases in which the asymptotic expansion at infinity becomes 0 or a constant, it can be interpreted as an "incomplete two-point Padé approximation", in which the ordinary Padé approximation improves on the method of truncating a Taylor series. Definition Given a function f and two integers m ≥ 0 and n ≥ 1, the Padé approximant of order [m/n] is the rational function R(x) = (a₀ + a₁x + a₂x² + ⋯ + aₘxᵐ) / (1 + b₁x + b₂x² + ⋯ + bₙxⁿ), which agrees with f(x) to the highest possible order, which amounts to f(x) − R(x) = O(x^(m+n+1)). Equivalently, if R(x) is expanded in a Maclaurin series (Taylor series at 0), its first m + n + 1 terms would equal the first m + n + 1 terms of f(x). When it exists, the Padé approximant is unique as a formal power series for the given m and n. The Padé approximant defined above is also denoted [m/n]f(x). Computation F
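The order conditions in the definition translate directly into a small linear system for the denominator coefficients. The sketch below solves it with exact rational arithmetic; it assumes the [m/n] approximant exists (i.e. the system is nonsingular) and that m ≥ n − 1 so all coefficient indices are valid:

```python
from fractions import Fraction

def solve(A, rhs):
    """Tiny exact Gaussian elimination (assumes a nonsingular system)."""
    rows = [row[:] + [r] for row, r in zip(A, rhs)]
    size = len(rows)
    for i in range(size):
        p = next(r for r in range(i, size) if rows[r][i] != 0)
        rows[i], rows[p] = rows[p], rows[i]
        piv = rows[i][i]
        rows[i] = [v / piv for v in rows[i]]
        for r in range(size):
            if r != i and rows[r][i] != 0:
                factor = rows[r][i]
                rows[r] = [v - factor * w for v, w in zip(rows[r], rows[i])]
    return [row[-1] for row in rows]

def pade(c, m, n):
    """Coefficients (a, b) of the [m/n] approximant from Maclaurin
    coefficients c[0..m+n]; assumes m >= n - 1 and a solvable system."""
    # Order conditions at x^(m+1)..x^(m+n): sum_k b[k]*c[l-k] = 0, b[0] = 1.
    A = [[c[l - k] for k in range(1, n + 1)] for l in range(m + 1, m + n + 1)]
    rhs = [-c[l] for l in range(m + 1, m + n + 1)]
    b = [Fraction(1)] + solve(A, rhs)
    # Numerator coefficients then follow from the low-order terms.
    a = [sum(b[k] * c[j - k] for k in range(min(j, n) + 1))
         for j in range(m + 1)]
    return a, b

# e^x has Maclaurin coefficients 1/k!; the known [2/2] approximant is
# (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
c = [Fraction(1), Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]
a, b = pade(c, 2, 2)
assert a == [1, Fraction(1, 2), Fraction(1, 12)]
assert b == [1, Fraction(-1, 2), Fraction(1, 12)]
```

Recovering the classical [2/2] approximant of eˣ confirms the construction: the rational function matches the exponential's series through x⁴, one order beyond what a degree-2 Taylor polynomial achieves.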
https://en.wikipedia.org/wiki/Proofs%20of%20quadratic%20reciprocity
In number theory, the law of quadratic reciprocity, like the Pythagorean theorem, has lent itself to an unusually large number of proofs. Several hundred proofs of the law of quadratic reciprocity have been published. Proof synopsis Of the elementary combinatorial proofs, there are two which apply types of double counting. One, by Gotthold Eisenstein, counts lattice points. Another applies Zolotarev's lemma to the group of units modulo pq, expressed by the Chinese remainder theorem as the product of the groups of units modulo p and modulo q, and calculates the signature of a permutation. The shortest known proof also uses a simplified version of double counting, namely double counting modulo a fixed prime. Eisenstein's proof Eisenstein's proof of quadratic reciprocity is a simplification of Gauss's third proof. It is more geometrically intuitive and requires less technical manipulation. The point of departure is "Eisenstein's lemma", which states that for distinct odd primes p, q, (q/p) = (−1)^(Σᵤ ⌊qu/p⌋), where ⌊x⌋ denotes the floor function (the largest integer less than or equal to x), and where the sum is taken over the even integers u = 2, 4, 6, ..., p−1. For example, with q = 7 and p = 11 the sum is ⌊14/11⌋ + ⌊28/11⌋ + ⌊42/11⌋ + ⌊56/11⌋ + ⌊70/11⌋ = 1 + 2 + 3 + 5 + 6 = 17, so (7/11) = (−1)¹⁷ = −1. This result is very similar to Gauss's lemma, and can be proved in a similar fashion (proof given below). Using this representation of (q/p), the main argument is quite elegant. The sum Σᵤ ⌊qu/p⌋ counts the number of lattice points with even x-coordinate in the interior of the triangle ABC in the following diagram: Because each column has an even number of points (namely q−1 points), the number of such lattice points in the region BCYX is the same modulo 2 as the number of such points in the region CZY: Then by flipping the diagram in both axes, we see that the number of points with even x-coordinate inside CZY is the same as the number of points inside AXY having odd x-coordinates. This can be justified mathematically by noting that ⌊q(p−u)/p⌋ = q − 1 − ⌊qu/p⌋, which has the same parity as ⌊qu/p⌋ because q is odd. The conclusion is that (q/p) = (−1)^μ, where μ is the total number of lattice points in the interior of AXY. Switching p and q, the same argument shows that (p/q) = (−1)^ν, where ν is the number of la
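Eisenstein's lemma is easy to check numerically against Euler's criterion, which gives the Legendre symbol as q^((p−1)/2) mod p. The following illustrative sketch compares the two for several pairs of small odd primes:

```python
# Numerical check of Eisenstein's lemma
#   (q/p) = (-1) ** sum(floor(q*u/p) for even u in 2..p-1)
# against Euler's criterion (q/p) = q^((p-1)/2) mod p, for odd primes.
def legendre_euler(q, p):
    r = pow(q, (p - 1) // 2, p)    # 1 or p-1 for q not divisible by p
    return 1 if r == 1 else -1

def legendre_eisenstein(q, p):
    s = sum((q * u) // p for u in range(2, p, 2))   # even u = 2,4,...,p-1
    return -1 if s % 2 else 1

for p in (3, 5, 7, 11, 13):
    for q in (3, 5, 7, 11, 13):
        if p != q:
            assert legendre_eisenstein(q, p) == legendre_euler(q, p)
print(legendre_eisenstein(7, 11))  # -1: 7 is not a quadratic residue mod 11
```

The (7, 11) case reproduces the worked example in the text: the floor sum is 17, which is odd, so the symbol is −1.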
https://en.wikipedia.org/wiki/Glan%E2%80%93Taylor%20prism
A Glan–Taylor prism is a type of prism which is used as a polarizer or polarizing beam splitter. It is one of the most common types of modern polarizing prism. It was first described by Archard and Taylor in 1948. The prism is made of two right-angled prisms of calcite (or sometimes other birefringent materials) separated on their long faces with an air gap. The optical axes of the calcite crystals are aligned parallel to the plane of reflection. Total internal reflection of s-polarized light at the air gap ensures that only p-polarized light is transmitted by the device. Because the angle of incidence at the gap can be reasonably close to Brewster's angle, unwanted reflection of p-polarized light is reduced, giving the Glan–Taylor prism better transmission than the Glan–Foucault design. Note that while the transmitted beam is completely polarized, the reflected beam is not. The sides of the crystal can be polished to allow the reflected beam to exit or can be blackened to absorb it. The latter reduces unwanted Fresnel reflection of the rejected beam. A variant of the design exists called a Glan–laser prism. This is a Glan–Taylor prism with a steeper angle for the cut in the prism, which decreases reflection loss at the expense of reduced angular field of view. These polarizers are also typically designed to tolerate very high beam intensities, such as those produced by a laser. The differences may include using calcite selected for low scattering loss, improved polish quality on the faces and especially on the sides of the crystal, and better antireflection coatings. Prisms with irradiance damage thresholds greater than 1 GW/cm2 are commercially available. See also Glan–Foucault prism Glan–Thompson prism
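The angle bookkeeping behind the design can be illustrated with a short sketch; the calcite indices below are typical textbook values at visible wavelengths, assumed here for illustration only:

```python
import math

# Illustrative indices for calcite near 590 nm (assumed values)
n_o = 1.658  # ordinary index, seen by s-polarized light
n_e = 1.486  # extraordinary index, seen by p-polarized light

crit_s = math.degrees(math.asin(1 / n_o))      # TIR threshold for s-polarization
crit_p = math.degrees(math.asin(1 / n_e))      # TIR threshold for p-polarization
brewster_p = math.degrees(math.atan(1 / n_e))  # zero p-reflection angle at the gap

print(f"s-pol TIR above {crit_s:.1f} deg, p-pol TIR above {crit_p:.1f} deg")
print(f"Brewster angle for p-pol at the gap: {brewster_p:.1f} deg")
# A cut angle between crit_s and crit_p totally reflects s-polarized light
# while transmitting p-polarized light at an incidence not far above
# Brewster's angle, which is the working principle described above.
```

With these values the usable window sits between roughly 37° and 42°, only a few degrees above the p-polarization Brewster angle near 34°, consistent with the "reasonably close to Brewster's angle" remark.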
https://en.wikipedia.org/wiki/Davson%E2%80%93Danielli%20model
The Davson–Danielli model (or paucimolecular model) was a model of the plasma membrane of a cell, proposed in 1935 by Hugh Davson and James Danielli. The model describes a phospholipid bilayer that lies between two layers of globular proteins, making the membrane both trilaminar and lipoproteinaceous. The phospholipid bilayer had already been proposed by Gorter and Grendel in 1925; however, the flanking proteinaceous layers in the Davson–Danielli model were novel and intended to explain Danielli's observations on the surface tension of lipid bilayers (it is now known that the phospholipid head groups are sufficient to explain the measured surface tension). Evidence for the model included electron microscopy, in which high-resolution micrographs showed three distinct layers within a cell membrane, with an inner white core and two flanking dark layers. Since proteins usually appear dark and phospholipids white, the micrographs were interpreted as a phospholipid bilayer sandwiched between two protein layers. The model proposed an explanation for the ability of certain molecules to permeate the cell membrane while other molecules could not, while also accounting for the thinness of cell membranes. Despite the Davson–Danielli model being scientifically accepted, the model made assumptions, such as assuming that all membranes had the same structure, thickness and lipid-protein ratio, contradicting the observation that membranes can have specialized functions. Furthermore, the Davson–Danielli model could not account for certain observed phenomena, notably the bulk movement of molecules through the plasma membrane by active transport. Another shortcoming of the Davson–Danielli model was that many membrane proteins were known to be amphipathic and largely hydrophobic, so their proposed placement outside the membrane, in direct contact with water, remained an unresolved complication. The Davson–Danielli model was scientifically accepted until Seymour Jonathan Singer and Garth L.
https://en.wikipedia.org/wiki/Regenerative%20Satellite%20Mesh%20%E2%80%93%20A
Regenerative Satellite Mesh – A (RSM-A) is a satellite communications protocol standardized internationally by the Telecommunications Industry Association and the European Telecommunications Standards Institute. It is based upon the Spaceway Ka-band communications system developed by Hughes Network Systems. It is expected to be utilized by the Hughes Network Systems satellite called Spaceway-3. The standard is meant to provide broadband capabilities of up to 512 kbit/s, 2 Mbit/s, and 16 Mbit/s uplink data communication rates with fixed Ka-band satellite terminal antennas sized as small as 77 cm. The standard consists of the following documents: TIA-1040.1.01 Physical Layer Specification; Part 1: General Description TIA-1040.1.02 Physical Layer Specification; Part 2: Frame Structure TIA-1040.1.03 Physical Layer Specification; Part 3: Channel Coding TIA-1040.1.04 Physical Layer Specification; Part 4: Modulation TIA-1040.1.05 Physical Layer Specification; Part 5: Radio Transmission and Reception TIA-1040.1.06 Physical Layer Specification; Part 6: Radio Link Control TIA-1040.1.07 Physical Layer Specification; Part 7: Synchronization TIA-1040.2.01 MAC/SLC Layer Specification; Part 1: General Description TIA-1040.2.02 MAC/SLC Layer Specification; Part 2: SLC Layer TIA-1040.2.03 MAC/SLC Layer Specification; Part 3: ST-SAM interface General Description The standard describes the various segments involved in an RSM-A satellite system, including: Satellite Terminal: fixed satellite terminal for satellite communication linked to terrestrial hosts via connected LANs Satellite Payload: geosynchronous regenerative satellite payload and antennas Network Operations Control Center: handles ground network management and resource management The uplink consists of a multi-frequency time-division multiple access (MF-TDMA) scheme in which individual uplink spotbeams are assigned frequency channels out of the satellite's frequency band. Satellite Terminals transmit on timeslots o
https://en.wikipedia.org/wiki/Thoracic%20aorta
The thoracic aorta is a part of the aorta located in the thorax. It is a continuation of the aortic arch. It is located within the posterior mediastinal cavity, but frequently bulges into the left pleural cavity. The descending thoracic aorta begins at the lower border of the fourth thoracic vertebra and ends in front of the lower border of the twelfth thoracic vertebra, at the aortic hiatus in the diaphragm where it becomes the abdominal aorta. At its commencement, it is situated on the left of the vertebral column; it approaches the median line as it descends; and, at its termination, lies directly in front of the column. The thoracic aorta has a curved shape that faces forward, and has small branches. It has a radius of approximately 1.16 cm. Structure The thoracic aorta is part of the descending aorta, which has different parts named according to their structure or location. The thoracic aorta is a continuation of the descending aorta and becomes the abdominal aorta when it passes through the diaphragm. The initial part of the aorta, the ascending aorta, rises out of the left ventricle, from which it is separated by the aortic valve. The two coronary arteries of the heart arise from the aortic root, just above the cusps of the aortic valve. The aorta then arches back over the right pulmonary artery. Three vessels come out of the aortic arch: the brachiocephalic artery, the left common carotid artery, and the left subclavian artery. These vessels supply blood to the head, neck, thorax and upper limbs. Behind the descending thoracic aorta are the vertebral column and the hemiazygos vein. To the right are the azygos vein and thoracic duct, and to the left are the left pleura and lung. In front of the thoracic aorta lie the root of the left lung, the pericardium, the esophagus, and the diaphragm. The esophagus, which is covered by a nerve plexus, lies to the right of the descending thoracic aorta. Lower, the esophagus passes in front of the aorta, and ultimately
https://en.wikipedia.org/wiki/Cycler
A cycler is a potential spacecraft on a closed transfer orbit that would pass close to two celestial bodies at regular intervals. Cyclers could be used for carrying heavy supplies, life support and radiation shielding. Free return trajectory A free-return trajectory is a symmetrical orbit past the Moon and Earth that was first analysed by Arthur Schwaniger. Lunar cycler A lunar cycler or Earth–Moon cycler is a cycler orbit, or spacecraft therein, which periodically passes close by the Earth and the Moon, using gravity assists and occasional propellant-powered corrections to maintain its trajectory between the two. If the fuel required to reach a particular cycler orbit from both the Earth and the Moon is modest, and the travel time between the two along the cycler is reasonable, then having a spacecraft in the cycler can provide an efficient and regular method for space transportation. Mars cycler A Mars cycler or Earth–Mars cycler is a spacecraft trajectory that encounters the Earth and Mars on a regular basis, or a spacecraft on such a trajectory. Interstellar cycler An interstellar cycler or Schroeder cycler is a theoretical spacecraft trajectory that encounters two or more stars on a regular basis, or a spacecraft on such a trajectory.
https://en.wikipedia.org/wiki/Superficial%20temporal%20artery
In human anatomy, the superficial temporal artery is a major artery of the head. It arises from the external carotid artery when it splits into the superficial temporal artery and maxillary artery. Its pulse can be felt above the zygomatic arch, above and in front of the tragus of the ear. Structure The superficial temporal artery is the smaller of the two end branches that split superiorly from the external carotid. Based on its direction, the superficial temporal artery appears to be a continuation of the external carotid. It begins within the parotid gland, behind the neck of the mandible, and passes superficially over the posterior root of the zygomatic process of the temporal bone; about 5 cm above this process it divides into two branches: a. frontal, and a. parietal. Branches The parietal branch of the superficial temporal artery (posterior temporal) is a small artery in the head. It is larger than the frontal branch and curves upward and backward on the side of the head, lying superficial to the temporal fascia; it joins with its fellow of the opposite side, and with the posterior auricular and occipital arteries. The frontal branch of the superficial temporal artery (anterior temporal) runs tortuously upward and forward to the forehead, supplying the muscles, skin, and pericranium in this region, and anastomosing with the supraorbital and frontal arteries. Pitanguy, estimating the path of the nerve in the soft tissue alongside the frontal branch using landmarks, describes a line starting from a point 0.5 cm below the tragus in the direction of the eyebrow, passing 1.5 cm above the lateral extremity of the eyebrow. Relations As it crosses the zygomatic process, it is covered by the auricularis anterior muscle and by a dense fascia; it is crossed by the temporal and zygomatic branches of the facial nerve and one or two veins, and is accompanied by the auriculotemporal nerve, which lies immediately behind it. The superficial temporal artery j
https://en.wikipedia.org/wiki/SGI%20Prism
The Silicon Graphics Prism is a series of visualization computer systems developed and manufactured by Silicon Graphics (SGI). Released in April 2005, the Prism's basic system architecture is based on the Altix 3000 servers, but with graphics hardware added. The Prism uses the Linux operating system and the OpenGL software library. The SGI Prism is offered in three models: Power, Team, and Extreme. The Power level supports two to eight Itanium 2 processors, up to 96 GB of memory and two to four graphics pipelines. The Team level supports 8 to 16 Itanium 2 processors, up to 192 GB of memory and four to eight graphics pipelines. The Extreme level supports 16 to 256 Itanium 2 processors, up to 3 TB of memory and 4 to 16 graphics pipelines. The graphics pipelines for the Prism are ATI FireGL cards based on either the R350 or R420 GPUs.
https://en.wikipedia.org/wiki/Logical%20equality
Logical equality is a logical operator that corresponds to equality in Boolean algebra and to the logical biconditional in propositional calculus. It gives the functional value true if both functional arguments have the same logical value, and false if they are different. It is customary practice in various applications, if not always technically precise, to indicate the operation of logical equality on the logical operands x and y by any of the following forms: Some logicians, however, draw a firm distinction between a functional form, like those in the left column, which they interpret as an application of a function to a pair of arguments — and thus a mere indication that the value of the compound expression depends on the values of the component expressions — and an equational form, like those in the right column, which they interpret as an assertion that the arguments have equal values, in other words, that the functional value of the compound expression is true. Definition Logical equality is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false or both operands are true. The truth table of p EQ q (also written as p = q, p ↔ q, Epq, p ≡ q, or p == q) is as follows: {| class="wikitable" style="text-align:center" |+ Logical equality ! p ! q ! p = q |- | 0 || 0 || 1 |- | 0 || 1 || 0 |- | 1 || 0 || 0 |- | 1 || 1 || 1 |} Alternative descriptions The form (x = y) is equivalent to the form (x ∧ y) ∨ (¬x ∧ ¬y). For the operands x and y, the truth table of the logical equality operator is as follows: {| class="wikitable" ! colspan="2" rowspan="2" | !! colspan="2" | y |- ! T !! F |- ! rowspan="2" | x !! T | style="padding: 1em;" | T | style="padding: 1em;" | F |- ! F | style="padding: 1em;" | F | style="padding: 1em;" | T |} Inequality In mathematics, the plus sign "+" almost invariably indicates an operation that satisfies the axioms assigned to addition in the t
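The truth table and the alternative description (x ∧ y) ∨ (¬x ∧ ¬y) can be checked directly; this is a minimal sketch using Python's built-in booleans, with names of my own choosing:

```python
def eq(p: bool, q: bool) -> bool:
    # logical equality (XNOR): true iff both operands have the same truth value
    return p == q

# exhaustively check both alternative forms over all four input pairs
for p in (False, True):
    for q in (False, True):
        assert eq(p, q) == ((p and q) or (not p and not q))  # disjunctive form
        assert eq(p, q) == (not (p ^ q))                     # negated XOR
```

The second assertion reflects the remark under "Inequality": logical equality is the negation of exclusive or.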
https://en.wikipedia.org/wiki/Bit%20slicing
Bit slicing is a technique for constructing a processor from modules of processors of smaller bit width, for the purpose of increasing the word length; in theory, to make an arbitrary n-bit central processing unit (CPU). Each of these component modules processes one bit field or "slice" of an operand. The grouped processing components would then have the capability to process the chosen full word-length of a given software design. Bit slicing more or less died out due to the advent of the microprocessor. Recently it has been used in arithmetic logic units (ALUs) for quantum computers and as a software technique, e.g. for cryptography in x86 CPUs. Operational details Bit-slice processors (BSPs) usually include a 1-, 2-, 4-, 8- or 16-bit arithmetic logic unit (ALU) and control lines (including carry or overflow signals that are internal to the processor in non-bitsliced CPU designs). For example, two 4-bit ALU chips could be arranged side by side, with control lines between them, to form an 8-bit ALU (the result need not be a power of two; e.g., three 1-bit units can make a 3-bit ALU and thus a 3-bit, or in general n-bit, CPU, although no 3-bit CPU, or CPU of any higher odd bit width, has been manufactured and sold in volume). Four 4-bit ALU chips could be used to build a 16-bit ALU. It would take eight chips to build a 32-bit word ALU. The designer could add as many slices as required to manipulate longer word lengths. A microsequencer or control ROM would be used to execute logic to provide data and control signals to regulate the function of the component ALUs. Known bit-slice microprocessors: 2-bit slice: Intel 3000 family (1974, now discontinued), e.g. Intel 3002 with Intel 3001, second-sourced by Signetics and Intersil Signetics 8X02 family (1977, now discontinued) 4-bit slice: National IMP family, consisting primarily of the IMP-00A/520 RALU (also known as MM5750) and various masked ROM microcode and control chips (CROMs, also known as MM5751) National GPC/P / IMP-4 (1973), seco
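The carry chaining between slices described above can be sketched in software. This toy model (names and widths are illustrative, not any particular chip) combines two 4-bit slices into an 8-bit adder by routing the carry line from the low slice into the high slice:

```python
MASK4 = 0xF  # one 4-bit slice handles one nibble of the operand

def add4(a, b, carry_in):
    # one 4-bit "slice": returns (sum nibble, carry out)
    total = (a & MASK4) + (b & MASK4) + carry_in
    return total & MASK4, total >> 4

def add8(a, b):
    # chain two 4-bit slices via the carry control line to form an 8-bit adder
    lo, c = add4(a & MASK4, b & MASK4, 0)
    hi, c = add4((a >> 4) & MASK4, (b >> 4) & MASK4, c)
    return (c << 8) | (hi << 4) | lo

assert add8(0x7F, 0x01) == 0x80   # carry ripples from the low slice to the high
assert add8(0xFF, 0xFF) == 0x1FE  # final carry out becomes bit 8
```

Extending the word length is just a matter of chaining more `add4` calls, which mirrors how a hardware designer would add more ALU slices.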
https://en.wikipedia.org/wiki/Common%20beta%20emitters
Various radionuclides emit beta particles, high-speed electrons or positrons, through radioactive decay of their atomic nucleus. These can be used in a range of different industrial, scientific, and medical applications. This article lists some common beta-emitting radionuclides of technological importance, and their properties. Fission products Strontium Strontium-90 is a beta emitter commonly used in industrial sources. It decays to yttrium-90, which is itself a beta emitter. It is also used as a thermal power source in radioisotope thermoelectric generator (RTG) power packs. These use the heat produced by the radioactive decay of strontium-90, which can be converted to electricity using a thermocouple. Strontium-90 has a shorter half-life, produces less power, and requires more shielding than plutonium-238, but is cheaper as it is a fission product that is present in high concentration in nuclear waste and can be relatively easily chemically extracted. Strontium-90-based RTGs have been used to power remote lighthouses. As strontium is water-soluble, the perovskite form strontium titanate is usually employed, as it is not water-soluble and has a high melting point. Strontium-89 is a short-lived beta emitter which has been used as a treatment for bone tumors; it is used in palliative care in terminal cancer cases. Both strontium-89 and strontium-90 are fission products. Neutron activation products Tritium Tritium is a low-energy beta emitter commonly used as a radiotracer in research and in self-powered lighting. The half-life of tritium is 12.3 years. The electrons from beta emission from tritium are so low in energy (average decay energy 5.7 keV) that a Geiger counter cannot be used to detect them. An advantage of the low energy of the decay is that it is easy to shield, since the low-energy electrons penetrate only to shallow depths, reducing the safety issues when dealing with the isotope. Tritium can also be found in metal work in
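The half-life quoted for tritium translates directly into a decay fraction over time. This is a minimal sketch using only the 12.3-year figure from the text:

```python
HALF_LIFE_Y = 12.3  # tritium half-life in years (figure from the text)

def remaining_fraction(years):
    # exponential decay: N/N0 = 2^(-t / t_half)
    return 2 ** (-years / HALF_LIFE_Y)

# after one half-life, half of the original tritium remains
assert abs(remaining_fraction(HALF_LIFE_Y) - 0.5) < 1e-12
print(f"fraction remaining after 25 years: {remaining_fraction(25):.3f}")
```

This is why tritium-based self-powered lighting dims noticeably over a decade or two of service.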
https://en.wikipedia.org/wiki/Lancelot%20Hogben
Lancelot Thomas Hogben FRS FRSE (9 December 1895 – 22 August 1975) was a British experimental zoologist and medical statistician. He developed the African clawed frog (Xenopus laevis) as a model organism for biological research in his early career, attacked the eugenics movement in the middle of his career, and wrote popular books on science, mathematics and language in his later career. Early life and education Hogben was born and raised in Southsea near Portsmouth in Hampshire. His parents were Methodists. He attended Tottenham County School in London, his family having moved to Stoke Newington, where his mother had grown up, in 1907, and then as a medical student studied physiology at Trinity College, Cambridge. Hogben had matriculated into the University of London as an external student before he could apply to Cambridge and he graduated as a Bachelor of Science (BSc) in 1914. He took his Cambridge degree in 1915, graduating with an Ordinary BA. He had acquired socialist convictions, changing the name of the university's Fabian Society to the Socialist Society, and went on to become an active member of the Independent Labour Party. Later in life he preferred to describe himself as 'a scientific humanist'. In the First World War he was a pacifist, and joined the Quakers. He worked for six months with the Red Cross in France, under the auspices of the Friends' War Victims Relief Service and then the Friends' Ambulance Unit. He then returned to Cambridge, and was imprisoned in Wormwood Scrubs as a conscientious objector in 1916. His health collapsed and he was released in 1917. His brother George was also a conscientious objector, serving with the Friends' Ambulance Unit. Career After a year's convalescence he took lecturing positions in London universities and in 1921 he became a Doctor of Science (D.Sc.) in Zoology of the University of London. He moved in 1922 to the University of Edinburgh and its Animal Breeding Research Department. In 1923, Hogben was a founder
https://en.wikipedia.org/wiki/Tight%20binding
In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations. Introduction The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited. Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix element
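The simplest concrete instance of the model is the standard one-dimensional, nearest-neighbour chain, whose single band is set by one interatomic matrix element t. This is a minimal sketch (symbols and default values are illustrative): E(k) = ε − 2t cos(ka), with on-site energy ε, hopping t, and lattice constant a.

```python
import math

def dispersion(k, eps=0.0, t=1.0, a=1.0):
    # 1D nearest-neighbour tight-binding band: E(k) = eps - 2 t cos(k a)
    return eps - 2.0 * t * math.cos(k * a)

# band edges: E = eps - 2t at k = 0 and E = eps + 2t at the zone boundary
assert dispersion(0.0) == -2.0
assert abs(dispersion(math.pi) - 2.0) < 1e-12
# the bandwidth is 4t, directly controlled by the interatomic matrix element
assert abs(dispersion(math.pi) - dispersion(0.0) - 4.0) < 1e-12
```

The example shows the key qualitative point of the introduction: weak overlap with neighbouring atoms (small t) keeps the band narrow and the electron energies close to the atomic level ε.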
https://en.wikipedia.org/wiki/Degree%20of%20a%20continuous%20mapping
In topology, the degree of a continuous mapping between two compact oriented manifolds of the same dimension is a number that represents the number of times that the domain manifold wraps around the range manifold under the mapping. The degree is always an integer, but may be positive or negative depending on the orientations. The degree of a map was first defined by Brouwer, who showed that the degree is homotopy invariant (invariant among homotopies), and used it to prove the Brouwer fixed point theorem. In modern mathematics, the degree of a map plays an important role in topology and geometry. In physics, the degree of a continuous map (for instance a map from space to some order parameter set) is one example of a topological quantum number. Definitions of the degree From Sn to Sn The simplest and most important case is the degree of a continuous map from the n-sphere Sn to itself (in the case n = 1, this is called the winding number): Let f : Sn → Sn be a continuous map. Then f induces a homomorphism f∗ : Hn(Sn) → Hn(Sn), where Hn is the nth homology group. Considering the fact that Hn(Sn) ≅ Z, we see that f∗ must be of the form x ↦ αx for some fixed integer α. This α is then called the degree of f. Between manifolds Algebraic topology Let X and Y be closed connected oriented m-dimensional manifolds. Orientability of a manifold implies that its top homology group is isomorphic to Z. Choosing an orientation means choosing a generator of the top homology group. A continuous map f : X → Y induces a homomorphism f∗ from Hm(X) to Hm(Y). Let [X], resp. [Y] be the chosen generator of Hm(X), resp. Hm(Y) (or the fundamental class of X, Y). Then the degree of f is defined to be f∗([X]). In other words, f∗([X]) = deg(f) [Y]. If y is in Y and f −1(y) is a finite set, the degree of f can be computed by considering the m-th local homology groups of X at each point in f −1(y). Differential topology In the language of differential topology, the degree of a smooth map can be defined as follows: If f is a smooth map whose domain is a compact manifold and p
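In the circle case n = 1 the degree is the winding number, which can be estimated numerically by tracking the argument of f around the unit circle; the sampling scheme and tolerances below are my own choices for a sketch:

```python
import cmath

def winding_number(f, samples=1000):
    # accumulate the total change of arg(f(z)) as z traverses the unit
    # circle once; the winding number is that total divided by 2*pi
    total = 0.0
    prev = cmath.phase(f(1.0 + 0.0j))
    for i in range(1, samples + 1):
        z = cmath.exp(2j * cmath.pi * i / samples)
        cur = cmath.phase(f(z))
        d = cur - prev
        # unwrap jumps across the branch cut of phase()
        if d > cmath.pi:
            d -= 2 * cmath.pi
        elif d < -cmath.pi:
            d += 2 * cmath.pi
        total += d
        prev = cur
    return round(total / (2 * cmath.pi))

assert winding_number(lambda z: z**3) == 3    # wraps three times
assert winding_number(lambda z: z**-2) == -2  # orientation reversed
assert winding_number(lambda z: 5 + 0 * z) == 0  # constant map has degree 0
```

The examples match the homological picture above: z ↦ z^n induces multiplication by n on H1(S1) ≅ Z.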
https://en.wikipedia.org/wiki/Castelnuovo%E2%80%93de%20Franchis%20theorem
In mathematics, the Castelnuovo–de Franchis theorem is a classical result on complex algebraic surfaces. Let X be such a surface, projective and non-singular, and let ω1 and ω2 be two differentials of the first kind on X which are linearly independent but with wedge product 0. Then this data can be represented as a pullback of an algebraic curve: there is a non-singular algebraic curve C, a morphism φ: X → C, and differentials of the first kind ω′1 and ω′2 on C such that φ*(ω′1) = ω1 and φ*(ω′2) = ω2. This result is due to Guido Castelnuovo and Michele de Franchis (1875–1946). The converse, that two such pullbacks would have wedge product 0, is immediate. See also de Franchis theorem
https://en.wikipedia.org/wiki/Charge%20conservation
In physics, charge conservation is the principle that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved. Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between charge density and current density . This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons. Charged particles can be created and destroyed in elementary particle reactions. In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far. Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge. History Charge conservation was first proposed by British scientist William Watson in 1746 and American statesman and scientist Benjamin Franklin in 1747, although the first convincing proof was given by Michael Faraday in 1843. Formal statement of the law Mathematically, we can state
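The continuity equation relating charge density ρ and current density J, ∂ρ/∂t + ∇·J = 0, can be checked numerically in one dimension for a rigidly translating charge pulse; the pulse profile, grid, and tolerances below are illustrative assumptions:

```python
import numpy as np

# 1D check of d(rho)/dt + d(J)/dx = 0 for rho(x, t) = f(x - v t), J = v * rho
v = 1.0
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
dt = 1e-4

f = lambda u: np.exp(-u**2)        # smooth pulse profile (assumed)
rho_t = lambda t: f(x - v * t)     # charge density at time t

drho_dt = (rho_t(dt) - rho_t(-dt)) / (2 * dt)  # central difference in time
J = v * rho_t(0.0)
dJ_dx = np.gradient(J, dx)                     # central difference in space

# away from the grid edges the two terms cancel to discretization error
residual = np.max(np.abs(drho_dt + dJ_dx)[10:-10])
assert residual < 1e-3
print(f"max |d(rho)/dt + d(J)/dx| = {residual:.2e}")
```

A pulse that simply moves carries its charge with it, so the local rate of change of density is exactly balanced by the divergence of the current, which is the accounting relationship described above.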
https://en.wikipedia.org/wiki/De%20Franchis%20theorem
In mathematics, the de Franchis theorem is one of a number of closely related statements applying to compact Riemann surfaces, or, more generally, algebraic curves, X and Y, in the case of genus g > 1. The simplest is that the automorphism group of X is finite (see Hurwitz's automorphisms theorem). More generally, the set of non-constant morphisms from X to Y is finite; fixing X, for all but a finite number of such Y, there is no non-constant morphism from X to Y. These results are named for Michele de Franchis (1875–1946). It is sometimes referred to as the de Franchis–Severi theorem. It was used in an important way by Gerd Faltings to prove the Mordell conjecture. See also Castelnuovo–de Franchis theorem
https://en.wikipedia.org/wiki/Vascular%20tissue
Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant. The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well. Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium g
https://en.wikipedia.org/wiki/Tate%20twist
In number theory and algebraic geometry, the Tate twist, named after John Tate, is an operation on Galois modules. For example, if K is a field, GK is its absolute Galois group, and ρ : GK → AutQp(V) is a representation of GK on a finite-dimensional vector space V over the field Qp of p-adic numbers, then the Tate twist of V, denoted V(1), is the representation on the tensor product V⊗Qp(1), where Qp(1) is the p-adic cyclotomic character (i.e. the Tate module of the group of roots of unity in the separable closure Ks of K). More generally, if m is a positive integer, the mth Tate twist of V, denoted V(m), is the tensor product of V with the m-fold tensor product of Qp(1). Denoting by Qp(−1) the dual representation of Qp(1), the −mth Tate twist of V, denoted V(−m), can be defined as the tensor product of V with the m-fold tensor product of Qp(−1).
https://en.wikipedia.org/wiki/Noether%27s%20theorem%20on%20rationality%20for%20surfaces
In mathematics, Noether's theorem on rationality for surfaces is a classical result of Max Noether on complex algebraic surfaces, giving a criterion for a rational surface. Let S be an algebraic surface that is non-singular and projective. Suppose there is a morphism φ from S to the projective line, with general fibre also a projective line. Then the theorem states that S is rational. See also Hirzebruch surface List of complex and algebraic surfaces
https://en.wikipedia.org/wiki/Axiality%20and%20rhombicity
In physics and mathematics, axiality and rhombicity are two characteristics of a symmetric second-rank tensor in three-dimensional Euclidean space, describing its directional asymmetry. Let A denote a second-rank tensor in R3, which can be represented by a 3-by-3 matrix. We assume that A is symmetric. This implies that A has three real eigenvalues, which we denote by Axx, Ayy and Azz. We assume that they are ordered such that Axx ≤ Ayy ≤ Azz. The axiality of A is defined by Ax = 2Azz − (Axx + Ayy). The rhombicity is the difference between the smallest and the second-smallest eigenvalue: Rh = Ayy − Axx. Other definitions of axiality and rhombicity differ from the ones given above by constant factors which depend on the context. For example, when using them as parameters in the irreducible spherical tensor expansion, it is most convenient to divide each of the above definitions by a suitable constant. Applications The description of physical interactions in terms of axiality and rhombicity is frequently encountered in spin dynamics and, in particular, in spin relaxation theory, where many traceless bilinear interaction Hamiltonians, having the (eigenframe) form (hats denote spin projection operators) may be conveniently rotated using rank 2 irreducible spherical tensor operators: where are Wigner functions, are Euler angles, and the expressions for the rank 2 irreducible spherical tensor operators are: Defining Hamiltonian rotations in this way (axiality, rhombicity, three angles) significantly simplifies calculations, since the properties of Wigner functions are well understood.
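Under one common ordering and sign convention (conventions vary between sources, so the formulas in the sketch are assumptions), the two quantities can be computed from any symmetric 3-by-3 matrix:

```python
import numpy as np

def axiality_rhombicity(A):
    # eigenvalues of a symmetric 3x3 tensor, returned sorted ascending
    e = np.linalg.eigvalsh(A)          # e[0] <= e[1] <= e[2]
    ax = 2 * e[2] - (e[0] + e[1])      # axiality (unnormalized convention, assumed)
    rh = e[1] - e[0]                   # rhombicity: second-smallest minus smallest
    return ax, rh

# diagonal example: eigenvalues are just the diagonal entries
A = np.diag([1.0, 2.0, 5.0])
ax, rh = axiality_rhombicity(A)
assert ax == 7.0 and rh == 1.0
```

For an axially symmetric tensor (two equal small eigenvalues) the rhombicity vanishes, which is the directional-asymmetry interpretation given above.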
https://en.wikipedia.org/wiki/Celebratory%20gunfire
Celebratory gunfire is the shooting of a firearm into the air in celebration. It sometimes occurs in parts of the Balkans, Russia, the Middle East, South Asia, Latin America, the United States, and Ethiopia, even where illegal. Common occasions for celebratory gunfire include New Year's Day as well as religious holidays. The practice sometimes results in random death and injury from stray bullets. Property damage is another result of celebratory gunfire; shattered windows and damaged roofs are sometimes found after such celebrations. Injuries The speed of a falling bullet depends on the angle at which it is fired. A bullet fired nearly vertically will lose the most speed, usually falling at terminal velocity, which is much lower than its muzzle velocity. Despite this, people can still be injured or killed by bullets falling at this speed. If a bullet is fired at other angles, it maintains its angular ballistic trajectory and is far less likely to engage in tumbling motion; it therefore travels at speeds much higher than a bullet in free fall. Dense, streamlined bullets achieve higher terminal velocities than lighter, blunter bullets. Between 1918 and 1920, United States Army Ordnance Corps officer Julian Hatcher conducted experiments to determine the velocity of falling bullets, and calculated that .30 caliber rounds reach terminal velocities of 90 m/s (300 feet per second, or about 200 miles per hour). According to computer models, 9mm handgun rounds reach terminal velocities of between 150 and 250 feet per second. A bullet traveling at only 61 m/s (200 feet per second) to 100 m/s (330 feet per second) can penetrate human skin. Any gunfire can damage the hearing of those nearby without ear protection, and blank rounds fired in an unsafe direction can cause injuries or death from muzzle blast at close range, as in the case of actor Jon-Erik Hexum. Birdshot fired from a shotgun disperses and loses energy much faster than slugs, buckshot, or bullets fired from rifles and pistols. Alth
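To put the quoted speeds in perspective, a short sketch can compare the kinetic energy of a bullet at muzzle velocity with one falling at terminal velocity. The bullet mass and muzzle velocity below are illustrative assumptions for a typical 9mm round, not figures from the text:

```python
# Kinetic energy of a freshly fired vs. a falling bullet.
# Assumed: 8 g (0.008 kg) mass and 360 m/s muzzle velocity for a
# typical 9mm round -- illustrative values, not from the article.

def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy in joules: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

MASS = 0.008       # kg (assumed)
MUZZLE = 360.0     # m/s (assumed)
TERMINAL = 76.0    # m/s, roughly 250 ft/s, upper end of the quoted 9mm range

print(f"muzzle energy:  {kinetic_energy(MASS, MUZZLE):.0f} J")
print(f"falling energy: {kinetic_energy(MASS, TERMINAL):.0f} J")
```

Even the much smaller falling-bullet energy (a few tens of joules here) is comparable to a hard thrown stone, which is consistent with the article's point that falling rounds can still injure or kill.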
https://en.wikipedia.org/wiki/Fano%20variety
In algebraic geometry, a Fano variety, introduced by Gino Fano, is a complete variety X whose anticanonical bundle KX* is ample. In this definition, one could assume that X is smooth over a field, but the minimal model program has also led to the study of Fano varieties with various types of singularities, such as terminal or klt singularities. Recently, techniques in differential geometry have been applied to the study of Fano varieties over the complex numbers, and success has been found in constructing moduli spaces of Fano varieties and proving the existence of Kähler–Einstein metrics on them through the study of K-stability of Fano varieties. Examples The fundamental examples of Fano varieties are the projective spaces: the anticanonical line bundle of Pn over a field k is O(n+1), which is very ample (over the complex numbers, its curvature is n+1 times the Fubini–Study symplectic form). Let D be a smooth codimension-1 subvariety in Pn. The adjunction formula implies that KD = (KX + D)|D = (−(n+1)H + deg(D)H)|D, where H is the class of a hyperplane. The hypersurface D is therefore Fano if and only if deg(D) < n+1. More generally, a smooth complete intersection of hypersurfaces in n-dimensional projective space is Fano if and only if the sum of their degrees is at most n. Weighted projective space P(a0,...,an) is a singular (klt) Fano variety. This is the projective scheme associated to a graded polynomial ring whose generators have degrees a0,...,an. If this is well formed, in the sense that no n of the numbers a have a common factor greater than 1, then any complete intersection of hypersurfaces such that the sum of their degrees is less than a0+...+an is a Fano variety. Every projective variety in characteristic zero that is homogeneous under a linear algebraic group is Fano. Some properties The existence of some ample line bundle on X is equivalent to X being a projective variety, so a Fano variety is always projective. For a Fano variety X over t
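The degree criterion above can be checked mechanically. A small sketch (the function name is ours), encoding the statement that a smooth complete intersection in n-dimensional projective space is Fano exactly when the degrees of its defining hypersurfaces sum to at most n:

```python
# Degree test for smooth complete intersections in projective n-space:
# by the adjunction computation above, X = V(f_1, ..., f_k) in P^n is
# Fano exactly when deg(f_1) + ... + deg(f_k) <= n.

def is_fano_complete_intersection(n, degrees):
    """True if a smooth complete intersection of the given degrees in P^n is Fano."""
    return sum(degrees) <= n

print(is_fano_complete_intersection(3, [3]))      # cubic surface in P^3: True
print(is_fano_complete_intersection(4, [5]))      # quintic threefold in P^4: False
print(is_fano_complete_intersection(5, [2, 2]))   # intersection of two quadrics in P^5: True
```

The quintic threefold in P^4 fails the test (its canonical bundle is trivial rather than anti-ample), matching the boundary case deg(D) = n+1.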
https://en.wikipedia.org/wiki/Tolman%20surface%20brightness%20test
The Tolman surface brightness test is one out of six cosmological tests that were conceived in the 1930s to check the viability of and compare new cosmological models. Tolman's test compares the surface brightness of galaxies as a function of their redshift (measured as z). Such a comparison was first proposed in 1930 by Richard C. Tolman as a test of whether the universe is expanding or static. It is a unique test of cosmology, as it is independent of dark energy, dark matter and Hubble constant parameters, testing purely for whether cosmological redshift is caused by an expanding universe or not. In a simple (static and flat) universe, the light received from an object drops proportional to the square of its distance and the apparent area of the object also drops proportional to the square of the distance, so the surface brightness (light received per surface area) would be constant, independent of the distance. In an expanding universe, however, there are two effects that change this relation. First, the rate at which photons are received is reduced because each photon has to travel a little farther than the one before. Second, the energy of each photon observed is reduced by the redshift. At the same time, distant objects appear larger than they really are because the photons observed were emitted at a time when the object was closer. Adding these effects together, the surface brightness in a simple expanding universe (flat geometry and uniform expansion over the range of redshifts observed) should decrease with the fourth power of (1 + z). One of the earliest and most comprehensive studies was published in 1996, as observational requirements limited the practicality of the test until then. This test found consistency with an expanding universe. However, the authors there noted caveats. A later paper that reviewed this one removed their assumed expansion cosmology for calculating surface brightness (SB), to make for a fair test, and found that the 1996 results, once the correction was
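The combined dimming factor can be written as a one-line function. A minimal sketch (the function name is ours) of the (1 + z)^−4 scaling described above:

```python
# Tolman dimming: in an expanding universe the bolometric surface
# brightness of a galaxy scales as (1 + z)^-4, while in a static
# universe it would be independent of distance.

def tolman_dimming(z):
    """Surface-brightness ratio (observed / intrinsic) at redshift z."""
    return (1.0 + z) ** -4

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: surface brightness x {tolman_dimming(z):.4f}")
```

At z = 1 the predicted dimming is already a factor of 16, which is why the test discriminates so sharply between expanding and static models once distant galaxies can be measured.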
https://en.wikipedia.org/wiki/The%20Number%20Devil
The Number Devil: A Mathematical Adventure () is a book for children and young adults that explores mathematics. It was originally written in 1997 in German by Hans Magnus Enzensberger and illustrated by Rotraut Susanne Berner. The book follows a young boy named Robert, who is taught mathematics by a sly "number devil" called Teplotaxl over the course of twelve dreams. The book was met with mostly positive reviews from critics, who praised both its description of mathematics and its simplicity. Its colorful use of fictional mathematical terms and its creative descriptions of concepts have made it a suggested book for both children and adults who struggle with math. The Number Devil was a bestseller in Europe, and has been translated into English by Michael Henry Heim. Plot Robert is a young boy who suffers from mathematical anxiety due to his boredom in school. His mother is Mrs. Wilson. He also experiences recurring dreams—including falling down an endless slide or being eaten by a giant fish—but one night this pattern is interrupted by a small devil creature who introduces himself as the Number Devil. Although there are many Number Devils (from Number Heaven), Robert only knows him as the Number Devil before learning of his actual name, Teplotaxl, later in the story. Over the course of twelve dreams, the Number Devil teaches Robert mathematical principles. On the first night, the Number Devil appears to Robert in an oversized world and introduces the number one. The next night, the Number Devil emerges in a forest of trees shaped like "ones" and explains the necessity of the number zero, negative numbers, and introduces hopping, a fictional term to describe exponentiation. On the third night, the Number Devil brings Robert to a cave and reveals how prima-donna numbers (prime numbers) can only be divided by themselves and one without a remainder. Later, on the fourth night, the Number Devil teaches Robert about rutabagas, another fictional term to depict squar
https://en.wikipedia.org/wiki/List%20of%20large%20cardinal%20properties
This page includes a list of cardinals with large cardinal properties. It is arranged roughly in order of the consistency strength of the axiom asserting the existence of cardinals with the given property. Existence of a cardinal number κ of a given type implies the existence of cardinals of most of the types listed above that type, and for most listed cardinal descriptions φ of lesser consistency strength, Vκ satisfies "there is an unbounded class of cardinals satisfying φ". The following table usually arranges cardinals in order of consistency strength, with size of the cardinal used as a tiebreaker. In a few cases (such as strongly compact cardinals) the exact consistency strength is not known and the table uses the current best guess. "Small" cardinals: 0, 1, 2, ..., ,..., , ... (see Aleph number) worldly cardinals weakly and strongly inaccessible, α-inaccessible, and hyper inaccessible cardinals weakly and strongly Mahlo, α-Mahlo, and hyper Mahlo cardinals. reflecting cardinals weakly compact (= Π-indescribable), Π-indescribable, totally indescribable cardinals λ-unfoldable, unfoldable cardinals, ν-indescribable cardinals and λ-shrewd, shrewd cardinals (not clear how these relate to each other). ethereal cardinals, subtle cardinals almost ineffable, ineffable, n-ineffable, totally ineffable cardinals remarkable cardinals α-Erdős cardinals (for countable α), 0# (not a cardinal), γ-iterable, γ-Erdős cardinals (for uncountable γ) almost Ramsey, Jónsson, Rowbottom, Ramsey, ineffably Ramsey, completely Ramsey, strongly Ramsey, super Ramsey cardinals measurable cardinals, 0† λ-strong, strong cardinals, tall cardinals Woodin, weakly hyper-Woodin, Shelah, hyper-Woodin cardinals superstrong cardinals (=1-superstrong; for n-superstrong for n≥2 see further down.) subcompact, strongly compact (Woodin < strongly compact ≤ supercompact), supercompact, hypercompact cardinals η-extendible, extendible cardinals Vopěnka cardinals, Shelah for supercompactness,
https://en.wikipedia.org/wiki/Rotation%20around%20a%20fixed%20axis
Rotation around a fixed axis or axial rotation is a special case of rotational motion around an axis of rotation that is fixed, stationary, or static in three-dimensional space. This type of motion excludes the possibility of the instantaneous axis of rotation changing its orientation and cannot describe such phenomena as wobbling or precession. According to Euler's rotation theorem, simultaneous rotation about more than one stationary axis is impossible; if two rotations are forced at the same time, a new axis of rotation will result. This concept assumes that the rotation is also stable, such that no torque is required to keep it going. The kinematics and dynamics of rotation around a fixed axis of a rigid body are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes. Translation and rotation A rigid body is an object of a finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably. A change in the position of a particle in three-dimensional space can be completely specified by three coordinates. A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and circular motion. Pure
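The analogy with linear motion can be made concrete: for constant angular acceleration, the kinematic equations have exactly the same form as those for constant linear acceleration, with angle, angular velocity, and angular acceleration standing in for position, velocity, and acceleration. A minimal sketch (symbols and function names are ours):

```python
# Constant-angular-acceleration kinematics about a fixed axis,
# mirroring x = x0 + v0*t + 0.5*a*t^2 for linear motion.

def angular_position(theta0, omega0, alpha, t):
    """Angle (rad) after time t under constant angular acceleration alpha."""
    return theta0 + omega0 * t + 0.5 * alpha * t ** 2

def angular_velocity(omega0, alpha, t):
    """Angular velocity (rad/s) after time t."""
    return omega0 + alpha * t

# A wheel starting from rest with alpha = 2 rad/s^2, after 3 s:
print(angular_position(0.0, 0.0, 2.0, 3.0))  # 9.0 rad
print(angular_velocity(0.0, 2.0, 3.0))       # 6.0 rad/s
```

Each linear formula maps to its angular counterpart by the substitutions x → θ, v → ω, a → α, which is exactly the simplification the paragraph describes.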
https://en.wikipedia.org/wiki/Flooding%20%28psychology%29
Flooding, sometimes referred to as in vivo exposure therapy, is a form of behavior therapy and desensitization—or exposure therapy—based on the principles of respondent conditioning. As a psychotherapeutic technique, it is used to treat phobias and anxiety disorders including post-traumatic stress disorder. It works by exposing the patient to their painful memories, with the goal of reintegrating their repressed emotions with their current awareness. Flooding was invented by psychologist Thomas Stampfl in 1967. It is still used in behavior therapy today. Flooding is a psychotherapeutic method for overcoming phobias. In order to demonstrate the irrationality of the fear, a psychologist would put a person in a situation where they would face their phobia. Under controlled conditions and using psychologically-proven relaxation techniques, the subject attempts to replace their fear with relaxation. The experience can often be traumatic for a person, but may be necessary if the phobia is causing them significant life disturbances. The advantage to flooding is that it is quick and usually effective. There is, however, a possibility that a fear may spontaneously recur. This can be made less likely with systematic desensitization, another classical conditioning procedure for the elimination of phobias. How it works "Flooding" works on the principles of classical (respondent) conditioning, the form of associative learning described by Pavlov, in which patients change their behaviors to avoid negative stimuli. According to Pavlov, people can learn through associations, so if one has a phobia, it is because one associates the feared stimulus with a negative outcome. Flooding uses a technique based on Pavlov's classical conditioning that uses exposure. There are different forms of exposure, such as imaginal exposure, virtual reality exposure, and in vivo exposure. While systematic desensitization may use these other types of exposure, flooding uses in vivo exposure, act
https://en.wikipedia.org/wiki/Croatian%20Interdisciplinary%20Society
Croatian Interdisciplinary Society (, abbrev. HID) is a non-governmental organization operating in Croatia. It acts to promote interdisciplinary education and research, primarily but not exclusively in the domain of complex systems. They are a member society of the International Federation for Systems Research. Activities INDECS, a peer-reviewed scientific journal DECOS, Describing Complex Systems, an academic conference AtoS, a seminar for students Sustavi, a popular science magazine See also znanost.org External links Complex systems theory Scientific societies based in Croatia Systems science societies 2004 establishments in Croatia
https://en.wikipedia.org/wiki/Philip%20Verheyen
Philip Verheyen (Verrebroek, April 23, 1648 – Leuven, January 28, 1710) was a Flemish surgeon, anatomist and author. As the third child of seven, Verheyen was born in Verrebroek, in modern Belgium (most likely in his parents' house, standing on a small plot of owned land in the area called "Borring", close to the border with Meerdonk), to Thomas Verheyen and Joanna Goeman. He was baptized in the parish church of Verrebroek on 24 April 1648. Little is known of his childhood. As a young boy he was probably a cowherd, and it is assumed that he learned to read and write at the local parish school. Local folk tales claim that he had such a brilliant memory that he could recite the pastor's sermon after attending mass on Sunday. The pastor of the village took him under his wing and he was sent to Leuven in 1672 where he spent three years at Trinity College. Concluding his studies in the liberal arts in 1675, Verheyen went on to study theology with the intention of following in the footsteps of his mentor and joining the clergy to become a priest. It was at this crucial juncture that an illness resulted in the amputation of his left leg, rendering him unfit for the clergy. This event proved to be of utmost importance to the subsequent path he chose. Embarking on a career in medicine, he initially continued at Trinity College and from 1681 to 1683 studied in Leiden. He returned to Leuven in 1683, obtaining the doctorate in medicine there. He gave lessons in anatomy and surgery and also practiced medicine. As a result of his many publications, in a short period of time he acquired renown both in and outside the country. The year 1693 saw the first publication of his Corporis Humani Anatomia. Philip Verheyen died in 1710 and was buried in the churchyard of the church of St Michael in Leuven. Prior to his death, he had given orders for his body to be buried outside the church so as not to infect the building with "unwholesome vapours". In 1862 his native villag
https://en.wikipedia.org/wiki/Amphotropism
Amphotropism or amphotropic indicates that a pathogen like a virus or a bacterium has a wide host range and can infect more than one species or cell culture line. See also Tropism, a list of tropisms Ecotropism, indicating a narrow host range Ecology terminology
https://en.wikipedia.org/wiki/Jack%20Box
Jack Box (full name Jack I. Box or simply known as Jack) is the primary mascot of the Jack in the Box fast food restaurant chain. In television commercials, he is the founder, CEO and ad spokesman for the chain. His appearance is that of a typical male, with the exception of his spherical white head, blue dot eyes, conical black pointed nose and curvilinear red smile. He is most often seen wearing his trademark yellow clown cap and business suit. The company has used the Jack Box mascot in its advertising since 1994 and has won a number of advertising awards for the long campaign. History Prior to 1980, the chain used Jack as its symbol, which sat atop the drive-thru menus in the 1960s and early 1970s. Jack's head was also atop the large signs at each location. In 1980, the chain decided to establish a more "mature" image by introducing a wider variety of menu items and (most notably) discontinuing the use of Jack. A series of television commercials announced that "now we stand for great new food", as the commercials showed the dramatic destruction of the notorious clown heads (most commonly through explosion, also dropping them from a crane and launching them like a rocket). Throughout the late 1980s to the 1990s, Jack in the Box tried to position itself as a premium fast food alternative, with varying results. In 1993, a major food contamination crisis was linked to Jack in the Box restaurants. By 1994, a series of lawsuits and negative publicity took their toll and pushed the chain's corporate parent, Foodmaker Inc., to the verge of bankruptcy. In the short term, they decided to promote their initiatives on food safety. Management then approved a new guerrilla advertising campaign created by Richard "Rick" Sittig, then working at the TBWA\Chiat\Day ad agency in Santa Monica, California. The concept brought back the original company mascot, Jack, but now in the form of a savvy and no-nonsense businessman who happened to have an enormous round clown head.
https://en.wikipedia.org/wiki/Data%20Interchange%20Format
Data Interchange Format (.dif) is a text file format used to import/export single spreadsheets between spreadsheet programs. Applications that still support the DIF format are Collabora Online, Excel, Gnumeric, and LibreOffice Calc. Historical applications that used to support it until they became end of life or no longer acknowledge support of the format are dBase, FileMaker, Framework, Lotus 1-2-3, Multiplan, OpenOffice.org Calc and StarCalc. A limitation with DIF format is that it cannot handle multiple spreadsheets in a single workbook. Due to the similarity in abbreviation and in age (both date to the early 1980s), the DIF spreadsheet format is often confused with Navy DIF; Navy DIF, however, is an unrelated "document interchange format" for word processors. History DIF was developed by Software Arts, Inc. (the developers of the VisiCalc program) in the early 1980s. The specification was included in many copies of VisiCalc, and published in Byte Magazine. Bob Frankston developed the format, with input from others, including Mitch Kapor, who helped so that it could work with his VisiPlot program. (Kapor later went on to found Lotus and make Lotus 1-2-3 happen.) The specification was copyright 1981. DIF was a registered trademark of Software Arts Products Corp. (a legal name for Software Arts at the time). Syntax DIF stores everything in an ASCII text file to mitigate many cross-platform issues back in the days of its creation. However, modern spreadsheet software, e.g. OpenOffice.org Calc and Gnumeric, offers more character encodings for export/import. The file is divided into 2 sections: header and data. Everything in DIF is represented by a 2- or 3-line chunk. Headers get a 3-line chunk; data, 2. Header chunks start with a text identifier that is all caps, only alphabetic characters, and less than 32 letters. The following line must be a pair of numbers, and the third line must be a quoted string. On the other hand, data chunks start with a number pair and
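The chunk structure described above can be illustrated with a small writer. This is a sketch of the layout (three-line header chunks, then two-line data chunks with BOT/EOD markers), not a complete implementation of the specification; the generator string is left empty and string escaping is omitted:

```python
# Minimal DIF writer illustrating the 2-/3-line chunk structure:
# header chunks are identifier / number pair / quoted string, and
# data chunks are a number pair followed by a value line.
# Sketch only -- quoting/escaping edge cases are not handled.

def write_dif(rows):
    """Serialize a list of rows (numbers or strings) as a DIF string."""
    ncols = max(len(r) for r in rows)
    out = []
    # Header section: three-line chunks (TABLE version, vector/tuple counts).
    for ident, value in (("TABLE", 1), ("VECTORS", ncols),
                         ("TUPLES", len(rows)), ("DATA", 0)):
        out += [ident, f"0,{value}", '""']
    # Data section: two-line chunks.
    for row in rows:
        out += ["-1,0", "BOT"]                 # beginning of tuple (row)
        for cell in row:
            if isinstance(cell, str):
                out += ["1,0", f'"{cell}"']    # string value
            else:
                out += [f"0,{cell}", "V"]      # numeric value
    out += ["-1,0", "EOD"]                     # end of data
    return "\n".join(out) + "\n"

print(write_dif([["name", "score"], ["alice", 3]]))
```

Because every value occupies its own two-line chunk, the format is trivial to stream but verbose, one reason later interchange formats such as CSV displaced it.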
https://en.wikipedia.org/wiki/Runtime%20verification
Runtime verification is a computing system analysis and execution approach based on extracting information from a running system and using it to detect and possibly react to observed behaviors satisfying or violating certain properties. Some very particular properties, such as data-race and deadlock freedom, are typically desired to be satisfied by all systems and may be best implemented algorithmically. Other properties can be more conveniently captured as formal specifications. Runtime verification specifications are typically expressed in trace predicate formalisms, such as finite state machines, regular expressions, context-free patterns, linear temporal logics, etc., or extensions of these. This allows for a less ad-hoc approach than normal testing. However, any mechanism for monitoring an executing system is considered runtime verification, including verifying against test oracles and reference implementations. When formal requirements specifications are provided, monitors are synthesized from them and infused within the system by means of instrumentation. Runtime verification can be used for many purposes, such as security or safety policy monitoring, debugging, testing, verification, validation, profiling, fault protection, behavior modification (e.g., recovery), etc. Runtime verification avoids the complexity of traditional formal verification techniques, such as model checking and theorem proving, by analyzing only one or a few execution traces and by working directly with the actual system, thus scaling up relatively well and giving more confidence in the results of the analysis (because it avoids the tedious and error-prone step of formally modelling the system), at the expense of less coverage. Moreover, through its reflective capabilities runtime verification can be made an integral part of the target system, monitoring and guiding its execution during deployment. History and context Checking formally or informally specified properties again
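As a minimal illustration of a monitor synthesized from a finite-state specification (the example and class names are ours, not from any particular runtime-verification tool), the following checks an event trace against the property "no lock is acquired twice without an intervening release":

```python
# A tiny runtime monitor: a two-state machine checking the property
# "no double acquire without a release" over a stream of events.
# Illustrative sketch; real tools instrument the running system to
# feed events to monitors like this one.

class LockMonitor:
    def __init__(self):
        self.held = False        # current state of the state machine
        self.violation = False   # latched once the property is broken

    def observe(self, event):
        """Feed one event ('acquire' or 'release') from the running system."""
        if event == "acquire":
            if self.held:
                self.violation = True   # double acquire: property violated
            self.held = True
        elif event == "release":
            self.held = False

def check(trace):
    """Return True if the whole trace satisfies the property."""
    monitor = LockMonitor()
    for event in trace:
        monitor.observe(event)
    return not monitor.violation

print(check(["acquire", "release", "acquire", "release"]))  # True
print(check(["acquire", "acquire"]))                        # False
```

Note that the monitor only judges the one trace it observes, which is exactly the trade-off the article describes: far cheaper than model checking, but with correspondingly less coverage.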
https://en.wikipedia.org/wiki/Mrs.%20Butterworth%27s
Mrs. Butterworth's is an American brand of table syrups and pancake mixes owned by Conagra Brands. The syrups come in distinctive bottles shaped as the character "Mrs. Butterworth", represented in the form of a matronly woman. The syrup was introduced in 1961. In 1999, the original glass bottles began to be replaced with plastic. In 2009, the character was given the first name "Joy" following a contest held by the company. Advertising One of the main voice actresses for Mrs. Butterworth was Mary Kay Bergman. she was also voiced by Hope Summers during the early to late 1970s. Kim Fields appeared in a commercial for the product during the late-1970s. In 2007, Mrs. Butterworth was used in a series of ads for GEICO, in which she helped an actual customer with her testimonial. In 2019, she appeared along with an actor playing Colonel Sanders in a KFC commercial spoofing a scene from Dirty Dancing, promoting chicken and waffles using Mrs. Butterworth's syrup. Controversy In 2020, following protests over systemic racism, Conagra Brands announced that it would review the shape of their bottles, as critics viewed them as an example of the "mammy" stereotype. A competing brand, Aunt Jemima, revamped its brand and advertising following the attention on negative black stereotypes. In ads, Mrs. Butterworth's voice has evoked a grandmotherly white woman, and she has been portrayed by white voice actresses. Despite this, some reports had claimed, without citing any sources, that the character was originally modeled on Butterfly McQueen, a black actress who appeared as the maid in Gone with the Wind (1939). As of August 2023, Mrs. Butterworth’s syrup is still being sold with the familiar bottle shape, despite the “brand review” Conagra announced it would conduct back in 2020. According to Dieline, “It’s unclear if Mrs. Butterworth’s brand review is still ongoing, what conclusions Conagra reached, or what changes they intend to make.” In popular culture In 2005, Chicago
https://en.wikipedia.org/wiki/Kerala%20school%20of%20astronomy%20and%20mathematics
The Kerala school of astronomy and mathematics or the Kerala school was a school of mathematics and astronomy founded by Madhava of Sangamagrama in Tirur, Malappuram, Kerala, India, which included among its members: Parameshvara, Neelakanta Somayaji, Jyeshtadeva, Achyuta Pisharati, Melpathur Narayana Bhattathiri and Achyuta Panikkar. The school flourished between the 14th and 16th centuries and its original discoveries seem to have ended with Narayana Bhattathiri (1559–1632). In attempting to solve astronomical problems, the Kerala school independently discovered a number of important mathematical concepts. Their most important results—series expansion for trigonometric functions—were described in Sanskrit verse in a book by Neelakanta called Tantrasangraha, and again in a commentary on this work, called Tantrasangraha-vakhya, of unknown authorship. The theorems were stated without proof, but proofs for the series for sine, cosine, and inverse tangent were provided a century later in the work Yuktibhasa (), written in Malayalam, by Jyesthadeva, and also in a commentary on Tantrasangraha. Their work, completed two centuries before the invention of calculus in Europe, provided what is now considered the first example of a power series (apart from geometric series). Background Islamic scholars nearly developed a general formula for finding integrals of polynomials by 1000 AD, and evidently could find such a formula for any polynomial in which they were interested. But, it appears, they were not interested in any polynomial of degree higher than four, at least in any of the material that has come down to us. Indian scholars, on the other hand, were by the year 1600 able to use formulas similar to ibn al-Haytham's sum formula for arbitrary integral powers in calculating power series for the functions in which they were interested. By the same time, they also knew how to calculate the differentials of these functions. So some of the basic ideas of calculus were know
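The sine series described above, sin(x) = x − x³/3! + x⁵/5! − ..., can be summed directly. A short sketch (the function name is ours):

```python
# The power series for sine discussed above,
#   sin(x) = x - x^3/3! + x^5/5! - ...,
# summed to a given number of terms.

import math

def sine_series(x, terms=10):
    """Approximate sin(x) from the first `terms` terms of its power series."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

print(sine_series(math.pi / 4))   # ~0.7071, agreeing with math.sin(pi/4)
```

For moderate arguments the series converges very quickly (each term shrinks factorially), which is what made hand computation of accurate sine tables feasible for the Kerala astronomers.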
https://en.wikipedia.org/wiki/Super%20I/O
Super I/O is a class of I/O controller integrated circuits that began to be used on personal computer motherboards in the late 1980s, originally as add-in cards, later embedded on the motherboards. A super I/O chip combines interfaces for a variety of low-bandwidth devices. Nowadays its functions are mostly merged into the embedded controller (EC). The functions below are usually provided by the super I/O if they are on the motherboard: A floppy-disk controller An IEEE 1284-compatible parallel port (commonly used for printers) One or more 16C550-compatible serial port UARTs Keyboard controller for PS/2 keyboard and/or mouse Most Super I/O chips include some additional low-speed devices, such as: Temperature, voltage, and fan speed interface Thermal Zone Chassis intrusion detection Mainboard power management LED management PWM fan speed control An IrDA Port controller A game port (no longer provided by recent super I/O chips, because Windows XP was the last Windows OS to support a game port unless the vendor supplies a custom driver) A watchdog timer A consumer IR receiver A MIDI port Some GPIO pins Legacy Plug and Play or ACPI support for the included devices By combining many functions in a single chip, the number of parts needed on a motherboard is reduced, thus reducing the cost of production. The original super I/O chips communicated with the central processing unit via the ISA bus. With the evolution away from ISA towards use of the PCI bus, the Super I/O chip was often the biggest remaining reason for continuing inclusion of ISA on the motherboard. Later super I/O chips use the LPC bus instead of ISA for communication with the central processing unit. This normally occurs through an LPC interface on the southbridge chip of the motherboard. Since Intel is replacing the LPC bus with the eSPI bus, super I/O chips that connect to that bus have appeared on the market. Companies that make super I/O controllers include Nuvoton (which has incorporated Winbond), Fintek Inc., ENE
https://en.wikipedia.org/wiki/Diversity%20index
A diversity index is a quantitative measure that reflects how many different types (such as species) there are in a dataset (a community), and that can simultaneously take into account the phylogenetic relations among the individuals distributed among those types, such as richness, divergence or evenness. These indices are statistical representations of biodiversity in different aspects (richness, evenness, and dominance). Effective number of species or Hill numbers When diversity indices are used in ecology, the types of interest are usually species, but they can also be other categories, such as genera, families, functional types, or haplotypes. The entities of interest are usually individual plants or animals, and the measure of abundance can be, for example, number of individuals, biomass or coverage. In demography, the entities of interest can be people, and the types of interest various demographic groups. In information science, the entities can be characters and the types the different letters of the alphabet. The most commonly used diversity indices are simple transformations of the effective number of types (also known as 'true diversity'), but each diversity index can also be interpreted in its own right as a measure corresponding to some real phenomenon (but a different one for each diversity index). Many indices only account for categorical diversity between subjects or entities. Such indices, however, do not account for the total variation (diversity) that can be held between subjects or entities, which occurs only when both categorical and qualitative diversity are calculated. True diversity, or the effective number of types, refers to the number of equally abundant types needed for the average proportional abundance of the types to equal that observed in the dataset of interest (where all types may not be equally abundant). The true diversity in a dataset is calculated by first taking the weighted generalized mean of the proportional abundance
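One common instance of an effective number of types is the Hill number of order 1, the exponential of the Shannon entropy of the proportional abundances. A minimal sketch (the function name is ours) showing how it behaves as "true diversity":

```python
# Effective number of species (Hill number of order 1): the exponential
# of the Shannon entropy of the proportional abundances. A community of
# n equally abundant species comes out as n effective species.

import math

def effective_species(abundances):
    """Hill number of order 1 from a list of abundances (any positive weights)."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    shannon = -sum(p * math.log(p) for p in props)
    return math.exp(shannon)

print(effective_species([10, 10, 10, 10]))  # ~4: four equally common species
print(effective_species([97, 1, 1, 1]))     # close to 1: one dominant species
```

The second community also contains four species, but its effective number is close to 1 because one species dominates, illustrating how evenness, not just richness, enters the measure.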
https://en.wikipedia.org/wiki/Ostwald%E2%80%93Freundlich%20equation
The Ostwald–Freundlich equation governs boundaries between two phases; specifically, it relates the surface tension of the boundary to its curvature, the ambient temperature, and the vapor pressure or chemical potential in the two phases. The Ostwald–Freundlich equation for a droplet or particle with radius r is: p = p_eq exp(2γV / (k_B T r)), where V = atomic volume, k_B = Boltzmann constant, γ = surface tension (J m−2), p_eq = equilibrium partial pressure (or chemical potential or concentration), p = partial pressure (or chemical potential or concentration), and T = absolute temperature. One consequence of this relation is that small liquid droplets (i.e., particles with a high surface curvature) exhibit a higher effective vapor pressure, since the surface is larger in comparison to the volume. Another notable example of this relation is Ostwald ripening, in which surface tension causes small precipitates to dissolve and larger ones to grow. Ostwald ripening is thought to occur in the formation of orthoclase megacrysts in granites as a consequence of subsolidus growth. See rock microstructure for more. History In 1871, Lord Kelvin (William Thomson) obtained the following relation governing a liquid-vapor interface: p(r1, r2) = P − (γ ρ_vapor / (ρ_liquid − ρ_vapor)) (1/r1 + 1/r2), where: p(r1, r2) = vapor pressure at a curved interface with principal radii of curvature r1 and r2; P = vapor pressure at a flat interface (r1 = r2 = ∞); γ = surface tension; ρ_vapor = density of vapor; ρ_liquid = density of liquid; r1, r2 = radii of curvature along the principal sections of the curved interface. In his dissertation of 1885, Robert von Helmholtz (son of the German physicist Hermann von Helmholtz) derived the Ostwald–Freundlich equation and showed that Kelvin's equation could be transformed into the Ostwald–Freundlich equation. The German physical chemist Wilhelm Ostwald derived the equation apparently independently in 1900; however, his derivation contained a minor error which the German chemist Herbert Freundlich corrected in 1909. Derivation from Kelvin's equation According to Lord Kelvin's equation of 1871, If the particle is assumed to be
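The size dependence of the vapor pressure can be evaluated numerically. A short sketch of the exponential factor p/p_eq = exp(2γV/(k_B T r)); the surface tension and molecular volume below are assumed water-like values at room temperature, chosen for illustration rather than taken from the text:

```python
# Ostwald-Freundlich / Kelvin effect sketch: vapor-pressure enhancement
# p / p_eq = exp(2*gamma*V / (k_B*T*r)) for a small spherical droplet.
# Water-like parameters are assumed for illustration only.

import math

K_B = 1.380649e-23   # J/K, Boltzmann constant

def pressure_ratio(gamma, v_molecule, temperature, radius):
    """p / p_eq for a droplet of the given radius (all SI units)."""
    return math.exp(2 * gamma * v_molecule / (K_B * temperature * radius))

# Water near 298 K: gamma ~ 0.072 J/m^2, molecular volume ~ 3.0e-29 m^3.
for r in (1e-6, 1e-8, 1e-9):
    print(f"r = {r:.0e} m: p/p_eq = {pressure_ratio(0.072, 3.0e-29, 298.0, r):.3f}")
```

The enhancement is negligible for micrometre droplets but becomes large at nanometre scales, which is the driving force behind the Ostwald ripening described above: the smallest particles have the highest effective vapor pressure and dissolve first.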
https://en.wikipedia.org/wiki/K%C3%B6the%20conjecture
In mathematics, the Köthe conjecture is a problem in ring theory that remains open. It is formulated in various ways. Suppose that R is a ring. One way to state the conjecture is that if R has no nil ideal, other than {0}, then it has no nil one-sided ideal, other than {0}. This question was posed in 1930 by Gottfried Köthe (1905–1989). The Köthe conjecture has been shown to be true for various classes of rings, such as polynomial identity rings and right Noetherian rings, but a general solution remains elusive.

Equivalent formulations

The conjecture has several different formulations:
(Köthe conjecture) In any ring, the sum of two nil left ideals is nil.
In any ring, the sum of two one-sided nil ideals is nil.
In any ring, every nil left or right ideal of the ring is contained in the upper nil radical of the ring.
For any ring R and for any nil ideal J of R, the matrix ideal Mn(J) is a nil ideal of Mn(R) for every n.
For any ring R and for any nil ideal J of R, the matrix ideal M2(J) is a nil ideal of M2(R).
For any ring R, the upper nilradical of Mn(R) is the set of matrices with entries from the upper nilradical of R for every positive integer n.
For any ring R and for any nil ideal J of R, the polynomials with indeterminate x and coefficients from J lie in the Jacobson radical of the polynomial ring R[x].
For any ring R, the Jacobson radical of R[x] consists of the polynomials with coefficients from the upper nilradical of R.

Related problems

A conjecture by Amitsur read: "If J is a nil ideal in R, then J[x] is a nil ideal of the polynomial ring R[x]." This conjecture, if true, would have proved the Köthe conjecture through the equivalent statements above; however, a counterexample was produced by Agata Smoktunowicz. While not a disproof of the Köthe conjecture, this fueled suspicions that the Köthe conjecture may be false. Kegel proved that a ring which is the direct sum of two nilpotent subrings is itself nilpotent. The question arose whether or not "nilpoten
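One of the equivalent formulations concerns matrix ideals Mn(J) over a nil ideal J. For a *nilpotent* ideal the corresponding statement is an easy theorem, since Mn(J)^k ⊆ Mn(J^k); the open question is about nil ideals that need not be nilpotent. The sketch below illustrates the easy case numerically for J the strictly upper-triangular ideal of 2×2 upper-triangular matrices, where J² = 0, so every element of M₂(J) squares to zero. The setup and names are my own illustration, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)


def strictly_upper_2x2():
    """Random element of J, the strictly upper-triangular ideal of the ring
    of 2x2 upper-triangular matrices. J is nilpotent (hence nil): J^2 = 0."""
    m = np.zeros((2, 2))
    m[0, 1] = rng.integers(-5, 6)
    return m


# An element of M_2(J): a 2x2 matrix whose entries are elements of J,
# realized concretely as a 4x4 block matrix.
blocks = [[strictly_upper_2x2() for _ in range(2)] for _ in range(2)]
A = np.block(blocks)

# Each entry of A @ A is a sum of products of two elements of J, and
# J^2 = 0, so A is nilpotent with A^2 = 0.
print(np.count_nonzero(A @ A))  # 0
```

The argument generalizes verbatim: if J^k = 0 then (Mn(J))^k = 0. What the conjecture asks is whether nilpotency of each individual element of J (with no uniform bound) is already enough to make every element of Mn(J) nilpotent.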
https://en.wikipedia.org/wiki/Timeline%20of%20entomology%20since%201900
1900
Walter Reed, a United States Army major, was appointed president of a board "to study infectious diseases in Cuba paying particular attention to yellow fever." He concurred with Carlos Finlay in identifying mosquitoes as the agent.
Ignacio Bolívar y Urrutia publishes Catálogo sinóptico de los ortópteros de la fauna ibérica.
Kálmán Kertész, Mario Bezzi, Paul Stein (entomologist) and Theodor Becker published the first part of a Palaearctic catalogue of Diptera, Katalog der paläarktischen Dipteren, in Budapest.

1901
William Francis de Vismes Kane publishes A catalogue of the Lepidoptera of Ireland, the third (and first comprehensive) catalogue of the Irish macrolepidoptera.
Augustus Daniel Imms' General Textbook of Entomology published; its 10th revised edition (1977) is still one of the most widely used of all insect texts.
Thomas Hunt Morgan is the first to conduct genetic research with the fruit fly Drosophila melanogaster, in the Fly Room at Columbia University.

1902
Ronald Ross gained the Nobel Prize for Medicine for his discovery that malaria is carried by mosquitoes. The awarding committee made special mention of the work of Giovanni Battista Grassi on the life history of the Plasmodium parasite.
Charles W. Woodworth's A List of the Insects of California published.
Philogene Auguste Galilee Wytsman started Genera Insectorum, a multi-authored series that consisted of 219 issues, the last occurring in 1970.
Otto Schmiedeknecht publishes Opuscula Ichneumonologica. Blankenburg.
William Morton Wheeler appointed curator of invertebrate zoology in the American Museum of Natural History, New York.
August Arthur Petry publishes Ueber die deutschen an Artemisia lebenden Arten der Gattung Bucculatrix Z. nebst Beschreibung einer neuen Art in Deutsche entomologische Zeitschrift Iris.
Peter Esben-Petersen publishes Bidrag til en Fortegnelse over Arktisk Norges Neuropterfauna.

1905
Adolfo Lutz Beitraege zur Kenntniss der brasilianischen Tabaniden. Rev. Soc. Sci. São Paulo 1: 19–32, published Rap
https://en.wikipedia.org/wiki/List%20of%20Fibre%20Channel%20switches
Major manufacturers of Fibre Channel switches are:

Brocade (Broadcom):
Switches: G630, G620, G610, 6520, 6510, 6505, 5300, 5100, VA-40FC, 5000, 4900, 2400, 2800, 3800, 3900, 4100, 300, 200E
Directors: DCX X6-8, DCX X6-4, DCX 8510-8, DCX 8510-4, DCX, DCX-4S, 48000, 24000, 12000
More complete list in the Brocade Communications Systems article.

Cisco:
Switches: Cisco MDS 9020, 9120, 9124, 9124e, 9134, 9140, 9148, 9216, 9216A, 9216i, 9222i, 9250i, 9148S, 914T, 9396S, 9396T; Nexus 5672UP, 5672UP-16G
Directors: Cisco MDS 9506, 9509, 9513, 9706, 9710 and 9718

Juniper Networks:
Switches: QFabric QFX3500-48S4Q-ACR, QFX3008-CHASA-BASE, QFX3008-SF16Q, QFX3100-GBE-ACR

McDATA (acquired and rebranded by Brocade, now Broadcom):
Switches: 3232, 4500, 4700
Directors: 6064, 6140, 10000

QLogic (Marvell):
Switches: SANbox 5800, 5802, 5600, 5602, 5200, 3050, 1400
Directors / Modular Chassis Switches: SANbox 9000
https://en.wikipedia.org/wiki/Sandin%20Image%20Processor
The Sandin Image Processor is a video synthesizer invented by Dan Sandin and designed between 1971 and 1974. Some called it the "video equivalent of a Moog audio synthesizer": it accepted basic video signals and mixed and modified them in a fashion similar to what a Moog synthesizer did with audio. An analog, modular, real-time video-processing instrument, it produced subtle and delicate video effects of a complexity not seen again until well into the digital video revolution. Its real-time nature led to its use in live theater performance, including "Electronic Visualization Events" where it was seen processing the output of Tom DeFanti's Graphics Symbiosis System. The Sandin Image Processor fostered many imaginative videotapes, seen for example at early SIGGRAPH conferences. Sandin's instrument, and his personally delivered instruction in video, trained many of the people who were later to engineer the digital video revolution.

Physically, an Image Processor system would be built out of modules. Several types of modules were defined; a typical module was an aluminum box containing a circuit board, with video connectors and knobs on the front and a power connector on the back. The modules would be organized in rows. Individual systems could vary in size and increase in power with the addition of more modules. Typical modules included signal sources, combiners and modifiers, effects modules, sync, color encoder, color decoder, and NTSC video interface.

Sandin was an advocate of education and a "copy it right distribution religion". Accordingly, he placed the circuit board layouts with a commercial circuit board company where anyone could buy them for ordinary manufacturing costs, and freely published schematics and other documentation. A following of video artists, students, and others interested in video electronics would assemble these modules kit style and try to build up the
https://en.wikipedia.org/wiki/Ruziewicz%20problem
In mathematics, the Ruziewicz problem (sometimes Banach–Ruziewicz problem) in measure theory asks whether the usual Lebesgue measure on the n-sphere is characterised, up to proportionality, by its properties of being finitely additive, invariant under rotations, and defined on all Lebesgue measurable sets. This was answered affirmatively and independently for n ≥ 4 by Grigory Margulis and Dennis Sullivan around 1980, and for n = 2 and 3 by Vladimir Drinfeld (published 1984). It fails for the circle. The problem is named after Stanisław Ruziewicz.
https://en.wikipedia.org/wiki/Ian%20G.%20Macdonald
Ian Grant Macdonald (11 October 1928 – 8 August 2023) was a British mathematician known for his contributions to symmetric functions, special functions, Lie algebra theory and other aspects of algebra, algebraic combinatorics, and combinatorics.

Early life and education

Born in London, he was educated at Winchester College and Trinity College, Cambridge, graduating in 1952.

Career

He then spent five years as a civil servant. He was offered a position at Manchester University in 1957 by Max Newman, on the basis of work he had done while outside academia. In 1960 he moved to the University of Exeter, and in 1963 became a Fellow of Magdalen College, Oxford. Macdonald became Fielden Professor at Manchester in 1972, and professor at Queen Mary College, University of London, in 1976.

He worked on symmetric products of algebraic curves, Jordan algebras and the representation theory of groups over local fields. In 1972 he proved the Macdonald identities, after a pattern known to Freeman Dyson. His 1979 book Symmetric Functions and Hall Polynomials has become a classic. Symmetric functions are an old theory, part of the theory of equations, to which both K-theory and representation theory lead. His was the first text to integrate much classical theory, such as Hall polynomials, Schur functions, the Littlewood–Richardson rule, with the abstract algebra approach. It was both an expository work and, in part, a research monograph, and had a major impact in the field. The Macdonald polynomials are now named after him. The Macdonald conjectures from 1982 also proved most influential.

Macdonald was elected a Fellow of the Royal Society in 1979. He was an invited speaker in 1970 at the International Congress of Mathematicians (ICM) in Nice and a plenary speaker in 1998 at the ICM in Berlin. In 1991 he received the Pólya Prize of the London Mathematical Society. He was awarded the 2009 Steele Prize for Mathematical Exposition. In 2012 he became a fellow of the American Mathemati
https://en.wikipedia.org/wiki/Radiation%20chemistry
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide.

Radiation interactions with matter

As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system.

Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation.

An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usuall
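Radical yields of the kind described above are conventionally book-kept with G-values, the number of species produced per 100 eV of absorbed energy. A minimal sketch of that bookkeeping follows; the G-value used (roughly 2.7 per 100 eV, a typical low-LET figure for the hydrated electron in water) is an illustrative assumption, not a number from the article.

```python
AVOGADRO = 6.02214076e23     # species per mole
EV_TO_J = 1.602176634e-19    # joules per electronvolt


def radicals_per_kg(dose_gy, g_value_per_100ev):
    """Number of radical species produced per kg of absorber.

    1 Gy = 1 J of energy absorbed per kg of material.
    """
    energy_in_100ev_units = dose_gy / (100 * EV_TO_J)
    return energy_in_100ev_units * g_value_per_100ev


def molar_yield(dose_gy, g_value_per_100ev, density_kg_per_l=1.0):
    """Concentration (mol/L) of product after a given absorbed dose."""
    return radicals_per_kg(dose_gy, g_value_per_100ev) * density_kg_per_l / AVOGADRO


# Illustrative: hydrated electrons in water, G ~ 2.7 per 100 eV (low LET)
print(f"{molar_yield(10.0, 2.7):.3e} mol/L after a 10 Gy dose")
```

Even a sizeable 10 Gy dose yields only micromolar radical concentrations, which is why the spatial distribution of energy deposition (the LET) matters so much: low-LET radiation produces isolated, dilute radical pockets, while high-LET tracks concentrate the same species into dense columns where radical-radical reactions dominate.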