https://en.wikipedia.org/wiki/Electron%20tomography
Electron tomography (ET) is a tomography technique for obtaining detailed 3D structures of sub-cellular, macro-molecular, or materials specimens. Electron tomography is an extension of traditional transmission electron microscopy and uses a transmission electron microscope to collect the data. In the process, a beam of electrons is passed through the sample at incremental degrees of rotation around the center of the target sample. This information is collected and used to assemble a three-dimensional image of the target. For biological applications, the typical resolution of ET systems is in the 5–20 nm range, suitable for examining supra-molecular multi-protein structures, although not the secondary and tertiary structure of an individual protein or polypeptide. Recently, atomic resolution in 3D electron tomography reconstructions has been demonstrated. BF-TEM and ADF-STEM tomography In the field of biology, bright-field transmission electron microscopy (BF-TEM) and high-resolution TEM (HRTEM) are the primary imaging methods for tomography tilt series acquisition. However, there are two issues associated with BF-TEM and HRTEM. First, acquiring an interpretable 3-D tomogram requires that the projected image intensities vary monotonically with material thickness. This condition is difficult to guarantee in BF/HRTEM, where image intensities are dominated by phase-contrast with the potential for multiple contrast reversals with thickness, making it difficult to distinguish voids from high-density inclusions. Second, the contrast transfer function of BF-TEM is essentially a high-pass filter – information at low spatial frequencies is significantly suppressed – resulting in an exaggeration of sharp features. However, the technique of annular dark-field scanning transmission electron microscopy (ADF-STEM), which is typically used on material specimens, more effectively suppresses phase and diffraction contrast, providing image intensities that vary with the projected mass.
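The tilt-and-project acquisition described above can be illustrated with a minimal, unfiltered back-projection sketch in Python (a deliberate simplification: real ET pipelines reconstruct in 3D, use filtered back-projection or iterative solvers, and must cope with a limited tilt range and noise; numpy and scipy are assumed to be available):

    import numpy as np
    from scipy.ndimage import rotate

    # Toy 2D phantom: a square specimen with a denser inclusion.
    phantom = np.zeros((64, 64))
    phantom[20:44, 20:44] = 1.0
    phantom[30:34, 30:34] = 3.0

    angles = np.arange(0, 180, 2)  # incremental tilt angles, degrees

    # Forward model: rotate the specimen, then sum along the beam axis.
    sinogram = np.array([
        rotate(phantom, a, reshape=False, order=1).sum(axis=0)
        for a in angles
    ])

    # Unfiltered back-projection: smear each projection back across the
    # image at its acquisition angle and average the results.
    recon = np.zeros_like(phantom)
    for a, proj in zip(angles, sinogram):
        recon += rotate(np.tile(proj, (64, 1)), -a, reshape=False, order=1)
    recon /= len(angles)
    # recon is a blurred rendition of phantom; filtering would sharpen it.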
https://en.wikipedia.org/wiki/Frequency%20scaling
In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004. The effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime: runtime = (instructions per program) × (cycles per instruction) × (time per cycle), where instructions per program is the total instructions being executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency. An increase in frequency thus decreases runtime. However, power consumption in a chip is given by the equation P = C × V² × F, where P is power consumption, C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. Moore's Law was still in effect when frequency scaling ended. Despite power issues, transistor densities were still doubling every 18 to 24 months. With the end of frequency scaling, new transistors (which are no longer needed to facilitate frequency scaling) are used to add extra hardware, such as additional cores, to facilitate parallel computing - a technique that is being referred to as parallel scaling. The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors. See also Dynamic frequency scaling Overclocking Underclocking Voltage scaling
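Both relationships are easy to play with numerically; the sketch below uses illustrative constants, not measured values, and shows runtime falling linearly with frequency while dynamic power rises linearly at fixed voltage:

    # Runtime and dynamic-power model from the text, with assumed values.
    instructions = 1e9   # instructions per program
    cpi = 1.5            # cycles per instruction (program/architecture dependent)
    C = 1e-9             # switched capacitance per cycle, farads (assumed)
    V = 1.2              # supply voltage, volts (assumed)

    for f in (1e9, 2e9, 4e9):             # processor frequency, Hz
        runtime = instructions * cpi / f  # time per cycle = 1/f
        power = C * V ** 2 * f            # P = C * V^2 * F
        print(f"{f/1e9:.0f} GHz: runtime {runtime:.2f} s, power {power:.2f} W")

In practice, higher frequency also demands higher supply voltage, so power grows faster than linearly with frequency; that superlinear growth is part of why frequency scaling ended.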
https://en.wikipedia.org/wiki/Disodium%20pyrophosphate
Disodium pyrophosphate or sodium acid pyrophosphate (SAPP) is an inorganic compound consisting of sodium cations and the pyrophosphate anion. It is a white, water-soluble solid that serves as a buffering and chelating agent, with many applications in the food industry. When crystallized from water, it forms a hexahydrate, but it dehydrates above room temperature. Pyrophosphate is a polyvalent anion with a high affinity for polyvalent cations, e.g. Ca2+. Disodium pyrophosphate is produced by heating sodium dihydrogen phosphate: 2 NaH2PO4 → Na2H2P2O7 + H2O Food uses Disodium pyrophosphate is a popular leavening agent found in baking powders. It combines with sodium bicarbonate to release carbon dioxide: Na2H2P2O7 + NaHCO3 → Na3HP2O7 + CO2 + H2O It is available in a variety of grades that affect the speed of its action. Because the resulting phosphate residue has an off-taste, SAPP is usually used in very sweet cakes, which mask the off-taste. Disodium pyrophosphate and other sodium and potassium polyphosphates are widely used in food processing; in the E number scheme, they are collectively designated as E450, with the disodium form designated as E450(a). In the United States, it is classified as generally recognized as safe (GRAS) for food use. In canned seafood, it is used to maintain color and reduce purge during retorting. Retorting achieves microbial stability with heat. It is an acid source for reaction with baking soda to leaven baked goods. In baking powder, it is often labeled as food additive E450. In cured meats, it speeds the conversion of sodium nitrite to nitric oxide by forming the nitrous acid (HONO) intermediate, and can improve water-holding capacity. Disodium pyrophosphate is also found in frozen hash browns and other potato products, where it is used to keep the color of the potatoes from darkening. Disodium pyrophosphate can leave a slightly bitter aftertaste in some products, but "the SAPP taste can be masked by using su
https://en.wikipedia.org/wiki/The%20COED%20Project
The COED Project, or the COmmunications and EDiting Project, was an innovative software project created by the Computer Division of NOAA, US Department of Commerce in Boulder, Colorado in the 1970s. This project was designed, purchased and implemented by the in-house computing staff rather than any official organization. Intent The computer division previously had a history of frequently replacing its mainframe computers: first a CDC 1604, then a CDC 3600, a couple of CDC 3800s, and finally a CDC 6600. The department also had an XDS 940 timesharing system which would support up to 32 users on dial-up modems. Due to rapidly changing requirements for computer resources, it was expected that new systems would be installed on a regular basis, and the resultant strain on the users to adapt to each new system was perceived to be excessive. The COED project was the result of a study group convened to solve this problem. The project was implemented by the computer specialists who were also responsible for the purchase, installation, and maintenance of all the computers in the division. COED was designed and implemented in long hours of overtime. The data communications aspect of the system was fully implemented and resulted in greatly improved access to the XDS 940 and CDC 6600 systems. It was also used as the front end of the Free University of Amsterdam's SARA system for many years. Design A complete networked system was a pair of Modcomp machines: a Modcomp II handled up to 256 communication ports, and a Modcomp IV handled the disks and file editing. The system was designed to be fully redundant: if one pair failed, the other automatically took over. All computer systems in the network were kept time-synchronized so that all file dates/times would be accurate - synchronized to the National Bureau of Standards atomic clock, housed in the same building. Another innovation was asynchronous dynamic speed recognition. After a terminal connected to a port, the user would type a Carriage Return.
https://en.wikipedia.org/wiki/Olla
An olla is a ceramic jar, often unglazed, used for cooking stews or soups, for the storage of water or dry foods, or for other purposes like the irrigation of olive trees. Ollas have short wide necks and wider bellies, resembling beanpots or handis. History Antiquity The Latin word olla or aulla (also aula) meant a very similar type of pot in Ancient Roman pottery, used for cooking and storage as well as a funerary urn to hold the ashes from cremation of bodies. Later, in Celtic Gaul, the olla became a symbol of the god Sucellus, who reigned over agriculture. Spain In Spain, the popular dish olla podrida (literally “rotten pot”), cooked in an olla, dates back to the Middle Ages. Catalonia In certain areas of the Pyrenees in Catalonia a type of olla, known locally as tupí, is used as a container for the preparation of tupí, a type of cheese. American Southwest Spanish settlers may have introduced the olla to Native American tribes, who reproduced it for sale to colonists, though the tribes already had their own traditional pots attributed to their respective tribes. Catawba potters, native to the southeast, used unglazed pottery. Among Southwestern Native American tribes, ollas used for storing water often were made with narrow necks to prevent evaporation in the desert heat. The olla is used by the Kwaaymii people, among many others, for cooking, storing water, serving meals and even nursing infants. The term olla is also applied to regional basketry shaped with bulbous bodies and narrow necks. Olla baskets are commonly used by the Western Apache, Shoshone, and Yavapai. Use in irrigation Because water seeps through the walls of an unglazed olla under soil-moisture tension, ollas can be used to irrigate plants. The olla is buried in the ground, with the neck of the olla extending above the soil. The olla is filled with water, and plants such as tomatoes, melons, corn, beans, carrots, etc., are planted around the olla. Or, an olla can be put near a new sapl
https://en.wikipedia.org/wiki/Nexus%20%28standard%29
Nexus or IEEE-ISTO 5001-2003 is a standard debugging interface for embedded systems. Features The IEEE-ISTO 5001-2003 (Nexus) feature set is modeled on today's on-chip debug implementations, most of which are processor-specific. Its goal is to create a rich debug feature set while minimizing the required pin-count and die area, and being both processor- and architecture-independent. It also supports multi-core and multi-processor designs. Accordingly, it is comparable to the ARM CoreSight debug architecture. Physically, IEEE-ISTO 5001-2003 defines a standard set of connectors for connecting the debug tool to the target or system under test. Logically, data is transferred using a packet-based protocol. This protocol can be JTAG (IEEE 1149.1); or, for high-speed systems, an auxiliary port can be used that supports full-duplex, higher-bandwidth transfers. Key Nexus functionality involves either JTAG-style request/response interactions or packets transferred through the debug port, and includes: Run-time control ... With all implementations, debug tools can start and stop the processor, modify registers, and single-step machine instructions. Memory access ... Nexus supports memory access while the processor is running. Such access is required when debugging systems where it is not possible to halt the system under test. Examples include engine control, where stopping digital feedback loops can create physically dangerous situations. Breakpoints ... Programs halt when a specified event, a breakpoint, has occurred. The event can be specified as a code execution address, or as a data access (read or write) to an address with a specified value. Nexus breakpoints can be set at any address, including flash or ROM memory; CPUs may also provide special breakpoint instructions. Several kinds of event tracing are defined, mostly depending on a high-speed auxiliary port to offload the voluminous data without negatively impacting program execution: Program trace ...
https://en.wikipedia.org/wiki/Sporocarp%20%28fungus%29
The sporocarp (also known as fruiting body, fruit body or fruitbody) of fungi is a multicellular structure on which spore-producing structures, such as basidia or asci, are borne. The fruitbody is part of the sexual phase of a fungal life cycle, while the rest of the life cycle is characterized by vegetative mycelial growth and asexual spore production. The sporocarp of a basidiomycete is known as a basidiocarp or basidiome, while the fruitbody of an ascomycete is known as an ascocarp. Many shapes and morphologies are found in both basidiocarps and ascocarps; these features play an important role in the identification and taxonomy of fungi. Fruitbodies are termed epigeous if they grow on the ground, while those that grow underground are hypogeous. Epigeous sporocarps that are visible to the naked eye, especially fruitbodies of a more or less agaricoid morphology, are often called mushrooms. Epigeous sporocarps have mycelia that extend underground far beyond the mother sporocarp, so the underground distribution of the mycelium is much wider than that of the sporocarps above ground. Hypogeous fungi are usually called truffles or false truffles. There is evidence that hypogeous fungi evolved from epigeous fungi. During their evolution, truffles lost the ability to disperse their spores by air currents, and propagate instead by animal consumption and subsequent defecation. In amateur mushroom hunting, and to a large degree in academic mycology as well, identification of higher fungi is based on the features of the sporocarp. The largest known fruitbody is a specimen of Phellinus ellipsoideus (formerly Fomitiporia ellipsoidea) found on Hainan Island, part of China. It measures up to 10.85 m in length and is estimated to weigh between 400 and 500 kg. Ecology A wide variety of animals feed on epigeous and hypogeous fungi. The mammals that feed on fungi are as diverse as fungi themselves and are called mycophages. Squirrels and chipmunks eat the greatest variety of fungi, but there are many other mammals that
https://en.wikipedia.org/wiki/Sporocarp%20%28ferns%29
A sporocarp is a specialised type of structure in the aquatic ferns of the order Salviniales whose primary function is the production and release of spores. Sporocarps are found only in the Salviniales, a group that is aquatic and heterosporous, but the structures are very different in the two families of the order. In the Salviniaceae family, the sporocarp is nothing more than a modified sorus, a single cluster of spore-producing tissues enclosed by a thin sphere of tissue and attached to the leaves. In the Marsileaceae (water-clover) family, the sporocarp is a more elaborate structure formed from an entire leaf whose development and form is greatly modified. These are hairy, short-stalked, bean-shaped structures (usually 3 to 8 mm in diameter) with a hardened outer covering. This outer covering is tough and resistant to drying out, allowing the spores inside to survive unfavorable conditions such as winter frost or summer desiccation. Despite this toughness, the sporocarps will open readily in water if conditions are favorable, and specimens have been successfully germinated after being stored for more than forty years. Each growing season, only one sporocarp develops per node along the rhizome near the base of the other leaf-stalks. The sporocarps are functionally and developmentally modified leaves, although they have much shorter stalks than the vegetative leaves. Inside the sporocarp, the modified leaflets bear several sori, each of which consists of several sporangia covered by a thin hood of tissue (the indusium). Each sorus includes a mix of two types of sporangium, each type producing only one of two kinds of spores. Toward the center of each sorus and developing first are the megasporangia, each of which will produce a single large female megaspore. Surrounding them at the edge of the sorus and developing later are the microsporangia, each of which will produce many small male microspores.
https://en.wikipedia.org/wiki/Bogolyubov%20Prize%20for%20young%20scientists
The Bogoliubov Prize for young scientists is an award offered to young researchers in theoretical physics by the Joint Institute for Nuclear Research (JINR), an international intergovernmental organization located in Dubna, Russia. The award is issued in memory of the physicist and mathematician Nikolay Bogoliubov. The prize is awarded to young (up to 33-year-old) researchers for "outstanding contributions in fields of theoretical physics related to Bogoliubov's scientific interests". The awardee is one who has demonstrated "early scientific maturity" and whose results are recognized worldwide and peer-reviewed. The laureates generally emulate Bogoliubov's own skill in using sophisticated mathematics to attempt to solve concrete physical problems (mostly in the fields of nonlinear dynamics, statistical physics, quantum field theory and elementary particle physics). Jury The jury is presided over by the theoretical physicist Dmitry Shirkov, who co-authored many works with Nikolay Bogoliubov. Laureates 1999 Oleg Shvedov (Moscow State University, Russia): for a series of works on asymptotical methods in statistical physics and quantum field theory. 2001 Evgenii Ivashkevich (JINR, Russia): for a series of works on analytical methods in non-equilibrium statistical mechanics. 2005 Aurélien Barrau (the Laboratory of sub-atomic physics and cosmology and Joseph Fourier University, Grenoble, France): for a series of works on astrophysics and cosmology. See also List of physics awards
https://en.wikipedia.org/wiki/Charles%20Otis%20Whitman
Charles Otis Whitman (December 6, 1842 – December 14, 1910) was an American zoologist, who was influential to the founding of classical ethology (the study of animal behavior). A dedicated educator who preferred to teach a few research students at a time, he made major contributions in the areas of evolution and embryology of worms, comparative anatomy, heredity, and animal behaviour. He was known as the "Father of Zoology" in Japan. Biography Whitman was born in Woodstock, Maine. His parents were Adventist pacifists and prevented his efforts to enlist in the Union army in 1862. He worked as a part-time teacher and converted to Unitarianism. He graduated from Bowdoin College in 1868. Following graduation, Whitman became principal of the Westford Academy, a small Unitarian-oriented college preparatory school outside Lowell, Massachusetts. In 1872 he moved to Boston and, after becoming a member of the Boston Society of Natural History in 1874, he decided to study zoology full-time. In 1875, he took a leave of absence and went to the University of Leipzig in Germany to complete a Ph.D., which he obtained in 1878. A year later he received a postdoctoral fellowship at the Johns Hopkins University, but immediately gave it up when, after being recommended by the noted biologist Edward Sylvester Morse, he was hired by the Japanese government to succeed Morse as professor at the Tokyo Imperial University from 1879 to 1881. Influenced by his training in Germany, he introduced systematic methods of biological research, including the use of the microscope. After leaving Japan, Whitman performed research at the Naples Zoological Station (1882), became an assistant at the Museum of Comparative Zoology, Harvard University (1883–5), then directed the Allis Lake Laboratory, in Milwaukee (1886–9), where he founded the Journal of Morphology (1887). In 1884, Whitman married Emily Nunn. He moved to Clark University (Worcester, Massachusetts) (1889–92), then became a professor and curator of
https://en.wikipedia.org/wiki/Nifurtimox
Nifurtimox, sold under the brand name Lampit, is a medication used to treat Chagas disease and sleeping sickness. For sleeping sickness it is used together with eflornithine in nifurtimox-eflornithine combination treatment. In Chagas disease it is a second-line option to benznidazole. It is given by mouth. Common side effects include abdominal pain, headache, nausea, and weight loss. There are concerns from animal studies that it may increase the risk of cancer, but these concerns have not been found in human trials. Nifurtimox is not recommended in pregnancy or in those with significant kidney or liver problems. It is a type of nitrofuran. Nifurtimox came into medical use in 1965. It is on the World Health Organization's List of Essential Medicines. It is not available commercially in Canada. It was approved for medical use in the United States in August 2020. In regions of the world where the disease is common, nifurtimox is provided for free by the World Health Organization (WHO). Medical uses Nifurtimox has been used to treat Chagas disease, when it is given for 30 to 60 days. However, long-term use of nifurtimox increases the chances of adverse events such as gastrointestinal and neurological side effects. Due to the low tolerance and completion rate of nifurtimox, benznidazole is now more often considered for those who have Chagas disease and require long-term treatment. In the United States nifurtimox is indicated in children and adolescents (birth to less than 18 years of age and weighing at least 2.5 kg) for the treatment of Chagas disease (American Trypanosomiasis), caused by Trypanosoma cruzi. Nifurtimox has also been used to treat African trypanosomiasis (sleeping sickness), and is active in the second stage of the disease (central nervous system involvement). When nifurtimox is given on its own, about half of all patients will relapse, but the combination of melarsoprol with nifurtimox appears to be efficacious. Trials are awaited comparing melarsoprol/nifu
https://en.wikipedia.org/wiki/Bethe%20formula
The Bethe formula or Bethe–Bloch formula describes the mean energy loss per distance travelled of swift charged particles (protons, alpha particles, atomic ions) traversing matter (or alternatively the stopping power of the material). For electrons the energy loss is slightly different due to their small mass (requiring relativistic corrections) and their indistinguishability, and since they suffer much larger losses by Bremsstrahlung, terms must be added to account for this. Fast charged particles moving through matter interact with the electrons of atoms in the material. The interaction excites or ionizes the atoms, leading to an energy loss of the traveling particle. The non-relativistic version was found by Hans Bethe in 1930; the relativistic version (shown below) was found by him in 1932. The most probable energy loss differs from the mean energy loss and is described by the Landau-Vavilov distribution. The formula For a particle with speed v, charge z (in multiples of the electron charge), and energy E, traveling a distance x into a target of electron number density n and mean excitation energy I (see below), the relativistic version of the formula reads, in SI units: $-\frac{dE}{dx} = \frac{4\pi}{m_e c^2}\cdot\frac{n z^2}{\beta^2}\cdot\left(\frac{e^2}{4\pi\varepsilon_0}\right)^{2}\cdot\left[\ln\!\left(\frac{2 m_e c^2 \beta^2}{I\,(1-\beta^2)}\right)-\beta^2\right]$, where c is the speed of light and ε0 the vacuum permittivity, β = v/c, e and me the electron charge and rest mass respectively. Here, the electron density of the material can be calculated by $n = \frac{N_A\, Z\, \rho}{A\, M_u}$, where ρ is the density of the material, Z its atomic number, A its relative atomic mass, NA the Avogadro number and Mu the Molar mass constant. In the figure to the right, the small circles are experimental results obtained from measurements of various authors, while the red curve is Bethe's formula. Evidently, Bethe's theory agrees very well with experiment at high energy. The agreement is even better when corrections are applied (see below). For low energies, i.e., for small velocities of the particle β << 1, the Bethe formula reduces to $-\frac{dE}{dx} = \frac{4\pi n z^2}{m_e v^2}\left(\frac{e^2}{4\pi\varepsilon_0}\right)^{2}\ln\!\left(\frac{2 m_e v^2}{I}\right)$. This can be seen by first replacing βc by v in eq. (1) and then ne
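As a concrete check, the formula can be evaluated directly; in the Python sketch below the physical constants are standard SI values, while the target parameters (aluminium, with I ≈ 166 eV) and the choice of a proton at β = 0.5 are illustrative assumptions:

    import math

    e = 1.602176634e-19      # elementary charge, C
    me = 9.1093837015e-31    # electron rest mass, kg
    c = 2.99792458e8         # speed of light, m/s
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    NA = 6.02214076e23       # Avogadro number, 1/mol
    Mu = 1e-3                # molar mass constant, kg/mol

    def bethe_dEdx(beta, z, n, I):
        """Mean energy loss -dE/dx in J/m (relativistic Bethe formula)."""
        k = 4 * math.pi / (me * c ** 2) * n * z ** 2 / beta ** 2
        k *= (e ** 2 / (4 * math.pi * eps0)) ** 2
        log_term = math.log(2 * me * c ** 2 * beta ** 2 / (I * (1 - beta ** 2)))
        return k * (log_term - beta ** 2)

    # Proton (z = 1) in aluminium: Z = 13, A = 26.98, rho = 2700 kg/m^3.
    n = NA * 13 * 2700.0 / (26.98 * Mu)  # electron number density, 1/m^3
    I = 166 * e                          # mean excitation energy, ~166 eV
    dEdx = bethe_dEdx(beta=0.5, z=1, n=n, I=I)
    print(dEdx / e / 1e8, "MeV/cm")      # J/m -> MeV/cm; roughly 12 MeV/cm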
https://en.wikipedia.org/wiki/Hamburger%E2%80%93Hamilton%20stages
In developmental biology, the Hamburger–Hamilton stages (HH) are a series of 46 chronological stages in chick development, starting from laying of the egg and ending with a newly hatched chick. It is named for its creators, Viktor Hamburger and Howard L. Hamilton. Chicken embryos are a useful model organism in experimental embryology for a number of reasons. Their domestication as poultry makes them more readily available than other vertebrates (such as mice), and being oviparous, the embryos are easily accessible. However, the rate of development can be affected by a range of factors, including the specific breed, the temperature of incubation, the delay between laying and incubation, and the time of year, raising the need to create a standardised system based on morphology rather than chronological age. There had been a previous attempt to create a morphological system for staging chick development by the German embryologists Keibel and Abraham in 1900, but this system lacked detail and was not widely used, with most researchers relying on somite number or age to identify the stage of development. Hamburger and Hamilton aimed to provide a detailed description of developmental events, modeled on an earlier system for the axolotl by Harrison. The Hamburger–Hamilton system provides advantages over the Carnegie system in that it allows the developing chick to be accurately characterized during all embryonic stages, and is used universally in chick embryology. Stages of development Chick embryos can be "staged" according to the different morphological landmarks. Although most organ systems have a stereotypical appearance at each stage, there are a few which particularly lend themselves to use in staging chick development. In the very early embryo, the primitive streak is the only visible landmark, and its shape and size are used to stage HH1-6 embryos. The nervous system is formed by a process of neurulation. Stages 5–8 may be defined by the formation of a head fold.
https://en.wikipedia.org/wiki/Gold%20Standard%20%28carbon%20offset%20standard%29
The Gold Standard (GS), or Gold Standard for the Global Goals, is a standard and logo certification mark program for non-governmental emission reductions projects in the Clean Development Mechanism (CDM), the Voluntary Carbon Market and other climate and development interventions. It is published and administered by the Gold Standard Foundation, a non-profit foundation headquartered in Geneva, Switzerland. It was designed to ensure that carbon credits are real and verifiable, and that projects make measurable contributions to sustainable development. The objective of the GS is to add branding, with a quality label, to carbon credits generated by projects, which can then be bought and traded by countries that have a binding legal commitment according to the Kyoto Protocol, businesses, or other organizations for carbon offsetting purposes. History The Gold Standard for CDM (GS-CER) was developed in 2003 by World Wide Fund for Nature (WWF), South-North, and Helio International. The Voluntary Gold Standard (GS-VER), a standard for use within the voluntary carbon market, was launched in May 2006. The programs were created following a 12-month consultation period that included workshops and web-based consultation conducted by an independent standard advisory board composed of non-governmental organizations (NGOs), scientists, project developers and government representatives. As of October 2018, more than 80 non-profit organizations internationally had officially endorsed the Gold Standard program. The program is administered by the Gold Standard Foundation, a non-profit foundation under Swiss law that is headquartered in Geneva, Switzerland. It also employs local experts in Brazil, India, and South Africa. In July 2008, the Gold Standard Version 2.0 was released, including sets of guidelines and manuals on the GS requirements, toolkits, and other supporting documents to be used by project developers and designated operational entities (DOEs). This relegated the previously applicable man
https://en.wikipedia.org/wiki/Capsule%20of%20hip%20joint
The capsule of the hip joint (articular capsule, capsular ligament) is strong and dense. Anterosuperiorly, it is attached to the margin of the acetabulum 5 to 6 mm. beyond the labrum behind; but in front, it is attached to the outer margin of the labrum, and, opposite to the notch where the margin of the cavity is deficient, it is connected to the transverse ligament, and by a few fibers to the edge of the obturator foramen. It surrounds the neck of the femur, and is attached, in front, to the intertrochanteric line; above, to the base of the neck; behind, to the neck, about 1.25 cm. above the intertrochanteric crest; below, to the lower part of the neck, close to the lesser trochanter. From its femoral attachment some of the fibers are reflected upward along the neck as longitudinal bands, termed retinacula. The capsule is much thicker at the upper and forepart of the joint, where the most resistance is required; behind and below, it is thin and loose. It consists of two sets of fibers, circular and longitudinal. The circular fibers, zona orbicularis, are most abundant at the lower and back part of the capsule, and form a sling or collar around the neck of the femur. Anteriorly they blend with the deep surface of the iliofemoral ligament, and gain an attachment to the anterior inferior iliac spine. The longitudinal fibers are greatest in amount at the upper and front part of the capsule, where they are reinforced by distinct bands, or accessory ligaments, of which the most important is the iliofemoral ligament. The other accessory bands are known as the pubofemoral ligament and the ischiofemoral ligament. The external surface of the capsule is rough, covered by numerous muscles, and separated in front from the psoas major and iliacus by the iliopectineal bursa, which not infrequently communicates by a circular aperture with the cavity of the joint. Pathologies Hip Capsule Contracture This pathology is similar to the frozen shoulder.
https://en.wikipedia.org/wiki/Acousto-optics
Acousto-optics is a branch of physics that studies the interactions between sound waves and light waves, especially the diffraction of laser light by ultrasound (or sound in general) through an ultrasonic grating. Introduction Optics has had a very long and full history, from ancient Greece, through the Renaissance and modern times. As with optics, acoustics has a history of similar duration, again starting with the ancient Greeks. In contrast, the acousto-optic effect has had a relatively short history, beginning with Brillouin predicting the diffraction of light by an acoustic wave propagating in a medium of interaction, in 1922. This was then confirmed with experimentation in 1932 by Debye and Sears, and also by Lucas and Biquard. The particular case of diffraction on the first order, under a certain angle of incidence (also predicted by Brillouin), was observed by Rytow in 1935. Raman and Nath (1937) designed a general ideal model of interaction taking into account several orders. This model was developed by Phariseau (1956) for diffraction including only one diffraction order. In general, acousto-optic effects are based on the change of the refractive index of a medium due to the presence of sound waves in that medium. Sound waves produce a refractive index grating in the material, and it is this grating that is "seen" by the light wave. These variations in the refractive index, due to the pressure fluctuations, may be detected optically by refraction, diffraction, and interference effects; reflection may also be used. The acousto-optic effect is extensively used in the measurement and study of ultrasonic waves. However, the principal area of growing interest is in acousto-optical devices for the deflection, modulation, signal processing and frequency shifting of light beams. This is due to the increasing availability and performance of lasers, which have made the acousto-optic effect easier to observe and measure. Technical progress in
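For reference, the quantitative heart of the grating picture just described is the Bragg condition (a standard acousto-optics relation, supplied here as context rather than taken from the excerpt), relating the optical wavelength λ in the medium to the acoustic wavelength Λ = v_s/f_s set by the sound speed v_s and sound frequency f_s:

    \sin\theta_B = \frac{\lambda}{2\Lambda} = \frac{\lambda f_s}{2 v_s}

Light incident at the Bragg angle θ_B is diffracted into a single order whose optical frequency is shifted by f_s, which is the basis of the modulators and frequency shifters mentioned above.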
https://en.wikipedia.org/wiki/Coffee-Mate
Coffee-mate is a lactose-free coffee creamer manufactured by Nestlé, available in powdered, liquid and concentrated liquid forms. It was introduced in 1961 by Carnation. Ingredients Coffee-mate Original is mostly made up of three ingredients: corn syrup solids, hydrogenated vegetable oil, and sodium caseinate. Sodium caseinate, a form of casein, is a milk derivative; however, this is a required ingredient in non-dairy creamers, which are considered non-dairy due to the lack of lactose. Coffee-mate Original also contains small amounts of dipotassium phosphate, to prevent coagulation; mono- and diglycerides, used as an emulsifier; sodium aluminosilicate, an anticaking agent; artificial flavor; and annatto color. Varieties The original product was introduced in February 1961, followed by Coffee-mate Lite and Coffee-mate Liquid in 1989. In the U.S., where the product is manufactured by Nestlé in Glendale, California, the product is available in liquid, liquid concentrate and powdered forms. American Coffee-mate comes in over 25 different flavors, including gingerbread, Parisian almond crème, and peppermint mocha. Discontinued varieties include Coffee-mate Soy and Coffee-mate Half & Half. In Europe, it is only available in powder form as a coffee creamer in one or two varieties depending on the country, with no added flavors. The European version of Coffee-mate is manufactured without the use of hydrogenated fat, which is linked to heart disease. Tea-mate A Tea-mate powdered variety for whitening tea was also introduced in the UK in a jar, as well as in other countries in sachets or cartons. In the UK, the variety was subsequently discontinued owing to poor sales performance. In other locations, such as in India, the product remains available. Other uses Coffee-Mate was used on the television horror-anthology series Are You Afraid of the Dark? as the "midnight dust" that characters threw into a campfire to make an explosive puff of smoke and sparks as they introduced th
https://en.wikipedia.org/wiki/Colm%20McFadden
Colm Anthony McFadden (born 1983) is an Irish Gaelic footballer who plays at full forward for St Michael's and, from 2002 to 2016, for the Donegal county team. McFadden is Donegal's most-capped Championship player. He played an integral role in Donegal's successful 2011–14 run of matches, starting every Championship game in that period. Among other accolades, he has one All Star to his name (2012), one All-Ireland Senior Football Championship (2012), three Ulster Senior Football Championships (2011, 2012 and 2014) and one National Football League (2007). Top scorer in the 2012 All-Ireland Senior Football Championship, he was subsequently shortlisted for All Stars Footballer of the Year, but the award went to team-mate Karl Lacey. McFadden's haul of Ulster Senior Football Championships was a joint county team record (alongside such past players as Anthony Molloy, Martin McHugh, Joyce McMullan and Donal Reid) for four years until Patrick McBrearty, Neil McGee, Paddy McGrath, Leo McLoone, Frank McGlynn, Michael Murphy and Anthony Thompson surpassed it in 2018. A staff member of St Eunan's College in Letterkenny, McFadden has been deputy principal since 2019. Playing career Club McFadden's club have not had much success at senior level. They reached the final of the 2011 Donegal Senior Football Championship—their first ever senior final—but lost, though McFadden scored three points including one free. Previously, in 2004, they reached the final of the All-Ireland Intermediate Club Football Championship, in which McFadden played but was held scoreless. McFadden's father was a coach at the club. Inter-county Youth McFadden's father and older brother played football and encouraged his own interest. McFadden is left-footed (i.e. ciotóg). He played at older age grades from early on. He was a county player by under-16. He and Christy Toye, who was in his class at primary school and would later play alongside him many times for Donegal, played in (and won) the Ted
https://en.wikipedia.org/wiki/Caccioppoli%20set
In mathematics, a Caccioppoli set is a set whose boundary is measurable and has (at least locally) a finite measure. A synonym is set of (locally) finite perimeter. Basically, a set is a Caccioppoli set if its characteristic function is a function of bounded variation. History The basic concept of a Caccioppoli set was first introduced by the Italian mathematician Renato Caccioppoli: considering a plane set or a surface defined on an open set in the plane, he defined their measure or area as the total variation in the sense of Tonelli of their defining functions, i.e. of their parametric equations, provided this quantity was bounded. The measure of the boundary of a set was defined as a functional, precisely a set function, for the first time: also, being defined on open sets, it can be defined on all Borel sets and its value can be approximated by the values it takes on an increasing net of subsets. Another clearly stated (and demonstrated) property of this functional was its lower semi-continuity. In a subsequent paper, he made this more precise by using a triangular mesh as an increasing net approximating the open domain, defining positive and negative variations whose sum is the total variation, i.e. the area functional. His inspiring point of view, as he explicitly admitted, was that of Giuseppe Peano, as expressed by the Peano-Jordan measure: to associate to every portion of a surface an oriented plane area in a similar way as an approximating chord is associated to a curve. Also, another theme found in this theory was the extension of a functional from a subspace to the whole ambient space: the use of theorems generalizing the Hahn–Banach theorem is frequently encountered in Caccioppoli's research. However, the restricted meaning of total variation in the sense of Tonelli added much complication to the formal development of the theory, and the use of a parametric description of the sets restricted its scope. Lamberto Cesari introduced the "right" generalization
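The modern form of the definition alluded to above can be stated compactly (this is the standard De Giorgi formulation found in the literature, added here for concreteness): a measurable set E has finite perimeter in an open set Ω ⊆ R^n exactly when its characteristic function χ_E has bounded variation there, i.e.

    P(E; \Omega) = \sup \left\{ \int_E \operatorname{div} \varphi \, dx \;:\; \varphi \in C_c^1(\Omega; \mathbb{R}^n),\ \|\varphi\|_\infty \le 1 \right\} < \infty ,

and this supremum, the total variation of χ_E, is the measure of the boundary that Caccioppoli's construction approximates.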
https://en.wikipedia.org/wiki/Muscular%20branches%20of%20ulnar%20nerve
The muscular branches of ulnar nerve are a variety of branches of the ulnar nerve. One supplies the flexor carpi ulnaris muscle (a superficial muscle of the anterior compartment of the forearm), while the other supplies the ulnar half of the flexor digitorum profundus muscle (a deep muscle of the anterior compartment of the forearm). Structure The muscular branches of the ulnar nerve arise at the elbow. Multiple branches may supply each muscle. Variation The branching pattern of the muscular branches of the ulnar nerve varies significantly. Function The muscular branches of the ulnar nerve supply two flexor muscles of the anterior compartment of the forearm: flexor carpi ulnaris muscle, a superficial muscle. the ulnar / medial half of the flexor digitorum profundus muscle, a deep muscle.
https://en.wikipedia.org/wiki/Deep%20branch%20of%20ulnar%20nerve
The deep branch of the ulnar nerve is a terminal, primarily motor branch of the ulnar nerve. It is accompanied by the deep palmar branch of ulnar artery. Structure It passes between the abductor digiti minimi and the flexor digiti minimi brevis. It then perforates the opponens digiti minimi and follows the course of the deep palmar arch beneath the flexor tendons. As the deep ulnar nerve passes across the palm, it lies in a fibrous tunnel formed between the hook of the hamate and the pisiform (Guyon's canal). Function At its origin it innervates the hypothenar muscles. As it crosses the deep part of the hand, it innervates all the interosseous muscles and the third and fourth lumbricals. It ends by innervating the adductor pollicis and the medial (deep) head of the flexor pollicis brevis. It also sends articular filaments to the wrist-joint (following Hilton's law).
https://en.wikipedia.org/wiki/Superficial%20branch%20of%20ulnar%20nerve
The superficial branch of the ulnar nerve is a terminal branch of the ulnar nerve. It supplies the palmaris brevis and the skin on the ulnar side of the hand. It also divides into a common palmar digital nerve and a proper palmar digital nerve. The proper digital branches are distributed to the fingers in the same manner as those of the median nerve.
https://en.wikipedia.org/wiki/CN2%20algorithm
The CN2 induction algorithm is a learning algorithm for rule induction. It is designed to work even when the training data is imperfect. It is based on ideas from the AQ algorithm and the ID3 algorithm. As a consequence it creates a rule set like that created by AQ but is able to handle noisy data like ID3. Description of algorithm The algorithm must be given a set of examples, TrainingSet, which have already been classified in order to generate a list of classification rules. A set of conditions, SimpleConditionSet, which can be applied, alone or in combination, to any set of examples is predefined to be used for the classification.

routine CN2(TrainingSet)
    let the ClassificationRuleList be empty
    repeat
        let the BestConditionExpression be Find_BestConditionExpression(TrainingSet)
        if the BestConditionExpression is not nil then
            let the TrainingSubset be the examples covered by the BestConditionExpression
            remove from the TrainingSet the examples in the TrainingSubset
            let the MostCommonClass be the most common class of examples in the TrainingSubset
            append to the ClassificationRuleList the rule
                'if ' the BestConditionExpression ' then the class is ' the MostCommonClass
    until the TrainingSet is empty or the BestConditionExpression is nil
    return the ClassificationRuleList

routine Find_BestConditionExpression(TrainingSet)
    let the ConditionalExpressionSet be empty
    let the BestConditionExpression be nil
    repeat
        let the TrialConditionalExpressionSet be the set of conditional expressions
            {x and y where x belongs to the ConditionalExpressionSet
                and y belongs to the SimpleConditionSet}
        remove all formulae in the TrialConditionalExpressionSet that are either
            in the ConditionalExpressionSet (i.e., the unspecialized ones)
            or null (e.g., big = y and big = n)
        for every expression, F, in the TrialConditionalExpressionSet
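For readers who prefer running code, here is a compact Python sketch of the same covering loop (a simplification: it scores candidate rules by the entropy of the examples they cover and keeps a small beam, whereas the published algorithm also applies a statistical significance test; all names and the (attribute, value) data format are illustrative):

    from collections import Counter
    from math import log2

    def entropy(examples):
        """Class-distribution entropy of a list of (features, label) pairs."""
        counts = Counter(label for _, label in examples)
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())

    def covers(rule, features):
        """A rule is a frozenset of (attribute, value) tests; all must hold."""
        return all(features.get(a) == v for a, v in rule)

    def find_best_rule(examples, conditions, beam_width=5):
        beam, best_score, best = [frozenset()], float("inf"), None
        while beam:
            # Specialize every rule in the beam with every simple condition.
            trials = {r | {c} for r in beam for c in conditions if c not in r}
            scored = [(entropy(cov), r) for r in trials
                      if (cov := [e for e in examples if covers(r, e[0])])]
            scored.sort(key=lambda t: t[0])
            if scored and scored[0][0] < best_score:
                best_score, best = scored[0]
            # Keep refining only rules that still cover a mixed set.
            beam = [r for s, r in scored[:beam_width] if s > 0.0]
        return best

    def cn2(examples, conditions):
        rules, remaining = [], list(examples)
        while remaining:
            rule = find_best_rule(remaining, conditions)
            if rule is None:
                break
            covered = [e for e in remaining if covers(rule, e[0])]
            majority = Counter(lbl for _, lbl in covered).most_common(1)[0][0]
            rules.append((rule, majority))
            remaining = [e for e in remaining if not covers(rule, e[0])]
        return rules

    data = [({"size": "big", "color": "red"}, "yes"),
            ({"size": "small", "color": "red"}, "no"),
            ({"size": "big", "color": "blue"}, "yes")]
    conds = [(a, v) for a in ("size", "color")
             for v in ("big", "small", "red", "blue")]
    print(cn2(data, conds))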
https://en.wikipedia.org/wiki/Occurrences%20of%20Grandi%27s%20series
This article lists occurrences of the paradoxical infinite "sum" 1 − 1 + 1 − 1 + · · ·, sometimes called Grandi's series. Parables Guido Grandi illustrated the series with a parable involving two brothers who share a gem. Thomson's lamp is a supertask in which a hypothetical lamp is turned on and off infinitely many times in a finite time span. One can think of turning the lamp on as adding 1 to its state, and turning it off as subtracting 1. Instead of asking the sum of the series, one asks the final state of the lamp. One of the best-known classic parables to which infinite series have been applied, Achilles and the tortoise, can also be adapted to the case of Grandi's series. Numerical series The Cauchy product of Grandi's series with itself is 1 − 2 + 3 − 4 + · · ·. Several series resulting from the introduction of zeros into Grandi's series have interesting properties; for these see Summation of Grandi's series#Dilution. Grandi's series is just one example of a divergent geometric series. The rearranged series 1 − 1 − 1 + 1 + 1 − 1 − 1 + · · · occurs in Euler's 1775 treatment of the pentagonal number theorem as the value of the Euler function at q = 1. Power series The power series most famously associated with Grandi's series is its ordinary generating function, 1/(1 + x) = 1 − x + x² − x³ + · · ·. Fourier series Hyperbolic sine In his 1822 Théorie Analytique de la Chaleur, Joseph Fourier obtains what is currently called a Fourier sine series for a scaled version of the hyperbolic sine function, f(x) = (π/2) sinh(x)/sinh(π). He finds that the general coefficient of sin nx in the series is (−1)^(n+1) (1/n − 1/n³ + 1/n⁵ − · · ·). For n > 1 the above series converges, to (−1)^(n+1) n/(n² + 1), while the coefficient of sin x appears as 1 − 1 + 1 − 1 + · · · and so is expected to be 1⁄2. In fact, this is correct, as can be demonstrated by directly calculating the Fourier coefficient from an integral: b₁ = (2/π) ∫₀^π f(x) sin x dx = 1⁄2. Dirac comb Grandi's series occurs more directly in another important series, cos x + cos 2x + cos 3x + · · ·. At x = π, the series reduces to −1 + 1 − 1 + 1 − · · · and so one might expect it to meaningfully equal −1⁄2. In
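The integral check is easy to reproduce numerically (the scaled function f used here follows the reconstruction above and should be treated as an assumption; numpy and scipy are required):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.pi * np.sinh(x) / (2 * np.sinh(np.pi))

    def b(n):
        """n-th Fourier sine coefficient of f on (0, pi)."""
        val, _ = quad(lambda x: f(x) * np.sin(n * x), 0, np.pi)
        return 2 / np.pi * val

    print(b(1))          # ~0.5, the value assigned to 1 - 1 + 1 - 1 + ...
    print(b(2), -2 / 5)  # agrees with (-1)^(n+1) n/(n^2 + 1) at n = 2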
https://en.wikipedia.org/wiki/Nordic%20Institute%20for%20Theoretical%20Physics
The Nordic Institute for Theoretical Physics, or NORDITA, or Nordita (), is an international organisation for research in theoretical physics. It was established as Nordisk Institut for Teoretisk Atomfysik in 1957 by Niels Bohr and the Swedish physicist Torsten Gustafson. Nordita was originally located at the Niels Bohr Institute in Copenhagen (Denmark), but moved to the AlbaNova University Centre in Stockholm (Sweden) on 1 January 2007. The main research areas at Nordita are astrophysics, hard and soft condensed matter physics, and high-energy physics. Research Since Nordita's establishment in 1957 the original focus on research in atomic and nuclear physics has been broadened. Research carried out by Nordita's academic staff presently includes astrophysics, biological physics, hard condensed matter physics and materials physics, soft condensed matter physics, cosmology, statistical physics and complex systems, high-energy physics, and gravitational physics and cosmology. The in-house research forms the backbone of Nordita activities and complements the more service-oriented functions. By mission, Nordita has the task of facilitating interactions between physicists in the Nordic countries as well as with the international community; therefore the comparably small institute has a large number of visitors, conferences and scientific programs that last several weeks. Notable former or present researchers at Nordita include Alexander V. Balatsky, Holger Bech Nielsen, Axel Brandenburg, Gerald E. Brown, Paolo Di Vecchia, James Hamilton, John Hertz, Sabine Hossenfelder, Alan Luther, Ben Roy Mottelson, Christopher J. Pethick, Leon Rosenfeld, Kim Sneppen, John Wettlaufer, and Konstantin Zarembo. Organization Nordita is governed by a board consisting of one representative and one alternate member from each Nordic country, headed by a chairperson. The board appoints a number of research committees which evaluate proposals and advise the board on scientific and educ
https://en.wikipedia.org/wiki/Summation%20of%20Grandi%27s%20series
General considerations Stability and linearity The formal manipulations that lead to 1 − 1 + 1 − 1 + · · · being assigned a value of 1⁄2 include: Adding or subtracting two series term-by-term, Multiplying through by a scalar term-by-term, "Shifting" the series with no change in the sum, and Increasing the sum by adding a new term to the series' head. These are all legal manipulations for sums of convergent series, but 1 − 1 + 1 − 1 + · · · is not a convergent series. Nonetheless, there are many summation methods that respect these manipulations and that do assign a "sum" to Grandi's series. Two of the simplest methods are Cesàro summation and Abel summation. Cesàro sum The first rigorous method for summing divergent series was published by Ernesto Cesàro in 1890. The basic idea is similar to Leibniz's probabilistic approach: essentially, the Cesàro sum of a series is the average of all of its partial sums. Formally one computes, for each n, the average σn of the first n partial sums, and takes the limit of these Cesàro means as n goes to infinity. For Grandi's series, the sequence of arithmetic means is 1, 1⁄2, 2⁄3, 2⁄4, 3⁄5, 3⁄6, 4⁄7, 4⁄8, … or, more suggestively, (1⁄2+1⁄2), 1⁄2, (1⁄2+1⁄6), 1⁄2, (1⁄2+1⁄10), 1⁄2, (1⁄2+1⁄14), 1⁄2, … where σn = 1⁄2 for even n and σn = 1⁄2 + 1⁄(2n) for odd n. This sequence of arithmetic means converges to 1⁄2, so the Cesàro sum of Σak is 1⁄2. Equivalently, one says that the Cesàro limit of the sequence 1, 0, 1, 0, … is 1⁄2. The Cesàro sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3. So the Cesàro sum of a series can be altered by inserting infinitely many 0s as well as infinitely many brackets. The series can also be summed by the more general fractional (C, a) methods. Abel sum Abel summation is similar to Euler's attempted definition of sums of divergent series, but it avoids Callet's and N. Bernoulli's objections by precisely constructing the function to use. In fact, Euler likely meant to limit his definition to power series, and in practice he used i
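Both methods are easy to see numerically; the short Python sketch below computes the Cesàro mean of the partial sums and an Abel-style limit of the generating function (plain floating point, purely illustrative):

    N = 10000
    partial, total = [], 0
    for k in range(N):
        total += (-1) ** k       # partial sums 1, 0, 1, 0, ...
        partial.append(total)

    # Cesaro mean: average of the first N partial sums -> 1/2.
    print(sum(partial) / N)

    # Abel sum: sum of (-1)^k x^k = 1/(1 + x) -> 1/2 as x -> 1 from below.
    for x in (0.9, 0.99, 0.999):
        print(sum((-1) ** k * x ** k for k in range(N)))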
https://en.wikipedia.org/wiki/Gadget%20%28computer%20science%29
In computational complexity theory, a gadget is a subunit of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. The component design technique is a method for constructing reductions by using gadgets. The use of gadgets has been traced to a 1954 paper in graph theory by W. T. Tutte, in which Tutte provided gadgets for reducing the problem of finding a subgraph with given degree constraints to a perfect matching problem. However, the "gadget" terminology has a later origin, and does not appear in Tutte's paper. Example Many NP-completeness proofs are based on many-one reductions from 3-satisfiability, the problem of finding a satisfying assignment to a Boolean formula that is a conjunction (Boolean and) of clauses, each clause being the disjunction (Boolean or) of three terms, and each term being a Boolean variable or its negation. A reduction from this problem to a hard problem on undirected graphs, such as the Hamiltonian cycle problem or graph coloring, would typically be based on gadgets in the form of subgraphs that simulate the behavior of the variables and clauses of a given 3-satisfiability instance. These gadgets would then be glued together to form a single graph, a hard instance for the graph problem under consideration. For instance, the problem of testing 3-colorability of graphs may be proven NP-complete by a reduction from 3-satisfiability of this type. The reduction uses two special graph vertices, labeled as "Ground" and "False", that are not part of any gadget. As shown in the figure, the gadget for a variable x consists of two vertices connected in a triangle with the ground vertex; one of the two vertices of the gadget is labeled with x and the other is labeled with the negation of x. The gadget for a clause consists o
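The variable gadget just described is easy to construct programmatically; the Python sketch below builds only the variable gadgets (the clause gadget's description is cut off in the excerpt above, so it is omitted), with networkx assumed and all names illustrative:

    import networkx as nx

    def variable_gadgets(variables):
        g = nx.Graph()
        g.add_edge("Ground", "False")  # the two special shared vertices
        for v in variables:
            # v and its negation form a triangle with Ground, so any
            # proper 3-coloring gives them the two remaining colors:
            # one is colored "true", the other "false".
            g.add_edge(v, f"not_{v}")
            g.add_edge(v, "Ground")
            g.add_edge(f"not_{v}", "Ground")
        return g

    g = variable_gadgets(["x1", "x2", "x3"])
    print(sorted(g.edges()))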
https://en.wikipedia.org/wiki/Adaptive%20replacement%20cache
Adaptive Replacement Cache (ARC) is a page replacement algorithm with better performance than LRU (least recently used). This is accomplished by keeping track of both frequently used and recently used pages plus a recent eviction history for both. The algorithm was developed at the IBM Almaden Research Center. In 2006, IBM was granted a patent for the adaptive replacement cache policy. Summary Basic LRU maintains an ordered list (the cache directory) of resource entries in the cache, with the sort order based on the time of most recent access. New entries are added at the top of the list, after the bottom entry has been evicted. Cache hits move to the top, pushing all other entries down. ARC improves the basic LRU strategy by splitting the cache directory into two lists, T1 and T2, for recently and frequently referenced entries. In turn, each of these is extended with a ghost list (B1 or B2), which is attached to the bottom of the two lists. These ghost lists act as scorecards by keeping track of the history of recently evicted cache entries, and the algorithm uses ghost hits to adapt to recent change in resource usage. Note that the ghost lists only contain metadata (keys for the entries) and not the resource data itself, i.e. as an entry is evicted into a ghost list its data is discarded. The combined cache directory is organised in four LRU lists: T1, for recent cache entries. T2, for frequent entries, referenced at least twice. B1, ghost entries recently evicted from the T1 cache, but are still tracked. B2, similar ghost entries, but evicted from T2. T1 and B1 together are referred to as L1, a combined history of recent single references. Similarly, L2 is the combination of T2 and B2. The whole cache directory can be visualised in a single line:

. . . [ B1  <-[     T1    <-!->      T2   ]->  B2 ] . .
      [ . . . . [ . . . . . . ! . .^. . . . ] . . . . ]
                [   fixed cache size (c)    ]

The inner [ ] brackets indicate actual
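A compact Python sketch of this bookkeeping follows (a teaching aid rather than production code: plain lists stand in for the four LRU lists with index 0 as the LRU end, membership tests are linear rather than O(1), and the adaptation steps use an integer form of the published ratios):

    class ARC:
        def __init__(self, c):
            self.c, self.p = c, 0  # capacity; adaptive target size for T1
            self.t1, self.t2, self.b1, self.b2 = [], [], [], []

        def _replace(self, x):
            # Evict the LRU entry of T1 or T2 into its ghost list,
            # steering by the adaptation parameter p.
            if self.t1 and (len(self.t1) > self.p or
                            (x in self.b2 and len(self.t1) == self.p)):
                self.b1.append(self.t1.pop(0))
            else:
                self.b2.append(self.t2.pop(0))

        def request(self, x):
            if x in self.t1:  # hit: promote to the frequent list
                self.t1.remove(x); self.t2.append(x); return True
            if x in self.t2:  # hit: refresh recency within T2
                self.t2.remove(x); self.t2.append(x); return True
            if x in self.b1:  # ghost hit: recency is winning, grow T1's share
                self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
                self._replace(x); self.b1.remove(x); self.t2.append(x); return False
            if x in self.b2:  # ghost hit: frequency is winning, shrink T1's share
                self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
                self._replace(x); self.b2.remove(x); self.t2.append(x); return False
            # Complete miss: make room, then insert at the MRU end of T1.
            l1 = len(self.t1) + len(self.b1)
            total = l1 + len(self.t2) + len(self.b2)
            if l1 == self.c:
                if len(self.t1) < self.c:
                    self.b1.pop(0); self._replace(x)
                else:
                    self.t1.pop(0)
            elif total >= self.c:
                if total == 2 * self.c:
                    self.b2.pop(0)
                self._replace(x)
            self.t1.append(x)
            return False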
https://en.wikipedia.org/wiki/Zonal%20polynomial
In mathematics, a zonal polynomial is a multivariate symmetric homogeneous polynomial. The zonal polynomials form a basis of the space of symmetric polynomials. They appear as zonal spherical functions of the Gelfand pairs (S_{2n}, H_n) (here, H_n is the hyperoctahedral group) and (GL_d(R), O_d(R)), which means that they describe a canonical basis of the corresponding double coset algebras. They are applied in multivariate statistics. The zonal polynomials are the case α = 2 of the C normalization of the Jack function.
https://en.wikipedia.org/wiki/Lesney%20Products
Lesney Products & Co. Ltd. was a British manufacturing company responsible for the conception, manufacture, and distribution of die-cast toys under the "Matchbox" name. The company existed from 1947 until 1982. History Lesney was founded on 19 January 1947 as an industrial die-casting company by Leslie Smith (6 March 1918 - 26 May 2005) and Rodney Smith (26 August 1917 - 20 July 2013). The name "Lesney" was a blend of both partners' first names (the two, despite sharing a surname, were not related). They had been school friends and served together in the Royal Navy during World War II. Shortly after they founded the company, Rodney Smith introduced to his partner a man named John "Jack" Odell, an engineer he had met in a previous job at D.C.M.T. (another die-casting company). Mr. Odell initially rented a space in the Lesney building to make his own die-casting products, but he joined the company as a partner in that same year. Lesney originally started operations in a derelict pub in north London (The Rifleman), but later, as finances allowed, changed location several times before finally moving to a factory in Hackney which became synonymous with the company. In late 1947 they received a request for parts for a toy gun. As that proved to be a viable alternative to reducing their factory's output during periods in which they received fewer or smaller industrial orders, they started making die cast model toys the following year. Seeing no future for the company, Rodney Smith left in 1951. The first model toy they produced, in 1948, was a die-cast road roller clearly based on a Dinky model (Dinky being the industry leader in die-cast toy cars at that time); in hindsight it proved to be the first of perhaps three major milestones on the path to their eventual destiny. It established transportation as a viable and interesting theme; other similar models followed, including a cowboy-influenced covered wagon and a soap-box racer. The company continued to produce non-toy items; of those marketed directly by
https://en.wikipedia.org/wiki/Turner%20Controversy
The Turner Controversy was a dispute within the Socialist Party of Great Britain (SPGB) regarding the nature of socialism instigated by party member Tony Turner. The dispute ultimately led to an exodus of members who formed the short-lived Movement for Social Integration. When membership and activity were at a peak in the period after the Second World War, Turner began giving lectures for the party on what he envisioned socialism would be like. The content of these lectures led him to develop a position that caused controversy in the party by the early to mid-1950s and which was elaborated by Turner and his supporters in articles in the party’s internal discussion journal of the time, Forum. Three interlocking propositions underpinned the ‘Turnerite’ viewpoint: first, that the society of mass consumerism and automated labour which capitalism had become had to be swept away in its entirety if alienation was to be abolished and a truly human community created (this meant a return to pre-industrial methods of production, on lines inspired by Tolstoy and William Morris’ News From Nowhere); second, that the creation of the new socialist society was not simply in the interests of the working class but was in the interests of the whole of humanity, irrespective of class, a proposition they thought it essential for the Party to recognise in its everyday propaganda; and third, that the means of creating the new peaceful and cooperative society had to be entirely peaceful, indeed pacifist (and in the view of some, possibly even gradual). This view was in direct contradiction to the party's 'Declaration of Principles', which identifies socialism as being the product of class struggle and which claims that the socialist movement will organise for the capture of political power, including power over the state’s coercive machinery, which can be wielded to repress those who resist the imposition of socialism. A series of acrimonious disputes between the ‘Turnerites’ and the majority of the party culmin
https://en.wikipedia.org/wiki/Programming%20by%20demonstration
In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by demonstrating the task to transfer directly instead of programming it through machine commands. The terms programming by example (PbE) and programming by demonstration (PbD) appeared in software development research as early as the mid 1980s to describe a way of specifying a sequence of operations without having to learn a programming language. The usual distinction in the literature between these terms is that in PbE the user gives a prototypical product of the computer execution, such as a row in the desired results of a query, while in PbD the user performs a sequence of actions that the computer must repeat, generalizing it to be used on different data sets. At first the two terms were used interchangeably, but PbE then tended to be adopted mostly by software development researchers while PbD tended to be adopted by robotics researchers. Today, PbE refers to an entirely different concept, supported by new programming languages that are similar to simulators. This framework can be contrasted with Bayesian program synthesis. Robot programming by demonstration The PbD paradigm is attractive to the robotics industry primarily because of the costs involved in the development and maintenance of robot programs. In this field, the operator often has implicit knowledge of the task to achieve (they know how to do it) but does not usually have the programming skills (or the time) required to reconfigure the robot. Demonstrating how to achieve the task through examples thus allows the skill to be learned without explicitly programming each detail, as in the sketch below. The first PbD strategies proposed in robotics were based on teach-in, guiding, or play-back methods that consisted essentially of moving the robot (through a dedicated interface, or manually) through a set of relevant configurations that the robot should adopt sequentially (position, orientation, state of the gr
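To make the teach-in/play-back idea concrete, here is a minimal Python sketch; the SimulatedArm interface is hypothetical, standing in for a real robot API, and the guided path is made up for illustration:

```python
import time

class SimulatedArm:
    """Stand-in for a real robot interface (hypothetical, for illustration)."""
    def __init__(self):
        self.joints = [0.0, 0.0, 0.0]
    def read_joints(self):
        return list(self.joints)
    def move_to(self, joints):
        self.joints = list(joints)
        print("moving to", joints)

def record_demonstration(arm, guided_path, pause=0.0):
    """Teach-in: sample the configurations the operator guides the arm through."""
    trajectory = []
    for joints in guided_path:
        arm.joints = joints          # the operator physically moves the arm
        trajectory.append(arm.read_joints())
        time.sleep(pause)
    return trajectory

def play_back(arm, trajectory):
    """Play-back: the robot re-executes the demonstrated sequence."""
    for joints in trajectory:
        arm.move_to(joints)

arm = SimulatedArm()
demo = record_demonstration(arm, [[0, 0, 0], [0.5, 0.2, 0], [1.0, 0.4, 0.1]])
play_back(arm, demo)
```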
https://en.wikipedia.org/wiki/OSSEC
OSSEC (Open Source HIDS SECurity) is a free, open-source host-based intrusion detection system (HIDS). It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. It provides intrusion detection for most operating systems, including Linux, OpenBSD, FreeBSD, OS X, Solaris and Windows. OSSEC has a centralized, cross-platform architecture allowing multiple systems to be easily monitored and managed. OSSEC has a log analysis engine that is able to correlate and analyze logs from multiple devices and formats. History In June 2008, the OSSEC project and all the copyrights owned by Daniel B. Cid, the project leader, were acquired by Third Brigade, Inc., which promised to continue to contribute to the open source community and to extend commercial support and training to the OSSEC open source community. In May 2009, Trend Micro acquired Third Brigade and the OSSEC project, with promises to keep it open source and free. In 2018, Trend released the domain name and source code to the OSSEC Foundation. The OSSEC project is currently maintained by Atomicorp, which stewards the free and open source version and also offers a commercial version. Software components OSSEC consists of a main application, an agent, and a web interface: the manager (or server), which is required for distributed network or stand-alone installations; the agent, a small program installed on the systems to be monitored; and an agentless mode, which can be used to monitor firewalls, routers, and even Unix systems. OSSEC features Log-based intrusion detection (LID): actively monitors and analyzes data from multiple log data points in real time. Rootkit and malware detection: process- and file-level analysis to detect malicious applications and rootkits. Active response: responds to attacks and changes on the system in real time through multiple mechanisms including firewall policies, integration with third parties such as CDNs, and support por
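As an illustration of the log-based intrusion detection idea (not OSSEC's actual engine, whose decoders and rules are written in XML), a toy Python log watcher might look like the following; the log path, pattern, and threshold are assumptions for the sketch:

```python
import re
from collections import Counter

# Count failed SSH logins per source IP and alert past a threshold.
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 5  # illustrative value, not an OSSEC default

def scan_auth_log(path):
    counts = Counter()
    alerts = []
    with open(path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                ip = match.group(1)
                counts[ip] += 1
                if counts[ip] == THRESHOLD:
                    alerts.append(f"ALERT: {THRESHOLD} failed logins from {ip}")
    return alerts

for alert in scan_auth_log("/var/log/auth.log"):
    print(alert)
```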
https://en.wikipedia.org/wiki/Helix%E2%80%93coil%20transition%20model
Helix–coil transition models are formalized techniques in statistical mechanics developed to describe conformations of linear polymers in solution. The models are usually but not exclusively applied to polypeptides as a measure of the relative fraction of the molecule in an alpha helix conformation versus turn or random coil. The main attraction of investigating alpha helix formation is that one encounters many of the features of protein folding in their simplest form. Most of the helix–coil models contain parameters for the likelihood of helix nucleation from a coil region, and helix propagation along the sequence once nucleated; because polypeptides are directional and have distinct N-terminal and C-terminal ends, propagation parameters may differ in each direction. The two states are: the helix state, characterized by a regular repeating conformation held together by hydrogen bonds (see alpha helix); and the coil state, a disordered conformation in which the chain is randomly arranged (see random coil). Common transition models include the Zimm–Bragg model and the Lifson–Roig model, and their extensions and variations. The energy of a host poly-alanine helix in aqueous solution can be expressed as a function of m, the number of residues in the helix.
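For a feel for how nucleation and propagation weights combine, here is a brute-force Zimm–Bragg-style partition function in Python. This is a minimal sketch, not the transfer-matrix treatment used in practice, and the values of sigma and s are illustrative:

```python
from itertools import product

def zimm_bragg_brute_force(n, sigma, s):
    """Enumerate all helix(1)/coil(0) states of an n-residue chain.
    A helical residue extending an existing helix contributes s;
    a helical residue starting a new helix contributes sigma * s;
    coil residues contribute 1."""
    Z = 0.0
    helical = 0.0
    for conf in product((0, 1), repeat=n):
        weight, prev = 1.0, 0
        for state in conf:
            if state:
                weight *= s if prev else sigma * s
            prev = state
        Z += weight
        helical += weight * sum(conf)
    return Z, helical / (Z * n)   # partition function, mean helical fraction

Z, theta = zimm_bragg_brute_force(12, sigma=1e-3, s=1.2)
print(f"Z = {Z:.4f}, mean helical fraction = {theta:.3f}")
```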
https://en.wikipedia.org/wiki/Intermediate%20dorsal%20cutaneous%20nerve
The intermediate dorsal cutaneous nerve (external dorsal cutaneous branch), the smaller, passes along the lateral part of the dorsum of the foot, and divides into dorsal digital branches, which supply the contiguous sides of the third and fourth, and of the fourth and fifth toes. It also supplies the skin of the lateral side of the foot and ankle, and communicates with the sural nerve. The branches of the superficial peroneal nerve supply the skin of the dorsal surfaces of all the toes excepting the lateral side of the little toe, and the adjoining sides of the great and second toes, the former being supplied by the lateral dorsal cutaneous nerve from the sural nerve, and the latter by the medial branch of the deep peroneal nerve. Frequently some of the lateral branches of the superficial peroneal are absent, and their places are then taken by branches of the sural nerve.
https://en.wikipedia.org/wiki/Medial%20dorsal%20cutaneous%20nerve
The medial dorsal cutaneous nerve (internal dorsal cutaneous branch) passes in front of the ankle-joint, and divides into two dorsal digital branches, one of which supplies the medial side of the great toe, the other, the adjacent side of the second and third toes. It also supplies the integument of the medial side of the foot and ankle, and communicates with the saphenous nerve, and with the deep peroneal nerve.
https://en.wikipedia.org/wiki/Displacement%20activity
Displacement activities occur when an animal experiences high motivation for two or more conflicting behaviours: the resulting displacement activity is usually unrelated to the competing motivations. Birds, for example, may peck at grass when uncertain whether to attack or flee from an opponent; similarly, a human may scratch their head when they do not know which of two options to choose. Displacement activities may also occur when animals are prevented from performing a single behaviour for which they are highly motivated. Displacement activities often involve actions which bring comfort to the animal such as scratching, preening, drinking or feeding. In the assessment of animal welfare, displacement activities are sometimes used as evidence that an animal is highly motivated to perform a behaviour that the environment prevents. One example is that when hungry hens are trained to eat from a particular food dispenser and then find the dispenser blocked, they often begin to pace and preen themselves vigorously. These actions have been interpreted as displacement activities, and similar pacing and preening can be used as evidence of frustration in other situations. Psychiatrist and primatologist Alfonso Troisi proposed that displacement activities can be used as non-invasive measures of stress in primates. He noted that various non-human primates perform self-directed activities such as grooming and scratching in situations likely to involve anxiety and uncertainty, and that these behaviours are increased by anxiogenic (anxiety-producing) drugs and reduced by anxiolytic (anxiety-reducing) drugs. In humans, he noted that similar self-directed behaviour, together with aimless manipulation of objects (chewing pens, twisting rings), can be used as indicators of "stressful stimuli and may reflect an emotional condition of negative affect". More recently the term 'displacement activity' has been widely adopted to describe a form of procrastination. It is commonly us
https://en.wikipedia.org/wiki/Persin
Persin is a fungicidal toxin present in the avocado. Persin is an oil-soluble compound structurally similar to a fatty acid, and it leaches into the body of the fruit from the seeds. The relatively low concentration of persin in the ripe pulp of the avocado fruit is generally considered harmless to humans; negative effects in humans occur primarily in allergic individuals. When persin is consumed by domestic animals through the leaves or bark of the avocado tree, or the skins and seeds of the avocado fruit, it is toxic and dangerous. Pathology Consumption of the leaves and bark of the avocado tree, or the skin and pit of the avocado fruit, has been shown to have the following effects: In birds, which are particularly sensitive to the avocado toxin, the symptoms are increased heart rate, myocardial tissue damage, labored breathing, disordered plumage, unrest, weakness, and apathy; high doses cause acute respiratory syndrome (asphyxia), with death approximately 12 to 24 hours after consumption. Lactating rabbits and mice: non-infectious mastitis and agalactia after consumption of leaves or bark. Rabbits: cardiac arrhythmia, submandibular edema and death after consumption of leaves. Cows and goats: mastitis and decreased milk production after consumption of leaves or bark. Horses: clinical effects occur mainly in mares, and include noninfectious mastitis as well as occasional gastritis and colic. Cats and dogs: mild stomach upset may occur, with the potential to cause heart damage. Hares, pigs, rats, sheep, ostriches, chickens, turkeys and fish: symptoms of intoxication similar to those described above. The lethal dose is not known; the effect differs depending upon the animal species. Mice: non-fatal injury to the lactating mammary gland from 60 to 100 mg/kg persin; necrosis of myocardial fibres with 100 mg/kg persin; 200 mg/kg is lethal. Diagnosis Diagnosis of avocado toxicosis relies on history of exposure and clinical signs. There are no readily available specific tests
https://en.wikipedia.org/wiki/25th%20meridian%20west%20from%20Washington
The 25th meridian of longitude west from Washington is a line of longitude approximately 102.05 degrees west of the Prime Meridian of Greenwich. In the United States of America, the meridian 25 degrees west of the Washington Meridian defines the eastern boundary of the State of Colorado, the western boundary of the State of Kansas, and the western boundary of the State of Nebraska south of the 41st parallel north. History On January 29, 1861, the Act Admitting the State of Kansas to the Union defined the western boundary of the new state as the 25th meridian of longitude west from Washington. This rendered the western portion of the Territory of Kansas unorganized. Thirty days later, on February 28, 1861, the Act Organizing the Territory of Colorado defined the eastern boundary of the new territory as the 25th meridian of longitude west from Washington. The creation of the Colorado Territory moved the western boundary of the Territory of Nebraska south of the 41st parallel north eastward to this meridian. These boundaries on the 25th meridian of longitude west from Washington remained when Nebraska became a state on March 1, 1867, and Colorado became a state on August 1, 1876. Longitude in the United States Latitude and longitude uniquely describe the location of any point on Earth. Latitude may be simply calculated from astronomical or solar observation, either on land or at sea, interrupted only by cloudy skies. Longitude, on the other hand, requires both astronomical or solar observation and some form of time reference to a longitude reference point. John Harrison produced the first precise marine chronometer in 1761. The completion of the first North American telegraph line between Washington, D.C. and Baltimore on May 24, 1844, introduced a technology that could transmit time signals at a large fraction of the speed of light. On September 28, 1850, the United States adopted two primary meridians of longitude for official use: the Greenwich Meridian (thr
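The conversion between the two reference meridians is simple arithmetic; a small Python sketch using the approximately 77.05 degree offset implied by the 102.05 figure above (the precisely legislated value differs slightly):

```python
# Offset of the Washington Meridian west of Greenwich, as implied by
# the article's figure: 102.05 - 25 = 77.05 degrees (approximate).
WASHINGTON_OFFSET_W = 102.05 - 25

def washington_to_greenwich(deg_west_of_washington):
    """Degrees west of Greenwich for a 'west from Washington' longitude."""
    return WASHINGTON_OFFSET_W + deg_west_of_washington

print(washington_to_greenwich(25))  # ~102.05, the eastern boundary of Colorado
```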
https://en.wikipedia.org/wiki/Dental%20abscess
A dental abscess is a localized collection of pus associated with a tooth. The most common type of dental abscess is a periapical abscess, and the second most common is a periodontal abscess. In a periapical abscess, the origin is usually a bacterial infection that has accumulated in the soft, often dead, pulp of the tooth. This can be caused by tooth decay, broken teeth or extensive periodontal disease (or combinations of these factors). A failed root canal treatment may also create a similar abscess. A dental abscess is a type of odontogenic infection, although commonly the latter term is applied to an infection which has spread outside the local region around the causative tooth. Classification The main types of dental abscess are: Periapical abscess: the result of a chronic, localized infection located at the tip, or apex, of the root of a tooth. Periodontal abscess: begins in a periodontal pocket (see: periodontal abscess). Gingival abscess: involving only the gum tissue, without affecting either the tooth or the periodontal ligament (see: periodontal abscess). Pericoronal abscess: involving the soft tissues surrounding the crown of a tooth (see: pericoronitis). Combined periodontic-endodontic abscess: a situation in which a periapical abscess and a periodontal abscess have combined (see: combined periodontic-endodontic lesions). Signs and symptoms The pain is continuous and may be described as extreme, growing, sharp, shooting, or throbbing. Putting pressure or warmth on the tooth may induce extreme pain. The area may be sensitive to touch and possibly swollen as well. This swelling may be present at the base of the tooth, on the gum, or in the cheek, and can sometimes be reduced by applying ice packs. An acute abscess may be painless but still have a swelling present on the gum. It is important to have anything that presents like this checked by a dental professional, as it may become chronic later. In some cases, a tooth abscess may perforate
https://en.wikipedia.org/wiki/Dell%20PowerConnect
The current portfolio of PowerConnect switches is now offered as part of the Dell Networking brand. This page gives an overview of all current and past PowerConnect switches as of August 2013; any updates to the current portfolio are detailed on the Dell Networking page. PowerConnect was a Dell series of network switches. The PowerConnect "classic" switches are based on Broadcom or Marvell Technology Group fabric and firmware. Dell acquired Force10 Networks in 2011 to expand its data center switch products. Dell also offers the PowerConnect M-series, which are switches for the M1000e blade-server enclosure, and the PowerConnect W-series, a Wi-Fi platform based on Aruba Networks technology. Starting in 2013, Dell re-branded its networking portfolio to Dell Networking, which covers both the legacy PowerConnect products and the Force10 products. Product line The Dell PowerConnect line is marketed for business computer networking. The switches connect computers and servers in small to medium-sized networks using Ethernet. The brand name was first announced in July 2001, as traditional personal computer sales were declining. By September 2002 Cisco Systems had cancelled a reseller agreement with Dell. Networking platforms, previously under storage business general manager Darren Thomas, were placed under Dario Zamarian in September 2010. PowerConnect switches are available in pre-configured web-managed models as well as more expensive managed models. There is not a single underlying operating system: the models with a product number up to 5500 run a proprietary OS made by Marvell, while the Broadcom-powered switches run an OS based on VxWorks. With the introduction of the 8100 series, the switches run DNOS, the Dell Networking Operating System, which is based on a Linux kernel for DNOS 5.x and 6.x. Via the PowerConnect W-series Dell offers a range of Aruba WiFi products. The PowerConnect-J (Juniper Networks) and B (Brocade) series are no longer
https://en.wikipedia.org/wiki/Conservative%20system
In mathematics, a conservative system is a dynamical system which stands in contrast to a dissipative system. Roughly speaking, such systems have no friction or other mechanism to dissipate the dynamics, and thus, their phase space does not shrink over time. Precisely speaking, they are those dynamical systems that have a null wandering set: under time evolution, no portion of the phase space ever "wanders away", never to be returned to or revisited. Alternately, conservative systems are those to which the Poincaré recurrence theorem applies. An important special case of conservative systems are the measure-preserving dynamical systems. Informal introduction Informally, dynamical systems describe the time evolution of the phase space of some mechanical system. Commonly, such evolution is given by some differential equations, or quite often in terms of discrete time steps. However, in the present case, instead of focusing on the time evolution of discrete points, one shifts attention to the time evolution of collections of points. One such example would be Saturn's rings: rather than tracking the time evolution of individual grains of sand in the rings, one is instead interested in the time evolution of the density of the rings: how the density thins out, spreads, or becomes concentrated. Over short time-scales (hundreds of thousands of years), Saturn's rings are stable, and are thus a reasonable example of a conservative system and more precisely, a measure-preserving dynamical system. It is measure-preserving, as the number of particles in the rings does not change, and, per Newtonian orbital mechanics, the phase space is incompressible: it can be stretched or squeezed, but not shrunk (this is the content of Liouville's theorem). Formal definition Formally, a measurable dynamical system is conservative if and only if it is non-singular, and has no wandering sets. A measurable dynamical system (X, Σ, μ, τ) is a Borel space (X, Σ) equipped with a sigma-finite m
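The recurrence property mentioned above can be stated precisely. In LaTeX, for a measure-preserving map on a finite measure space (a standard statement of the Poincaré recurrence theorem, not specific to this article's truncated definition):

```latex
% Poincare recurrence for a measure-preserving \tau on a finite measure
% space (X, \Sigma, \mu): almost every point of a positive-measure set A
% returns to A infinitely often.
\mu(A) > 0 \;\Longrightarrow\;
\mu\left(\left\{\, x \in A : \tau^{n}(x) \in A \text{ for infinitely many } n \geq 1 \,\right\}\right) = \mu(A)
```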
https://en.wikipedia.org/wiki/Ambush%20predator
Ambush predators or sit-and-wait predators are carnivorous animals that capture or trap prey via stealth, luring or by (typically instinctive) strategies utilizing an element of surprise. Unlike pursuit predators, who chase to capture prey using sheer speed or endurance, ambush predators avoid fatigue by staying in concealment, waiting patiently for the prey to get near, before launching a sudden overwhelming attack that quickly incapacitates and captures the prey. The ambush is often opportunistic, and may be set by hiding in a burrow, by camouflage, by aggressive mimicry, or by the use of a trap (e.g. a web). The predator then uses a combination of senses to detect and assess the prey, and to time the strike. Nocturnal ambush predators such as cats and snakes have vertical slit pupils helping them to judge the distance to prey in dim light. Different ambush predators use a variety of means to capture their prey, from the long sticky tongues of chameleons to the expanding mouths of frogfishes. Ambush predation is widely distributed in the animal kingdom, spanning some members of numerous groups such as the starfish, cephalopods, crustaceans, spiders, insects such as mantises, and vertebrates such as many snakes and fishes. Strategy Ambush predators usually remain motionless (sometimes hidden) and wait for prey to come within ambush distance before pouncing. Ambush predators are often camouflaged, and may be solitary. Pursuit predation becomes a better strategy than ambush predation when the predator is faster than the prey. Ambush predators use many intermediate strategies. For example, when a pursuit predator is faster than its prey over a short distance, but not in a long chase, then either stalking or ambush becomes necessary as part of the strategy. Bringing the prey within range Concealment Ambush often relies on concealment, whether by staying out of sight or by means of camouflage. Burrows Ambush predators such as trapdoor spiders and Australian
https://en.wikipedia.org/wiki/Environmental%20statistics
Environment statistics is the application of statistical methods to environmental science. It covers procedures for dealing with questions concerning the natural environment in its undisturbed state, the interaction of humanity with the environment, and urban environments. The field of environment statistics has seen rapid growth in the past few decades as a response to increasing concern over the environment in the public, organizational, and governmental sectors. The United Nations' Framework for the Development of Environment Statistics (FDES) defines the scope of environment statistics as follows: the scope of environment statistics covers biophysical aspects of the environment and those aspects of the socio-economic system that directly influence and interact with the environment. The scopes of environment, social and economic statistics overlap. It is not easy – or necessary – to draw a clear line dividing these areas. Social and economic statistics that describe processes or activities with a direct impact on, or direct interaction with, the environment are used widely in environment statistics. They are within the scope of the FDES. Uses Statistical analysis is essential to the field of environmental sciences, allowing researchers to gain an understanding of environmental issues and to research and develop potential solutions to them. The applications of statistical methods to environmental sciences are numerous and varied. Environmental statistics are used in many fields, including health and safety organizations, standards bodies, research institutes, water and river authorities, meteorological organizations, fisheries, protection agencies, and in risk, pollution, regulation and control concerns. Environmental statistics is especially pertinent and widely used in the academic, governmental, regulatory, technological, and consulting industries. Specific applications of statistical analysis within the field of environmental scienc
https://en.wikipedia.org/wiki/TopoFusion
TopoFusion is GPS mapping software designed to plan and analyze trails using topographic maps and GPS tracks. History The software was created in 2002 by two brothers who were outdoor bikepacking enthusiasts and felt software could help them plan better trails. They developed the first version of the software in 2002, and one of them included it as part of his doctoral dissertation on GPS Driven Trail Simulation and Network Production. In 2004 the developers and one other author jointly presented the paper Digital Trail Libraries, which illustrated some of the graph theory algorithms used by the software. The software remains supported, with refined functionality and improved support for additional maps and GPS devices. Features The software was designed to plan and analyze trails. When used for planning, proposed routes may be drawn and checked against different maps, and the results downloaded to a GPS tracking device. TopoFusion is particularly noted for the ease of switching between and combining maps, and for its capability of simultaneously managing multiple trails. After a trail has been completed, the resulting GPS log can be uploaded to TopoFusion and the actual route analyzed, with the addition of any photographic images recorded en route. The product is marketed as a fully featured 'professional' version and a more basic version with reduced functionality at lower cost. A fully featured trial version, which is not time limited, is also available; it restricts usability by watermarking map display tiles with the word 'DEMO'. The software is available for Microsoft Windows only; however, TopoFusion has claimed users have reported success using VMware Fusion and Parallels emulation on Mac OS. Applications TopoFusion has been found useful by those engaged in the sport of geocaching. The software has been used to assist analysis of GPS routes. A survey reported in 2004 of GPS tracking of motorists visiting the Acadia National Park in Maine, United States was assisted by us
https://en.wikipedia.org/wiki/Melde%27s%20experiment
Melde's experiment is a scientific experiment carried out in 1859 by the German physicist Franz Melde on the standing waves produced in a taut cable originally set oscillating by a tuning fork, later improved by connection to an electric vibrator. This experiment, "a lecture-room standby", attempted to demonstrate that mechanical waves undergo interference phenomena. In the experiment, mechanical waves traveling in opposite directions form immobile points, called nodes. These waves were called standing waves by Melde since the positions of the nodes and loops (the points where the cord vibrated) stayed static. Standing waves were first discovered by Franz Melde, who coined the term "standing wave" around 1860. Melde generated parametric oscillations in a string by employing a tuning fork to periodically vary the tension at twice the resonance frequency of the string. History Wave phenomena in nature have been investigated for centuries, some being among the most controversial topics in the history of science, as is the case with the wave nature of light. In the 17th century, Sir Isaac Newton described light through a corpuscular theory. The English physicist Thomas Young later contested Newton's theories in the 18th century and established the scientific basis on which the wave theories rest. At the end of the 19th century, at the peak of the Second Industrial Revolution, electricity, as the defining technology of the era, offered a new contribution to the wave theories. This advance allowed Franz Melde to recognize the phenomena of wave interference and the creation of standing waves. Later, the Scottish physicist James Clerk Maxwell, in his study of the wave nature of light, succeeded in expressing waves and the electromagnetic spectrum in mathematical form. Principle A string undergoing transverse vibration illustrates many features common to all vibrating acoustic systems, whether these are the vibrations of a guitar string or the standing wave
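The textbook relations behind the experiment (standard results, not taken from Melde's original paper) can be written as:

```latex
% Two equal waves travelling in opposite directions superpose into a
% standing wave with fixed nodes:
y(x,t) = A\sin(kx - \omega t) + A\sin(kx + \omega t) = 2A\sin(kx)\cos(\omega t)
% Nodes sit where \sin(kx) = 0. For a string of length L under tension T
% with linear mass density \mu, the resonant frequencies are
f_n = \frac{n}{2L}\sqrt{\frac{T}{\mu}}, \qquad n = 1, 2, 3, \dots
```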
https://en.wikipedia.org/wiki/Debug%20symbol
A debug symbol is a special kind of symbol that attaches additional information to the symbol table of an object file, such as a shared library or an executable. This information allows a symbolic debugger to gain access to information from the source code of the binary, such as the names of identifiers, including variables and routines. The symbolic information may be compiled together with the module's binary file, or distributed in a separate file, or simply discarded during the compilation and/or linking. This information can be helpful while trying to investigate and fix a crashing application or any other fault. Embedded symbols Debug symbols typically include not only the name of a function or global variable, but also the name of the source code file in which the symbol occurs, as well as the line number at which it is defined. Other information includes the type of the symbol (integer, float, function, exception, etc.), the scope (block scope or global scope), the size, and, for classes, the name of the class, and the methods and members in it. All of this additional information can take up quite a bit of space, especially the filenames and line numbers. Thus, binaries with debug symbols can become quite large, often several times the stripped file size. To avoid this extra size, most operating system distributions ship binaries that are stripped, i.e. from which all of the debugging symbols have been removed. This is accomplished, for example, with the strip command in Unix. Some compilers will output the symbolic debugging information into a separate file, rather than placing it together with the binary. SysV ABI The SysV application binary interface (ABI) includes a specification for the format of debug symbols. This allows any compatible compiler or assembler to create debug symbols in a standardized format, and for any debugger, such as the GNU Debugger (GDB), to gain access and display these symbols. For example, part of the important debug inf
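As a concrete illustration, the following Python sketch walks the section header table of a 64-bit little-endian ELF binary and reports whether the symbol table and DWARF debug sections are present. Offsets follow the standard ELF64 layout; the /bin/ls path is just an example, and 32-bit or big-endian files are not handled:

```python
import struct

def elf64_section_names(path):
    """List section names of a 64-bit little-endian ELF file by walking
    its section header table (minimal sketch, no error handling)."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF", "not an ELF file"
    e_shoff = struct.unpack_from("<Q", data, 0x28)[0]       # section header table offset
    e_shentsize, e_shnum, e_shstrndx = struct.unpack_from("<HHH", data, 0x3A)

    def section(i):
        base = e_shoff + i * e_shentsize
        sh_name = struct.unpack_from("<I", data, base)[0]   # offset into .shstrtab
        sh_offset, sh_size = struct.unpack_from("<QQ", data, base + 24)
        return sh_name, sh_offset, sh_size

    _, strtab_off, _ = section(e_shstrndx)                  # .shstrtab holds the names
    names = []
    for i in range(e_shnum):
        sh_name, _, _ = section(i)
        end = data.index(b"\x00", strtab_off + sh_name)
        names.append(data[strtab_off + sh_name:end].decode())
    return names

names = elf64_section_names("/bin/ls")
print("stripped" if ".symtab" not in names else "has full symbol table")
print("has DWARF debug info" if ".debug_info" in names else "no embedded debug info")
```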
https://en.wikipedia.org/wiki/Thomas%20William%20K%C3%B6rner
Thomas William Körner (born 17 February 1946) is a British pure mathematician and the author of three books on popular mathematics. He is titular Professor of Fourier Analysis in the University of Cambridge and a Fellow of Trinity Hall. He is the son of the philosopher Stephan Körner and of Edith Körner. He studied at Trinity Hall, Cambridge, and wrote his PhD thesis Some Results on Kronecker, Dirichlet and Helson Sets there in 1971, studying under Nicholas Varopoulos. In 1972 he won the Salem Prize. He has written academic mathematics books aimed at undergraduates: Fourier Analysis; Exercises for Fourier Analysis; A Companion to Analysis; Vectors, Pure and Applied; and Calculus for the Ambitious. He has also written three books aimed at secondary school students: the popular 1996 title The Pleasures of Counting; Naive Decision Making (published 2008), on probability, statistics and game theory; and Where Do Numbers Come From? (published October 2019).
https://en.wikipedia.org/wiki/John%20Marburger
John Harmen "Jack" Marburger III (February 8, 1941 – July 28, 2011) was an American physicist who directed the Office of Science and Technology Policy in the administration of President George W. Bush, serving as the Science Advisor to the President. His tenure was marred by controversy regarding his defense of the administration against allegations from over two dozen Nobel Laureates, amongst others, that scientific evidence was being suppressed or ignored in policy decisions, including those relating to stem cell research and global warming. However, he has also been credited with keeping the political effects of the September 11 attacks from harming science research—by ensuring that tighter visa controls did not hinder the movement of those engaged in scientific research—and with increasing awareness of the relationship between science and government. He also served as the President of Stony Brook University from 1980 until 1994, and director of Brookhaven National Laboratory from 1998 until 2001. Early life Marburger was born on Staten Island, New York, to Virginia Smith and John H. Marburger Jr., and grew up in Severna Park, Maryland. He attended Princeton University, graduating in 1962 with a B.A. in physics, followed by a Ph.D. in applied physics from Stanford University in 1967. After completing his education, he served as a professor of physics and electrical engineering at the University of Southern California beginning in 1966, specializing in the theoretical physics of nonlinear optics and quantum optics, and co-founded the Center for Laser Studies at that institution. He rose to become chairman of the physics department in 1972, and then dean of the College of Letters, Arts and Sciences in 1976. He was engaged as a public speaker on science, including hosting a series of educational television programs on CBS. He was also outspoken on campus issues, and was designated the university's spokesperson during a scandal over preferential treatment of athlet
https://en.wikipedia.org/wiki/Lifson%E2%80%93Roig%20model
In polymer science, the Lifson–Roig model is a helix-coil transition model applied to the alpha helix-random coil transition of polypeptides; it is a refinement of the Zimm–Bragg model that recognizes that a polypeptide alpha helix is stabilized by a hydrogen bond only once three consecutive residues have adopted the helical conformation. To consider three consecutive residues each with two states (helix and coil), the Lifson–Roig model uses a 4x4 transfer matrix instead of the 2x2 transfer matrix of the Zimm–Bragg model, which considers only two consecutive residues. However, the simple nature of the coil state allows this to be reduced to a 3x3 matrix for most applications. The Zimm–Bragg and Lifson–Roig models are but the first two in a series of analogous transfer-matrix methods in polymer science that have also been applied to nucleic acids and branched polymers. The transfer-matrix approach is especially elegant for homopolymers, since the statistical mechanics may be solved exactly using a simple eigenanalysis. Parameterization The Lifson–Roig model is characterized by three parameters: the statistical weight for nucleating a helix, the weight for propagating a helix and the weight for forming a hydrogen bond, which is granted only if three consecutive residues are in a helical state. Weights are assigned at each position in a polymer as a function of the conformation of the residue in that position and as a function of its two neighbors. A statistical weight of 1 is assigned to the "reference state" of a coil unit whose neighbors are both coils, and a "nucleation" unit is defined (somewhat arbitrarily) as two consecutive helical units neighbored by a coil. A major modification of the original Lifson–Roig model introduces "capping" parameters for the helical termini, in which the N- and C-terminal capping weights may vary independently. The correlation matrix for this modification can be represented as a matrix M, reflecting the statistical weights
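A brute-force evaluation of the Lifson–Roig weighting scheme makes the parameterization concrete. This is a minimal Python sketch rather than the transfer-matrix solution; the values of v and w are illustrative, and chain ends are treated as coil:

```python
from itertools import product

def lifson_roig_brute_force(n, v, w):
    """Enumerate all helix(1)/coil(0) states of an n-residue chain.
    A helical residue whose neighbours are both helical gets weight w;
    any other helical residue gets weight v; coil residues get weight 1."""
    Z = 0.0
    helical = 0.0
    for conf in product((0, 1), repeat=n):
        weight = 1.0
        for i, state in enumerate(conf):
            if state:
                left = conf[i - 1] if i > 0 else 0
                right = conf[i + 1] if i < n - 1 else 0
                weight *= w if (left and right) else v
        Z += weight
        helical += weight * sum(conf)
    return Z, helical / (Z * n)   # partition function, mean helicity

Z, theta = lifson_roig_brute_force(10, v=0.05, w=1.3)
print(f"Z = {Z:.4f}, mean helical fraction = {theta:.3f}")
```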
https://en.wikipedia.org/wiki/Multiplicity%20%28statistical%20mechanics%29
In statistical mechanics, multiplicity (also called statistical weight) refers to the number of microstates corresponding to a particular macrostate of a thermodynamic system. Commonly denoted $\Omega$, it is related to the configuration entropy of an isolated system via Boltzmann's entropy formula $S = k_B \ln \Omega$, where $S$ is the entropy and $k_B$ is Boltzmann's constant. Example: the two-state paramagnet A simplified model of the two-state paramagnet provides an example of the process of calculating the multiplicity of a particular macrostate. This model consists of a system of $N$ microscopic dipoles $\mu$ which may either be aligned or anti-aligned with an externally applied magnetic field $B$. Let $N_\uparrow$ represent the number of dipoles that are aligned with the external field and $N_\downarrow$ represent the number of anti-aligned dipoles. The energy of a single aligned dipole is $-\mu B$, while the energy of an anti-aligned dipole is $+\mu B$; thus the overall energy of the system is $U = (N_\downarrow - N_\uparrow)\mu B$. The goal is to determine the multiplicity as a function of $U$; from there, the entropy and other thermodynamic properties of the system can be determined. However, it is useful as an intermediate step to calculate multiplicity as a function of $N_\uparrow$ and $N_\downarrow$. This approach shows that the number of available macrostates is $N + 1$. For example, in a very small system with $N = 2$ dipoles, there are three macrostates, corresponding to $N_\uparrow = 0, 1, 2$. Since the $N_\uparrow = 0$ and $N_\uparrow = 2$ macrostates require both dipoles to be either anti-aligned or aligned, respectively, the multiplicity of either of these states is 1. However, in the $N_\uparrow = 1$ macrostate, either dipole can be chosen for the aligned dipole, so the multiplicity is 2. In the general case, the multiplicity of a state, or the number of microstates, with $N_\uparrow$ aligned dipoles follows from combinatorics, resulting in $\Omega = \binom{N}{N_\uparrow} = \frac{N!}{N_\uparrow! \, N_\downarrow!}$, where the second step follows from the fact that $N_\downarrow = N - N_\uparrow$. Since $N_\uparrow + N_\downarrow = N$, the energy can be related to $N_\uparrow$ and $N_\downarrow$ as follows: $U = (N_\downarrow - N_\uparrow)\mu B = (N - 2N_\uparrow)\mu B$. Thus the final expression for multiplicity as a function of internal energy is $\Omega(N, U) = \frac{N!}{\left(\frac{N}{2} - \frac{U}{2\mu B}\right)! \left(\frac{N}{2} + \frac{U}{2\mu B}\right)!}$. This can be used to calculate entropy in accordance with Boltzmann's entropy f
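The combinatorics translates directly into code; a minimal Python example using math.comb (the value of N and the sampled macrostates are arbitrary):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant in J/K

def multiplicity(N, N_up):
    """Number of microstates with N_up of N dipoles aligned: C(N, N_up)."""
    return math.comb(N, N_up)

def entropy(N, N_up):
    """Boltzmann's entropy formula S = k_B * ln(Omega)."""
    return k_B * math.log(multiplicity(N, N_up))

N = 100
for N_up in (0, 1, 50):
    print(f"N_up = {N_up:2d}: Omega = {multiplicity(N, N_up):.3e}, "
          f"S = {entropy(N, N_up):.3e} J/K")
```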
https://en.wikipedia.org/wiki/Racah%20parameter
When an atom has more than one electron there will be some electrostatic repulsion between those electrons. The amount of repulsion varies from atom to atom, depending upon the number and spin of the electrons and the orbitals they occupy. The total repulsion can be expressed in terms of three parameters A, B and C, which are known as the Racah parameters after Giulio Racah, who first described them. They are generally obtained empirically from gas-phase spectroscopic studies of atoms. They are often used in transition-metal chemistry to describe the repulsion energy associated with an electronic term. For example, the interelectronic repulsion of a $^3P$ term is $A + 7B$ and that of a $^3F$ term is $A - 8B$; the difference between them is therefore $15B$. Definition The Racah parameters are defined as $A = F_0 - 49F_4$, $B = F_2 - 5F_4$, $C = 35F_4$, where $F^k$ are Slater integrals, $F^k = \int_0^\infty \!\! \int_0^\infty \frac{r_<^k}{r_>^{k+1}} R(r_1)^2 R(r_2)^2 \, r_1^2 r_2^2 \, dr_1 \, dr_2$, and $F_0 = F^0$, $F_2 = F^2/49$, $F_4 = F^4/441$ are the Slater–Condon parameters; here $R$ is the normalized radial part of an electron orbital, and $r_<$ ($r_>$) denotes the lesser (greater) of $r_1$ and $r_2$. See also: Tanabe–Sugano diagram, Nephelauxetic effect
https://en.wikipedia.org/wiki/Intergenerational%20equity
Intergenerational equity, in economic, psychological, and sociological contexts, is the idea of fairness or justice between generations. The concept can be applied to fairness in dynamics between children, youth, adults, and seniors. It can also be applied to fairness between generations currently living and future generations. Conversations about intergenerational equity occur across several fields. It is often discussed in public economics, especially with regard to transition economics, social policy, and government budget-making. Many cite the growing U.S. national debt as an example of intergenerational inequity, as future generations will shoulder the consequences. Intergenerational equity is also explored in environmental concerns, including sustainable development and climate change. The continued depletion of natural resources that has occurred in the past century will likely be a significant burden for future generations. Intergenerational equity is also discussed with regard to standards of living, specifically on inequities in the living standards experienced by people of different ages and generations. Intergenerational equity issues also arise in the arenas of elderly care, social justice, and housing affordability. Political rights The debate around political rights for youth, children and future generations includes discussions around when people should have political power, and how much they should have. Adam Benforado argues, for example, that giving children more political rights than adults results in everyone being better off. Those seeking rights or greater consideration for future generations discuss methods such as deliberative democracy or a dedicated agency. Public economics usage History Since the first recorded debt issuance in Sumer in 1796 BC, one of the penalties for failure to repay a loan has been debt bondage. In some instances, this repayment of financial debt with labor included the debtor's children, essentially condemn
https://en.wikipedia.org/wiki/Ascendency
Ascendency or ascendancy is a quantitative attribute of an ecosystem, defined as a function of the ecosystem's trophic network. Ascendency is derived using mathematical tools from information theory. It is intended to capture in a single index the ability of an ecosystem to prevail against disturbance by virtue of its combined organization and size. One way of depicting ascendency is to regard it as "organized power", because the index represents the magnitude of the power that is flowing within the system towards particular ends, as distinct from power that is dissipated naturally. Almost half a century earlier, Alfred J. Lotka (1922) had suggested that a system's capacity to prevail in evolution was related to its ability to capture useful power. Ascendency can thus be regarded as a refinement of Lotka's supposition that also takes into account how power is actually being channeled within a system. In mathematical terms, ascendency is the product of the aggregate amount of material or energy being transferred in an ecosystem times the coherency with which the outputs from the members of the system relate to the set of inputs to the same components (Ulanowicz 1986). Coherence is gauged by the average mutual information shared between inputs and outputs (Rutledge et al. 1976). Originally, it was thought that ecosystems increased uniformly in ascendency as they developed, but subsequent empirical observation has suggested that all sustainable ecosystems are confined to a narrow "window of vitality" (Ulanowicz 2002). Systems with relative values of ascendency plotting below the window tend to fall apart due to lack of significant internal constraints, whereas systems above the window tend to be so "brittle" that they become vulnerable to external perturbations. Sensitivity analysis on the components of the ascendency reveals the controlling transfers within the system in the sense of Liebig (Ulanowicz and Baird 1999). That is, ascendency can be used to identify
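A minimal numpy sketch of the standard formula, ascendency = total system throughput times average mutual information (the three-compartment flow matrix is made up for illustration):

```python
import numpy as np

def ascendency(T):
    """Ascendency of a flow network. T[i, j] is the flow from
    compartment i to compartment j.
    A = sum_ij T_ij * log2( T_ij * T.. / (T_i. * T_.j) )."""
    T = np.asarray(T, dtype=float)
    total = T.sum()                      # total system throughput
    row = T.sum(axis=1, keepdims=True)   # total output of each compartment
    col = T.sum(axis=0, keepdims=True)   # total input to each compartment
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = T * np.log2(T * total / (row * col))
    return np.nansum(terms)              # zero flows contribute nothing

# Toy three-compartment network (arbitrary flow units)
T = [[0, 10, 2],
     [1,  0, 8],
     [4,  1, 0]]
print(f"Ascendency = {ascendency(T):.3f} (bits x flow units)")
```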
https://en.wikipedia.org/wiki/Operational%20transformation
Operational transformation (OT) is a technology for supporting a range of collaboration functionalities in advanced collaborative software systems. OT was originally invented for consistency maintenance and concurrency control in collaborative editing of plain text documents. Its capabilities have been extended and its applications expanded to include group undo, locking, conflict resolution, operation notification and compression, group-awareness, HTML/XML and tree-structured document editing, collaborative office productivity tools, application-sharing, and collaborative computer-aided media design tools. In 2009 OT was adopted as a core technique behind the collaboration features in Apache Wave and Google Docs. History Operational transformation was pioneered by C. Ellis and S. Gibbs in the GROVE (GRoup Outline Viewing Edit) system in 1989. Several years later, some correctness issues were identified and several approaches were independently proposed to solve them, followed by another decade of continuous effort to extend and improve OT by a community of dedicated researchers. In 1998, a Special Interest Group on Collaborative Editing (SIGCE) was set up to promote communication and collaboration among CE and OT researchers. Since then, SIGCE has held annual CE workshops in conjunction with major CSCW (computer-supported cooperative work) conferences such as ACM CSCW, GROUP and ECSCW. System architecture Collaboration systems utilizing operational transformation typically use replicated document storage, where each client has its own copy of the document; clients operate on their local copies in a lock-free, non-blocking manner, and the changes are then propagated to the rest of the clients; this ensures high responsiveness for each client in an otherwise high-latency environment such as the Internet. When a client receives the changes propagated from another client, it typically transforms the changes before executing them, as in the sketch that follows; the transformation ensur
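The core of OT is the transformation function itself. A minimal Python sketch for concurrent character insertions (site identifiers break ties; this illustrates the textbook inclusion-transformation idea, not any particular system's implementation):

```python
def transform_insert_insert(op1, op2):
    """Transform insert op1 against a concurrent insert op2 so that op1
    can be applied after op2. Each op is (position, character, site_id);
    the site_id tiebreak makes all replicas converge."""
    p1, c1, s1 = op1
    p2, c2, s2 = op2
    if p1 < p2 or (p1 == p2 and s1 < s2):
        return (p1, c1, s1)        # op2 landed at or after op1's position
    return (p1 + 1, c1, s1)        # shift right past op2's insertion

def apply_insert(text, op):
    p, c, _ = op
    return text[:p] + c + text[p:]

# Two sites concurrently edit "abc": site 1 inserts 'X' at 1, site 2 inserts 'Y' at 2.
doc = "abc"
o1, o2 = (1, "X", 1), (2, "Y", 2)
# Each site applies its own op first, then the transformed remote op.
site1 = apply_insert(apply_insert(doc, o1), transform_insert_insert(o2, o1))
site2 = apply_insert(apply_insert(doc, o2), transform_insert_insert(o1, o2))
assert site1 == site2 == "aXbYc"   # both replicas converge
```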
https://en.wikipedia.org/wiki/Dewar%E2%80%93Chatt%E2%80%93Duncanson%20model
The Dewar–Chatt–Duncanson model is a model in organometallic chemistry that explains the chemical bonding in transition metal alkene complexes. The model is named after Michael J. S. Dewar, Joseph Chatt and L. A. Duncanson. The alkene donates electron density into a π-acid metal d-orbital from a π-symmetry bonding orbital between the carbon atoms. The metal donates electrons back from a (different) filled d-orbital into the empty π* antibonding orbital. Both of these effects tend to reduce the carbon-carbon bond order, leading to an elongated C−C distance and a lowering of its vibrational frequency. In Zeise's salt K[PtCl3(C2H4)].H2O the C−C bond length has increased to 134 picometres from 133 pm for ethylene. In the nickel compound Ni(C2H4)(PPh3)2 the value is 143 pm. The interaction also causes carbon atoms to "rehybridise" from sp2 towards sp3, which is indicated by the bending of the hydrogen atoms on the ethylene back away from the metal. In silico calculations show that 75% of the binding energy is derived from the forward donation and 25% from backdonation. This model is a specific manifestation of the more general π backbonding model.
https://en.wikipedia.org/wiki/Tax-benefit%20model
A tax-benefit model is a form of microsimulation model. It is usually based on a representative or administrative data set and certain policy rules. These models are used to cost certain policy reforms and to determine the winners and losers of reform. One example is EUROMOD, which models taxes and benefits for 27 EU states, and its post-Brexit offshoot, UKMOD. Overview Tax-benefit models are used by policy makers and researchers to examine the effects of proposed or hypothetical policy changes on income inequality, poverty and the government budget. Their primary advantage over conventional cross-country comparison methods is that they are very powerful for evaluating policy changes not only ex post, but also ex ante. Generally, tax-benefit models can simulate income taxes, property taxes, social contributions, social assistance, income benefits and other benefits. The underlying micro-data are obtained mainly through household surveys. These data include information about households' income, expenditure and family composition. Most tax-benefit models are operated by governments or research institutions; very few are publicly available. Depending on their purpose, tax-benefit models may or may not ignore behavioral responses of individuals. General framework The basic steps in conducting research using a simple tax-benefit model, illustrated by the sketch below, are: gross micro-data describing households' income, expenditure and family composition are collected and processed; these data enter a tax-benefit model; the first simulation takes place; the disposable income of each household is calculated and the results of the simulation are summarized; a set of rules of the changed policies enters the model and a second simulation takes place; the disposable income of each household is recalculated and the results summarized; the impact of the set of policy changes is evaluated by comparing the results from the two simulations. A dynamic tax-benefit model PoliSim's webpage p
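A toy microsimulation in Python illustrates the two-run comparison (the households, tax rules and benefit rules are entirely hypothetical):

```python
# Hypothetical flat-tax reform evaluated on a made-up household sample.
households = [
    {"id": 1, "gross": 18_000},
    {"id": 2, "gross": 35_000},
    {"id": 3, "gross": 90_000},
]

def baseline(gross):
    """Baseline rules: 20% tax above a 10,000 allowance, plus a 5,000
    means-tested benefit withdrawn at 50p per pound of gross income."""
    tax = max(0.0, gross - 10_000) * 0.20
    benefit = max(0.0, 5_000 - 0.5 * gross)
    return gross - tax + benefit

def reform(gross):
    """Reform scenario: 25% flat tax above a higher 15,000 allowance."""
    tax = max(0.0, gross - 15_000) * 0.25
    benefit = max(0.0, 5_000 - 0.5 * gross)
    return gross - tax + benefit

for h in households:
    base, ref = baseline(h["gross"]), reform(h["gross"])
    print(f"household {h['id']}: disposable {base:,.0f} -> {ref:,.0f} "
          f"({'gains' if ref > base else 'loses'} {abs(ref - base):,.0f})")
```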
https://en.wikipedia.org/wiki/Rod%20Burstall
Rodney Martineau "Rod" Burstall FRSE (born 1934) is a British computer scientist and one of four founders of the Laboratory for Foundations of Computer Science at the University of Edinburgh. Biography Burstall studied physics at the University of Cambridge, then an M.Sc. in operational research at Birmingham University. He worked for three years before returning to Birmingham University to earn a Ph.D. in 1966 with thesis titled Heuristic and Decision Tree Methods on Computers: Some Operational Research Applications under the supervision of N. A. Dudley and K. B. Haley. Burstall was an early and influential proponent of functional programming, pattern matching, and list comprehension, and is known for his work with Robin Popplestone on POP, an innovative programming language developed at Edinburgh around 1970, and later work with John Darlington on NPL and program transformation and with David MacQueen and Don Sannella on Hope, a precursor to Standard ML, Miranda, and Haskell. In 1995, he was elected a Fellow of the Royal Society of Edinburgh. Burstall retired in 2000, becoming Professor Emeritus. In 2002 David Rydeheard and Don Sannella assembled a festschrift for Rod Burstall that was published in Formal Aspects of Computing. In 2009, he was awarded the ACM SIGPLAN Programming Language Achievement Award. Books May 1971: Programming in POP-11, Edinburgh University Press. 1980: (with Alan Bundy) Artificial Intelligence: An Introductory Course, Edinburgh University Press. 1988: (with D. E. Rydeheard) Computational Category Theory, Prentice-Hall, .
https://en.wikipedia.org/wiki/Racah%20W-coefficient
Racah's W-coefficients were introduced by Giulio Racah in 1942. These coefficients have a purely mathematical definition. In physics they are used in calculations involving the quantum mechanical description of angular momentum, for example in atomic theory. The coefficients appear when there are three sources of angular momentum in the problem. For example, consider an atom with one electron in an s orbital and one electron in a p orbital. Each electron has electron spin angular momentum and in addition the p orbital has orbital angular momentum (an s orbital has zero orbital angular momentum). The atom may be described by LS coupling or by jj coupling as explained in the article on angular momentum coupling. The transformation between the wave functions that correspond to these two couplings involves a Racah W-coefficient. Apart from a phase factor, Racah's W-coefficients are equal to Wigner's 6-j symbols, so any equation involving Racah's W-coefficients may be rewritten using 6-j symbols. This is often advantageous because the symmetry properties of 6-j symbols are easier to remember. Racah coefficients are related to recoupling coefficients by $\langle (j_1 j_2) J_{12}, j_3; J | j_1, (j_2 j_3) J_{23}; J \rangle = \sqrt{(2J_{12}+1)(2J_{23}+1)}\, W(j_1 j_2 J j_3; J_{12} J_{23})$. Recoupling coefficients are elements of a unitary transformation and their definition is given in the next section. Racah coefficients have more convenient symmetry properties than the recoupling coefficients (but less convenient than the 6-j symbols). Recoupling coefficients Coupling of two angular momenta $\mathbf{j}_1$ and $\mathbf{j}_2$ is the construction of simultaneous eigenfunctions of $\mathbf{J}^2$ and $J_z$, where $\mathbf{J} = \mathbf{j}_1 + \mathbf{j}_2$, as explained in the article on Clebsch–Gordan coefficients. The result is $|(j_1 j_2) J M\rangle = \sum_{m_1 m_2} |j_1 m_1\rangle |j_2 m_2\rangle \langle j_1 m_1 j_2 m_2 | J M \rangle$, where $|j_1 - j_2| \le J \le j_1 + j_2$ and $M = m_1 + m_2$. Coupling of three angular momenta $\mathbf{j}_1$, $\mathbf{j}_2$, and $\mathbf{j}_3$ may be done by first coupling $\mathbf{j}_1$ and $\mathbf{j}_2$ to $\mathbf{J}_{12}$ and next coupling $\mathbf{J}_{12}$ and $\mathbf{j}_3$ to total angular momentum $\mathbf{J}$. Alternatively, one may first couple $\mathbf{j}_2$ and $\mathbf{j}_3$ to $\mathbf{J}_{23}$ and next couple $\mathbf{j}_1$ and $\mathbf{J}_{23}$ to $\mathbf{J}$. Both coupling schemes result in complete orthonormal bases for the $(2j_1+1)(2j_2+1)(2j_3+1)$ dimensional space spanned by $|j_1 m_1\rangle |j_2 m_2\rangle |j_3 m_3\rangle$. Hence, the two total angular momentum bases
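SymPy ships exact implementations of both quantities; the sketch below computes a Racah W-coefficient and the matching 6-j symbol. The phase convention shown is the one used by SymPy's implementation, stated here as an assumption to verify rather than asserted by the article:

```python
from sympy.physics.wigner import racah, wigner_6j

j1, j2, j3, j4, j5, j6 = 1, 2, 1, 1, 1, 1
sixj = wigner_6j(j1, j2, j3, j4, j5, j6)   # {j1 j2 j3; j4 j5 j6}
W = racah(j1, j2, j5, j4, j3, j6)          # W(j1 j2 j5 j4; j3 j6)
phase = (-1) ** (j1 + j2 + j4 + j5)

print("6-j symbol:              ", sixj)
print("phase * Racah W (assumed):", phase * W)   # expected to coincide
```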
https://en.wikipedia.org/wiki/6-j%20symbol
Wigner's 6-j symbols were introduced by Eugene Paul Wigner in 1940 and published in 1965. They are defined as a sum over products of four Wigner 3-j symbols; the summation is over all six $m_i$ allowed by the selection rules of the 3-j symbols. They are closely related to the Racah W-coefficients, which are used for recoupling 3 angular momenta, although Wigner 6-j symbols have higher symmetry and therefore provide a more efficient means of storing the recoupling coefficients. Their relationship is given by $\begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6 \end{Bmatrix} = (-1)^{j_1+j_2+j_4+j_5}\, W(j_1 j_2 j_5 j_4; j_3 j_6)$. Symmetry relations The 6-j symbol is invariant under any permutation of the columns. The 6-j symbol is also invariant if upper and lower arguments are interchanged in any two columns. These equations reflect the 24 symmetry operations of the automorphism group that leave the associated tetrahedral Yutsis graph with 6 edges invariant: mirror operations that exchange two vertices and swap an adjacent pair of edges. The 6-j symbol is zero unless j1, j2, and j3 satisfy the triangle conditions, i.e. $|j_1 - j_2| \le j_3 \le j_1 + j_2$. In combination with the symmetry relation for interchanging upper and lower arguments, this shows that triangle conditions must also be satisfied for the triads (j1, j5, j6), (j4, j2, j6), and (j4, j5, j3). Furthermore, the sum of the elements of each triad must be an integer. Therefore, the members of each triad are either all integers or contain one integer and two half-integers. Special case When j6 = 0 the expression for the 6-j symbol is $\begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & 0 \end{Bmatrix} = \frac{\delta_{j_2 j_4}\, \delta_{j_1 j_5}}{\sqrt{(2j_1+1)(2j_2+1)}} \, (-1)^{j_1+j_2+j_3} \, \{j_1\, j_2\, j_3\}$. The triangular delta $\{j_1\, j_2\, j_3\}$ is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and zero otherwise. The symmetry relations can be used to find the expression when another j is equal to zero. Orthogonality relation The 6-j symbols satisfy the orthogonality relation $\sum_{j_3} (2j_3+1) \begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6 \end{Bmatrix} \begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6' \end{Bmatrix} = \frac{\delta_{j_6 j_6'}}{2j_6+1} \, \{j_1\, j_5\, j_6\} \, \{j_4\, j_2\, j_6\}$. Asymptotics A remarkable formula for the asymptotic behavior of the 6-j symbol was first conjectured by Ponzano and Regge and later proven by Roberts. The asymptotic formula applies when all six quantum numbers j1, ..., j6 are taken to be large
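These symmetries are easy to check numerically with SymPy's exact wigner_6j (a small sketch; the particular j values are arbitrary but satisfy the triangle conditions):

```python
from sympy.physics.wigner import wigner_6j

ref = wigner_6j(1, 2, 1, 1, 1, 1)       # {1 2 1; 1 1 1}

# Invariance under a permutation of columns (swap columns 1 and 2):
swapped = wigner_6j(2, 1, 1, 1, 1, 1)   # {2 1 1; 1 1 1}
# Invariance under interchanging upper/lower arguments in columns 1 and 2:
flipped = wigner_6j(1, 1, 1, 1, 2, 1)   # {1 1 1; 1 2 1}

assert float(swapped) == float(ref) and float(flipped) == float(ref)
print("{1 2 1; 1 1 1} =", ref)
```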
https://en.wikipedia.org/wiki/Goodman%20and%20Kruskal%27s%20gamma
In statistics, Goodman and Kruskal's gamma is a measure of rank correlation, i.e., the similarity of the orderings of the data when ranked by each of the quantities. It measures the strength of association of the cross tabulated data when both variables are measured at the ordinal level. It makes no adjustment for either table size or ties. Values range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association. This statistic (which is distinct from Goodman and Kruskal's lambda) is named after Leo Goodman and William Kruskal, who proposed it in a series of papers from 1954 to 1972. Definition The estimate of gamma, G, depends on two quantities: Ns, the number of pairs of cases ranked in the same order on both variables (number of concordant pairs), and Nd, the number of pairs of cases ranked in reversed order on both variables (number of reversed pairs), where "ties" (cases where either of the two variables in the pair are equal) are dropped. Then $G = \frac{N_s - N_d}{N_s + N_d}$. This statistic can be regarded as the maximum likelihood estimator for the theoretical quantity $\gamma = \frac{P_s - P_d}{P_s + P_d}$, where Ps and Pd are the probabilities that a randomly selected pair of observations will place in the same or opposite order respectively, when ranked by both variables. Critical values for the gamma statistic are sometimes found by using an approximation, whereby a transformed value t of the statistic is referred to the Student t distribution: $t \approx G \sqrt{\frac{N_s + N_d}{n(1 - G^2)}}$, where n is the number of observations (not the number of pairs). Yule's Q A special case of Goodman and Kruskal's gamma is Yule's Q, also known as the Yule coefficient of association, which is specific to 2×2 matrices. Consider a 2×2 contingency table of events with counts a and b in the first row and c and d in the second row, where each value is a count of an event's frequency. Yule's Q is given by $Q = \frac{ad - bc}{ad + bc}$. Although computed in the same fashion as Goodman and Kruskal's gamma, it has a slightly broader interpretatio
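A direct Python implementation of the pair-counting definition (ties dropped, as above; the sample data are made up):

```python
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """Gamma rank correlation: (Ns - Nd) / (Ns + Nd), with tied pairs dropped."""
    ns = nd = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        dx, dy = x1 - x2, y1 - y2
        if dx * dy > 0:
            ns += 1          # concordant pair
        elif dx * dy < 0:
            nd += 1          # discordant pair
        # pairs tied on either variable are ignored
    return (ns - nd) / (ns + nd)

x = [1, 2, 2, 3, 4]
y = [1, 3, 2, 4, 2]
print(goodman_kruskal_gamma(x, y))   # 0.5 for this toy data
```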
https://en.wikipedia.org/wiki/9-j%20symbol
In physics, Wigner's 9-j symbols were introduced by Eugene Paul Wigner in 1937. They are related to recoupling coefficients in quantum mechanics involving four angular momenta. Recoupling of four angular momentum vectors Coupling of two angular momenta $\mathbf{j}_1$ and $\mathbf{j}_2$ is the construction of simultaneous eigenfunctions of $\mathbf{J}^2$ and $J_z$, where $\mathbf{J} = \mathbf{j}_1 + \mathbf{j}_2$, as explained in the article on Clebsch–Gordan coefficients. Coupling of three angular momenta can be done in several ways, as explained in the article on Racah W-coefficients. Using the notation and techniques of that article, total angular momentum states that arise from coupling the angular momentum vectors $\mathbf{j}_1$, $\mathbf{j}_2$, $\mathbf{j}_3$, and $\mathbf{j}_4$ may be written as $|((j_1 j_2) J_{12}, (j_3 j_4) J_{34}) J M\rangle$. Alternatively, one may first couple $\mathbf{j}_1$ and $\mathbf{j}_3$ to $\mathbf{J}_{13}$ and $\mathbf{j}_2$ and $\mathbf{j}_4$ to $\mathbf{J}_{24}$, before coupling $\mathbf{J}_{13}$ and $\mathbf{J}_{24}$ to $\mathbf{J}$: $|((j_1 j_3) J_{13}, (j_2 j_4) J_{24}) J M\rangle$. Both sets of functions provide a complete, orthonormal basis for the space with dimension $(2j_1+1)(2j_2+1)(2j_3+1)(2j_4+1)$ spanned by $|j_1 m_1\rangle |j_2 m_2\rangle |j_3 m_3\rangle |j_4 m_4\rangle$. Hence, the transformation between the two sets is unitary and the matrix elements of the transformation are given by the scalar products of the functions. As in the case of the Racah W-coefficients the matrix elements are independent of the total angular momentum projection quantum number ($M$). Symmetry relations A 9-j symbol is invariant under reflection about either diagonal as well as even permutations of its rows or columns. An odd permutation of rows or columns yields a phase factor $(-1)^S$, where $S = \sum_{i=1}^{9} j_i$ is the sum of all nine angular momenta. For example, swapping the first two rows gives $\begin{Bmatrix} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6 \\ j_7 & j_8 & j_9 \end{Bmatrix} = (-1)^S \begin{Bmatrix} j_4 & j_5 & j_6 \\ j_1 & j_2 & j_3 \\ j_7 & j_8 & j_9 \end{Bmatrix}$. Reduction to 6j symbols The 9-j symbols can be calculated as sums over triple-products of 6-j symbols, where the summation extends over all values admitted by the triangle conditions in the factors. Special case When $j_9 = 0$ the 9-j symbol is proportional to a 6-j symbol. Orthogonality relation The 9-j symbols satisfy an orthogonality relation analogous to that of the 6-j symbols; in it, the triangular delta is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and zero otherwise. 3n-j symbols The 6-j symbol is the first representative ($n = 2$) of the $3n$-j symbols, which are defined as sums of products of $2n$ of Wigner's 3-jm coefficients. The sums are ove
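SymPy's exact wigner_9j makes the reflection symmetry easy to verify (the j values here are arbitrary but chosen so every row and column satisfies the triangle conditions):

```python
from sympy.physics.wigner import wigner_9j

# A 9-j symbol and its reflection about the main diagonal (transpose):
a = wigner_9j(1, 1, 2,
              2, 1, 1,
              1, 2, 1)
b = wigner_9j(1, 2, 1,
              1, 1, 2,
              2, 1, 1)
assert float(a) == float(b)   # invariant under reflection about a diagonal
print(a)
```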
https://en.wikipedia.org/wiki/Cladoptosis
Cladoptosis (from Ancient Greek κλάδος (klados) "branch" and πτῶσις (ptosis) "falling"; sometimes pronounced with the p silent) is the regular shedding of branches. It is the counterpart for branches of the familiar process of regular leaf shedding by deciduous trees. As in leaf shedding, an abscission layer forms, and the branch is shed cleanly. Functions of cladoptosis Cladoptosis is thought to have three possible functions: self-pruning (i.e. programmed plant senescence), drought response (characteristic of xerophytes) and liana defence. Self-pruning is the shedding of branches that are shaded or diseased, which are potentially a drain on the resources of the tree. Drought response is similar to the leaf-fall response of drought-deciduous trees; however, leafy shoots are shed in place of leaves. Western red cedar (Thuja plicata) provides an example, as do other members of the family Cupressaceae. In tropical forests, infestation of tree canopies by woody climbers or lianas can be a serious problem. Cladoptosis – by giving a clean bole with no support for climbing plants – may be an adaptation against lianas, as in the case of Castilla. See also: Abscission; Marcescence – the opposite phenomenon, in which withered branches (or leaves) stay on
https://en.wikipedia.org/wiki/Life%20annuity
A life annuity is an annuity, or series of payments at fixed intervals, paid while the purchaser (or annuitant) is alive. The majority of life annuities are insurance products sold or issued by life insurance companies; however, substantial case law indicates that annuity products are not necessarily insurance products. Annuities can be purchased to provide an income during retirement, or originate from a structured settlement of a personal injury lawsuit. Life annuities may be sold in exchange for the immediate payment of a lump sum (single-payment annuity) or a series of regular payments (flexible payment annuity), prior to the onset of the annuity. The payment stream from the issuer to the annuitant has an unknown duration based principally upon the date of death of the annuitant. At this point the contract will terminate and the remainder of the fund accumulated is forfeited unless there are other annuitants or beneficiaries in the contract. Thus a life annuity is a form of longevity insurance, where the uncertainty of an individual's lifespan is transferred from the individual to the insurer, which reduces its own uncertainty by pooling many clients. History The instrument's evolution has been long and continues as part of actuarial science. Ulpian is credited with generating an actuarial life annuity table between AD 211 and 222. Medieval German and Dutch cities and monasteries raised money by the sale of life annuities, and it was recognized that pricing them was difficult. The early practice for selling this instrument did not consider the age of the nominee, thereby raising interesting concerns. These concerns attracted the attention of several prominent mathematicians over the years, such as Huygens, Bernoulli, de Moivre and others: even Gauss and Laplace had an interest in matters pertaining to this instrument. It seems that Johan de Witt was the first writer to compute the value of a life annuity as the sum of expected discounted future payments, while Halley used the first mortality table drawn from experience for that calculation.
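De Witt's formulation, the value of a life annuity as the sum of expected discounted future payments, translates directly into a few lines of code. The following is a minimal illustrative sketch (the mortality assumption and the names are invented for the example, not drawn from any actuarial standard):

```python
def life_annuity_value(survival_probs, payment=1.0, interest=0.03):
    """Expected present value of a life annuity paying `payment` at the
    end of each year: sum over years of payment * discount * P(alive).
    survival_probs[t] is the probability of surviving t + 1 years."""
    v = 1.0 / (1.0 + interest)
    return sum(payment * v ** (t + 1) * p_alive
               for t, p_alive in enumerate(survival_probs))

# Toy mortality table: a flat 90% chance of surviving each further year.
survival = [0.9 ** (t + 1) for t in range(80)]
print(round(life_annuity_value(survival, payment=1000.0), 2))
```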
https://en.wikipedia.org/wiki/LocalTalk-to-Ethernet%20bridge
A LocalTalk-to-Ethernet bridge is a network bridge that joins the physical layer of the AppleTalk networking used by previous generations of Apple Computer products to an Ethernet network. This was an important class of products in the late 1980s and early 1990s, before Ethernet support became universal on the Mac lineup. Some LocalTalk-to-Ethernet bridges only performed AppleTalk bridging. Others were also able to bridge other protocols using ad-hoc standards. One example was the MacIP system that allowed LocalTalk-based Macs to send and receive TCP/IP (internet) packets using the bridges as a go-between. Examples Hardware devices: Asante: AsanteTalk. Cayman Systems: GatorBox. Compatible Systems: Ether Route/TCP, Ether Route II, RISC Router 3000E. Dayna Communications: EtherPrint, EtherPrint Plus, EtherPrint-T, EtherPrint-T Plus. Farallon: EtherPrint, EtherWave LocalTalk Adapter, InterRoute/5, StarRouter, EtherMac iPrint Adapter LT. FOCUS Enhancements: EtherLAN PRINT. Hayes: InterBridge. Kinetics: FastPath (in later years available from Shiva). Sonic Systems: microPrint, microBridge TCP/IP. Transware: EtherWay. Tribe Computer Works: TribeStar. Webster Computer Corporation: MultiGate, MultiPort Gateway, MultiPort/LT. Software in the MacTCP era (<1995): Apple IP Gateway from Apple Computer; SuperBridge/TCP from Sonic Systems. Software in the Open Transport era (>1995): Internet Gateway from Vicomsoft; IPNetRouter from Sustainable Softworks; LocalTalk Bridge from Apple Computer. Other software: macipgw; Netatalk.
https://en.wikipedia.org/wiki/Vai%C5%9Be%E1%B9%A3ika%20S%C5%ABtra
Vaiśeṣika Sūtra (Sanskrit: वैशेषिक सूत्र), also called Kanada sutra, is an ancient Sanskrit text at the foundation of the Vaisheshika school of Hindu philosophy. The sutra was authored by the Hindu sage Kanada, also known as Kashyapa. According to some scholars, he flourished before the advent of Buddhism because the Vaiśeṣika Sūtra makes no mention of Buddhism or Buddhist doctrines; however, the details of Kanada's life are uncertain, and the Vaiśeṣika Sūtra was likely compiled sometime between the 6th and 2nd centuries BCE, and finalized in the currently existing version before the start of the common era. A number of scholars have commented on it since the beginning of the common era; the earliest commentary known is the Padartha Dharma Sangraha of Prashastapada. Another important secondary work on the Vaiśeṣika Sūtra is Maticandra's Dasha padartha sastra, which exists both in Sanskrit and in a Chinese translation made in 648 CE by Yuanzhuang. The Vaiśeṣika Sūtra is written in an aphoristic sutra style, and presents its theories on the creation and existence of the universe using naturalistic atomism, applying logic and realism; it is one of the earliest known systematic realist ontologies in human history. The text discusses motions of different kinds and the laws that govern them, the meaning of dharma, a theory of epistemology, the basis of Atman (self, soul), and the nature of yoga and moksha. The explicit mention of motion as the cause of all phenomena in the world, and the several propositions about it, make it one of the earliest texts on physics. Etymology The name Vaiśeṣika Sūtra (Sanskrit: वैशेषिक सूत्र) is derived from viśeṣa, विशेष, which means "particularity", which is to be contrasted with "universality". The classes of particularity and universality belong to different categories of experience. Manuscripts Till the 1950s, only one manuscript of the Vaiseshika sutra was known, and this manuscript was part of a bhasya by the 15th-century scholar Sankaramisra. Scholars had doubted its authenticity.
https://en.wikipedia.org/wiki/EVA%20%28benchmark%29
EVA was a continuously running benchmark project for assessing the quality and value of protein structure prediction and secondary structure prediction methods. Methods for predicting both secondary structure and tertiary structure - including homology modeling, protein threading, and contact order prediction - were compared to results from each week's newly solved protein structures deposited in the Protein Data Bank. The project aimed to determine the prediction accuracy that would be expected for non-expert users of common, publicly available prediction webservers; this is similar to the related LiveBench project and stands in contrast to the biennial CASP experiment, which aims to identify the maximum accuracy achievable by prediction experts.
https://en.wikipedia.org/wiki/LiveBench
LiveBench is a continuously running benchmark project for assessing the quality of protein structure prediction and secondary structure prediction methods. LiveBench focuses mainly on homology modeling and protein threading but also includes secondary structure prediction, comparing publicly available webserver output to newly deposited protein structures in the Protein Data Bank. Like the EVA project and unlike the related CASP and CAFASP experiments, LiveBench is intended to study the accuracy of predictions that would be obtained by non-expert users of publicly available prediction methods. A major advantage of LiveBench and EVA over CASP projects, which run once every two years, is their comparatively large data set.
https://en.wikipedia.org/wiki/Adductor%20hiatus
In human anatomy, the adductor hiatus, also known as the hiatus magnus, is a hiatus (gap) between the adductor magnus muscle and the femur that allows the passage of the femoral vessels from the anterior thigh to the posterior thigh and then the popliteal fossa. It is the termination of the adductor canal and lies about 8–13.5 cm superior to the adductor tubercle. Structure Kale et al. classified the adductor hiatus according to its shape and the structures surrounding it. An adductor hiatus is described as oval or bridging depending on the shape of the upper boundary. It can also be described as muscular or fibrous depending on whether the surrounding structure is the muscular part or the tendinous part of the adductor magnus muscle; a given hiatus may thus be, for example, oval and fibrous, or bridging and muscular. Four structures are associated with the adductor hiatus. However, only two structures enter and then leave through the hiatus, namely the femoral artery and femoral vein. Those vessels become the popliteal vessels (popliteal artery and popliteal vein) immediately after they leave the hiatus, where they form a network of anastomoses called the genicular arteries. The genicular arteries supply the knee joint. The other two structures that are associated with the adductor hiatus are the descending genicular artery and the saphenous nerve. The saphenous nerve does not leave through the adductor hiatus but penetrates superficially halfway through the adductor canal. Clinical significance Fracture of distal femur Fracture at the supracondylar area of the femur, where the adductor part of the adductor magnus attaches, will most likely cause damage to the femoral artery and may cause impairment of the blood supply to the lower leg. The popliteal artery can also be damaged by a fracture of the distal femur. Popliteal artery entrapment syndrome Abnormality in the relationship between the adductor hiatus and the popliteal artery can give rise to popliteal artery entrapment syndrome.
https://en.wikipedia.org/wiki/Klaus%20Sutner
Klaus Sutner is a Teaching Professor of Computer Science at Carnegie Mellon University, and is also a former Associate Dean of Undergraduate Programs for the Carnegie Mellon School of Computer Science. His research interests include cellular automata, discrete mathematics as pertains to computation, and computational complexity theory. He developed a hybrid Mathematica/C++ application named Automata that manipulates finite-state machines and their syntactic semigroups. He "has survived five decades in the martial arts. Barely." and is the head instructor at the Three-Rivers Aikikai.
https://en.wikipedia.org/wiki/Aczel%27s%20anti-foundation%20axiom
In the foundations of mathematics, Aczel's anti-foundation axiom is an axiom set forth by Peter Aczel in 1988, as an alternative to the axiom of foundation in Zermelo–Fraenkel set theory. It states that every accessible pointed directed graph corresponds to exactly one set. In particular, according to this axiom, the graph consisting of a single vertex with a loop corresponds to a set that contains only itself as element, i.e. a Quine atom. A set theory obeying this axiom is necessarily a non-well-founded set theory. Accessible pointed graphs An accessible pointed graph is a directed graph with a distinguished vertex (the "root") such that for any node in the graph there is at least one path in the directed graph from the root to that node. The anti-foundation axiom postulates that each such directed graph corresponds to the membership structure of exactly one set. For example, the directed graph with only one node and an edge from that node to itself corresponds to a set of the form x = {x}. See also von Neumann universe
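A quick way to make the definition concrete: the following sketch (illustrative code, not from any set-theory library) checks whether a directed graph with a chosen root is an accessible pointed graph, using the one-vertex, one-loop graph that the axiom decodes as the Quine atom x = {x}:

```python
from collections import deque

def is_apg(children, root):
    """True iff every node of the directed graph is reachable from root.
    `children` maps each node to an iterable of nodes it points to."""
    seen, frontier = {root}, deque([root])
    while frontier:
        for child in children.get(frontier.popleft(), ()):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    all_nodes = set(children) | {c for cs in children.values() for c in cs}
    return all_nodes <= seen

print(is_apg({"x": ["x"]}, "x"))           # True: decodes to x = {x}
print(is_apg({"a": ["b"], "c": []}, "a"))  # False: c is unreachable
```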
https://en.wikipedia.org/wiki/SORL1
Sortilin-related receptor, L(DLR class) A repeats containing is a protein that in humans is encoded by the SORL1 gene. SORL1 (also known as SORLA, SORLA1, or LR11; SORLA or SORL1 are used, often interchangeably, for the protein product of the SORL1 gene) is a 2214-residue Type I transmembrane protein receptor that binds certain peptides and integral membrane protein cargo in the endolysosomal pathway and delivers them for sorting to the retromer multi-protein complex; the gene is predominantly expressed in the central nervous system. Endosomal traffic jams linked to SORL1 retromer dysfunction are the earliest cellular pathology in both familial and the more common sporadic Alzheimer’s patients. Retromer regulates protein trafficking from the early endosome either back to the trans-Golgi (retrograde) or back to the plasma membrane (direct recycling). Two forms of retromer are known: the VPS26A retromer and the VPS26B retromer, the latter being dedicated to direct recycling in the CNS. SORL1 is a multi-domain single-pass membrane protein whose large ectodomain resides primarily in endosomal tubules, being connected by its transmembrane helical domain and cytoplasmic tail to the VPS26 retromer subunit on the outer endosomal membrane. The age at onset of SORL1 mutation carriers varies, which has complicated segregation analyses. Nevertheless, protein-truncating variants (PTVs) are observed almost exclusively in AD patients, indicating that SORL1 is haploinsufficient. However, most variants are rare missense variants that can be benign or risk-increasing, but recent reports have indicated that some variants are causative for disease. In fact, specific missense variants have been observed only in AD cases, some of which may have a dominant negative effect. ALZFORUM has created an interactive web page that maps all of the currently known variants onto a schematic of the SORLA domain structure, along with information for each one.
https://en.wikipedia.org/wiki/Joint%20Institute%20for%20Nuclear%20Astrophysics
The Joint Institute for Nuclear Astrophysics Center for the Evolution of the Elements (JINA-CEE) is a multi-institutional Physics Frontiers Center funded by the US National Science Foundation since 2014. From 2003 to 2014, JINA was a collaboration between Michigan State University, the University of Notre Dame, and the University of Chicago, directed by Michael Wiescher from the University of Notre Dame. Principal investigators were Hendrik Schatz, Timothy Beers and Jim Truran. JINA-CEE is a collaboration between Michigan State University, the University of Notre Dame, the University of Washington and Arizona State University and a number of associated institutions, centers, and national laboratories in the US and across the world, with the goal of bringing together nuclear experimenters, nuclear theorists, astrophysical modelers, astrophysics theorists, and observational astronomers to address the open scientific questions at the intersection of nuclear physics and astrophysics. JINA-CEE serves as an intellectual center and focal point for the field of nuclear astrophysics, and is intended to enable scientific work and exchange of data and information across field boundaries within its collaboration, and for the field as a whole through workshops, schools, and web-based tools and databases. It is led by director Hendrik Schatz with Michael Wiescher, Timothy Beers, Sanjay Reddy and Frank Timmes as principal investigators. Most JINA-CEE nuclear physics experiments are carried out at the Nuclear Science Laboratory at the University of Notre Dame, the National Superconducting Cyclotron Laboratory at Michigan State University and the ATLAS/CARIBU facility at Argonne National Laboratory. JINA-CEE is heavily involved in observations with the Apache Point Observatory within the framework of extensions to the Sloan Digital Sky Survey, LAMOST in China, SkyMapper in Australia, and the Hubble Space Telescope. Among many other observational data sources, JINA-CEE also makes heavy use of X-ray observations.
https://en.wikipedia.org/wiki/The%20Journal%20of%20Food%20Science%20Education
The Journal of Food Science Education was an online peer-reviewed scientific journal published by the Institute of Food Technologists (Chicago, Illinois). It was established in 2002 as the first scientific electronic journal of the Institute that was published online only. Its main focus was the education methods involved in food science and technology. This involved the recruitment of future food scientists, education at the undergraduate and postgraduate levels, and continuing education through distance learning, e-learning, and lifelong learning. The January 2007 number was the journal's first themed issue, devoted to improving food science education in the K-12 grade range (kindergarten, primary, middle school, and secondary). Editors Wayne T. Iwaoka was the inaugural scientific editor and served from 2000 to 2005. Grady W. Chism III served as scientific editor from 2006 to 2013 and Shelly J. Schmidt was scientific editor from 2014 to 2021.
https://en.wikipedia.org/wiki/Comprehensive%20Reviews%20in%20Food%20Science%20and%20Food%20Safety
Comprehensive Reviews in Food Science and Food Safety is an online peer-reviewed scientific journal published by the Institute of Food Technologists (Chicago, Illinois) that was established in 2002. Its main focus is food science and food safety. This includes nutrition, genetics, food microbiology, food chemistry, history, and food engineering. Editors Its first editor was David R. Lineback (University of Maryland, College Park), who held the position from 2002 to 2004. From 2004 to 2006, R. Paul Singh (University of California, Davis) served as editor. The journal was edited by Manfred Kroger (Pennsylvania State University) from 2006 to 2018. Mary Ellen Camire (University of Maine, Orono) has been the editor since 2018. Abstracting and indexing The journal is indexed and abstracted in a number of bibliographic databases. See also Food safety
https://en.wikipedia.org/wiki/Role%20Class%20Model
In computer science, the role class model is a role analysis pattern described (but not invented) by Francis G. Mossé in his article on Modelling Roles. The role class pattern provides the ability for a class to play multiple roles and to embed the role characteristic in a dedicated class. In our society, as we built it, roles are everywhere. Anyone trying to work in a team to create something has a role. In cinematography, many different persons take part in the creation of a film: the film director, the producer, actors, playwrights, etc. Even our state organisations are based on various roles: in a republic, you have a president, ministers, deputies, etc. Dealing with these situations is one of the most frequently encountered problems in object-oriented analysis. Francis G. Mossé has identified five role analysis patterns that can be used to solve most role-related problems: Role Inheritance, Association Roles, Role Classes, Generalised Role Classes and Association Class Roles. They all have various degrees of constraint, flexibility or power, which together offer a complete solution to most role-related problems. Intent A model that allows a class to play one or more roles at the same time. A role - as defined by Francis Mossé in Modelling Roles - is the concept of a purpose that a class can have in a certain context. Context The following example is given: many persons work on a film, each of them with a different role. Unlike in some other situations, a person is not restricted to one role: one could be both the director and a character in a film. Modelling roles for such a concept requires that a class can play more than a single role. A solution using inheritance to conceptualise a role - cf. the Inheritance Role Model - is not possible, as this would allow a person to play only a single role. The inheritance role model can say that a character, who is a person, is playing in a film, but there is no way to say that the same person also holds a second role, such as being both the director and a character (see the sketch below).
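A minimal sketch of the pattern in Python (class and attribute names are illustrative, not Mossé's notation): the role-specific information lives in a dedicated role class that links a person to a film, so nothing prevents one person from holding several roles at once:

```python
class Person:
    def __init__(self, name):
        self.name = name
        self.roles = []          # a person may accumulate many roles

class FilmRole:
    """Dedicated role class: carries the role characteristic itself."""
    def __init__(self, person, film, role_name):
        self.person, self.film, self.role_name = person, film, role_name
        person.roles.append(self)

welles = Person("Orson Welles")
FilmRole(welles, "Citizen Kane", "director")
FilmRole(welles, "Citizen Kane", "character")  # same person, second role
print([r.role_name for r in welles.roles])     # ['director', 'character']
```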
https://en.wikipedia.org/wiki/Aggresome
In eukaryotic cells, an aggresome refers to an aggregation of misfolded proteins in the cell, formed when the protein degradation system of the cell is overwhelmed. Aggresome formation is a highly regulated process that possibly serves to organize misfolded proteins into a single location. Biogenesis Correct folding requires proteins to assume one particular structure from a constellation of possible but incorrect conformations. The failure of polypeptides to adopt their proper structure is a major threat to cell function and viability. Consequently, elaborate systems have evolved to protect cells from the deleterious effects of misfolded proteins. Upon synthesis, proteins are in their linear and non-functional form, called a nascent protein. They must undergo co-translational folding as quickly as possible in order to become a functional, three-dimensional structure. Normally folded proteins are referred to as being in their native structure. In this state, they have undergone a hydrophobic collapse process, indicated by outward-facing hydrophilic components and inward-facing hydrophobic components. The solubility of proteins is an important biochemical aspect of protein folding, as it has been shown to affect the formation of protein aggregates. Contrary to native structures, a misfolded protein will often have outward-facing hydrophobic regions, which act as attractants to other insoluble proteins. There are some chaperones which identify aggregates by recognizing their hydrophobic regions; these chaperones may work as solubilizers. Cells mainly deploy three mechanisms to counteract misfolded proteins: up-regulating chaperones to assist protein refolding, proteolytic degradation of the misfolded/damaged proteins involving ubiquitin–proteasome and autophagy–lysosome systems, and formation of detergent-insoluble aggresomes by transporting the misfolded proteins along microtubules to a region near the nucleus. Intracellular deposition of misfolded protein aggregates is implicated in the pathology of many neurodegenerative diseases.
https://en.wikipedia.org/wiki/Hydraulic%20clearance
Flow in narrow clearances is of vital importance in hydraulic system component design. The flow in a narrow circular clearance of a spool valve can be calculated according to the formula below if the clearance height is negligible compared to the width of the clearance, as is the case for most of the clearances in hydraulic pumps, hydraulic motors, and spool valves. The flow is considered to be laminar, and the formula is valid for a spool valve when the spool is steady. For a concentric spool/valve housing position, i.e. when the height/radial clearance c is the same all around (units as per SI conventions): Qi = (ΔP · π · d · c³) ÷ (12 · ν · ρ · L), where Q = volumetric flow rate (m³/s), ΔP = P1 − P2 = pressure drop over the clearance (N/m², Pa), d = valve spool diameter (m), c = clearance height (radial clearance) (m), ν = kinematic viscosity of the oil (m²/s), ρ = density of the oil (kg/m³), and L = clearance length (m). As can be seen from the formula, the clearance height c has much more influence on the leakage than the length. The formula assumes purely laminar flow conditions, and it is also valid for gases. With contact between the spool and the wall (a fully eccentric clearance), the value that is generally used for practical calculations is Qe = 2.5 · Qi.
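The formula is straightforward to evaluate. A small sketch (the function name and example figures are illustrative, chosen only to be of a realistic order of magnitude):

```python
import math

def spool_leakage(dp, d, c, nu, rho, L, eccentric=False):
    """Laminar leakage flow (m^3/s) through an annular spool clearance,
    using Qi = dp*pi*d*c^3 / (12*nu*rho*L); the fully eccentric case is
    taken as Qe = 2.5*Qi, per the text."""
    qi = dp * math.pi * d * c ** 3 / (12.0 * nu * rho * L)
    return 2.5 * qi if eccentric else qi

# 20 MPa drop, 10 mm spool, 5 um radial clearance, 20 mm overlap, ISO VG 32 oil
q = spool_leakage(dp=20e6, d=0.01, c=5e-6, nu=32e-6, rho=870, L=0.02)
print(f"{q * 1000 * 60:.4f} L/min")  # m^3/s converted to litres per minute
```

Note how the cubic dependence on c dominates: doubling the radial clearance multiplies the leakage by eight, while doubling the clearance length only halves it.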
https://en.wikipedia.org/wiki/32nd%20meridian%20west%20from%20Washington
The 32nd meridian of longitude west from Washington is a line of longitude approximately 109°3′5.194″ west of the Prime Meridian of Greenwich. In the United States of America, the meridian 32 degrees west of the Washington Meridian defines the western boundaries of the State of Colorado and the State of New Mexico and the eastern boundaries of the State of Utah and the State of Arizona. History On February 28, 1861, the Act Organizing the Territory of Colorado defined the western boundary of the new territory as the 32nd meridian of longitude west from Washington. The creation of the Colorado Territory moved the eastern boundary of the Territory of Utah west to this meridian. Two years later, on February 24, 1863, the Act Organizing the Territory of Arizona defined the eastern boundary of the new territory as the 32nd meridian of longitude west from Washington. This in turn moved the western boundary of the Territory of New Mexico east to this meridian. These boundaries on the 32nd meridian of longitude west from Washington remained when Colorado became a state on August 1, 1876, Utah became a state on January 4, 1896, New Mexico became a state on January 6, 1912, and Arizona became a state on February 14, 1912. The point of intersection of these four states, known as the Four Corners, is the only place in the United States where four states touch. Longitude in the United States Latitude and longitude uniquely describe the location of any point on Earth. Latitude may be simply calculated from astronomical or solar observation, either on land or at sea, interrupted only by cloudy skies. Longitude, on the other hand, requires both astronomical or solar observation and some form of time reference to a longitude reference point. John Harrison produced the first precise marine chronometer in 1761. The completion of the first North American electrical telegraph line between Washington and Baltimore on May 24, 1844, introduced a technology that could transmit time signals over long distances, allowing longitude differences to be determined with high precision.
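The arithmetic relating the two meridian systems is simple: a Greenwich longitude is the Washington meridian's own offset plus the longitude west of Washington. A tiny sketch using only the figures given above (the offset is implied by the article's value of approximately 109°3′5.194″ for 32° west of Washington):

```python
# Washington meridian offset implied above: 77 deg 3' 5.194" west of Greenwich
WASHINGTON_W = 77 + 3 / 60 + 5.194 / 3600

def washington_to_greenwich(deg_west_of_washington):
    """Longitude west of Greenwich for a meridian given west of Washington."""
    return WASHINGTON_W + deg_west_of_washington

lon = washington_to_greenwich(32)
d = int(lon)
m = int((lon - d) * 60)
s = ((lon - d) * 60 - m) * 60
print(f"{d} deg {m}' {s:.3f}\" W")  # 109 deg 3' 5.194" W
```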
https://en.wikipedia.org/wiki/Security%20controls
Security controls are safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets. In the field of information security, such controls protect the confidentiality, integrity and availability of information. Systems of controls can be referred to as frameworks or standards. Frameworks can enable an organization to manage security controls across different types of assets with consistency. Types of security controls Security controls can be classified by various criteria. For example, controls are occasionally classified by when they act relative to a security breach: Before the event, preventive controls are intended to prevent an incident from occurring, e.g. by locking out unauthorized intruders; During the event, detective controls are intended to identify and characterize an incident in progress, e.g. by sounding the intruder alarm and alerting the security guards or police; After the event, corrective controls are intended to limit the extent of any damage caused by the incident, e.g. by recovering the organization to normal working status as efficiently as possible. Security controls can also be classified according to their characteristics, for example: Physical controls, e.g. fences, doors, locks and fire extinguishers; Procedural or administrative controls, e.g. incident response processes, management oversight, security awareness and training; Technical or logical controls, e.g. user authentication (login) and logical access controls, antivirus software, firewalls; Legal and regulatory or compliance controls, e.g. privacy laws, policies and clauses. For more information on security controls in computing, see Defense in depth (computing) and Information security. Information security standards and control frameworks Numerous information security standards promote good security practices and define frameworks or systems to structure the analysis and design for managing information security controls.
https://en.wikipedia.org/wiki/Sublimatory
A sublimatory or sublimation apparatus is equipment, commonly laboratory glassware, for purification of compounds by selective sublimation. In principle, the operation resembles purification by distillation, except that the products do not pass through a liquid phase. Overview A typical sublimation apparatus separates a mix of appropriate solid materials in a vessel in which it applies heat under a controllable atmosphere (air, vacuum or inert gas). If the material is not at first solid, then it may freeze under reduced pressure. Conditions are so chosen that the solid volatilizes and condenses as a purified compound on a cooled surface, leaving the non-volatile residual impurities or solid products behind. The form of the cooled surface often is a so-called cold finger, which for very low-temperature sublimation may actually be cryogenically cooled. If the operation is a batch process, then the sublimed material can be collected from the cooled surface once heating ceases and the vacuum is released. Although this may be quite convenient for small quantities, adapting sublimation processes to large volume is generally not practical, with the apparatus becoming extremely large and generally needing to be disassembled to recover products and remove residue. Among the advantages of applying the principle to certain materials are the comparatively low working temperatures, reduced exposure to gases such as oxygen that might harm certain products, and the ease with which it can be performed on extremely small quantities. The same apparatus may also be used for conventional distillation of extremely small quantities due to the very small volume and surface area between evaporating and condensing regions, although this is generally only useful if the cold finger can be cold enough to solidify the condensate. Temperature gradient More sophisticated variants of sublimation apparatus include those that apply a temperature gradient so as to allow for controlled recrystallization.
https://en.wikipedia.org/wiki/ArVid
ArVid (Archiver on Video) is a data backup solution using a VHS tape as a storage medium. It was very popular in Russia and the rest of the former USSR in the mid-1990s. It was produced in Zelenograd, Russia by PO KSI. Features Using low-cost VHS tapes and recording units for data backup. High reliability. Hamming code error correction. Easy data copying between two VHS units (eliminating the need for a computer for data copying). Disadvantages Inefficient tape capacity usage (only 2 grades of the luminance signal spectrum were used). Poor software support. Operation A VHS recorder unit should be connected to an ArVid ISA board by a composite video cable. Unit operation is controlled by a remote control emulator using an LED. The device may operate in two modes: a low data rate of 200 KB/s and a high data rate of 325 KB/s (equivalent to roughly 1.33× and 2.17× CDR recording speed). The original, lower recording speed was retained as a user option because not all VHS recorders of the time offered sufficient recording quality to reliably support the higher speed. An E-180 video tape is able to hold 2 GB of uncompressed data at the lower rate, more than sufficient for most PC hard drives of the time. This can be shown by calculating 200 KB/s × 60 s/min × 60 min/h × 3 h = 2.06 GB (2.06 × 2^30 bytes), which also leaves a few minutes spare for header and synchronisation space. Note that it is unclear here whether "200 KB" means (200 × 10^3) or (200 × 2^10) bytes; the above calculation assumes the latter, but the former still produces a capacity of 2.01 GB (2.01 × 2^30 bytes), providing 2.00 GB of capacity in a little under 2 hours and 59 minutes. Similarly, this means that an E-240 4-hour tape, using the higher data rate, would be capable of storing between 4.35 and 4.46 GB (2^30 bytes), approximately equivalent to a standard single-layer recordable DVD. Models ArVid 1010, 100 kbyte/s, 4 kbyte RAM, was the first of the ArVid devices; its production started in 1992. ArVid 1020, 200 kbyte/s.
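The capacity arithmetic in the previous paragraph can be reproduced in a few lines (a throwaway sketch checking both readings of "200 KB/s" for a three-hour E-180 tape):

```python
seconds = 3 * 60 * 60  # an E-180 tape: three hours of recording time
for label, rate_bytes in (("200 x 2^10 B/s", 200 * 1024),
                          ("200 x 10^3 B/s", 200 * 1000)):
    capacity = rate_bytes * seconds
    print(label, round(capacity / 2 ** 30, 2), "GB (units of 2^30 bytes)")
# -> 2.06 and 2.01, the two figures quoted above
```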
https://en.wikipedia.org/wiki/Sarawak%20Biodiversity%20Centre
Sarawak Biodiversity Centre is a statutory body that was set up by the government of Sarawak in 1997 for the regulation of access to and collection of biological resources for research or commercial purposes. In 2004, the centre was relieved of its regulatory role and started to get involved in biotechnology-based research on the biological resources in the state. History Sarawak Biodiversity Centre (SBC) was established in 1997 following the enactment of the Sarawak Biodiversity Centre Ordinance by the Sarawak state government for the conservation, utilization, protection and sustainable development of biodiversity in the state. This was followed by the enactment of the Sarawak Biodiversity Regulations in 1998. In December 2003, the Sarawak State Legislative Assembly passed the Sarawak Biodiversity Centre (Amendment) Ordinance 2003. The state assembly also revised the Sarawak Biodiversity Regulations in 2004. Following these revisions, Sarawak Biodiversity Centre was relieved of its previous role and assumed a new role of research and development of the state's biological resources and documentation of indigenous knowledge of utilising biological resources. In 2017, SBC hosted BioBorneo and Bioeconomy Day. SBC has collaborated with Mitsubishi Corporation on cultivating indigenous algae since 2012. In 2019, SBC and Mitsubishi launched one of the largest algae cultivation facilities in Southeast Asia, expected to produce 60 tonnes of dried algae biomass per hectare per year. Programmes Traditional Knowledge Documentation Programme This programme exists to prevent the loss of traditional knowledge in indigenous communities, because such knowledge is passed to the next generations only through oral tradition. The programme is carried out through capacity-building workshops where local communities are trained in documentation techniques as well as the growing and management of useful indigenous plants. As of 9 November 2020, a total of 6,420 plants, comprising 1,713 species, had been documented.
https://en.wikipedia.org/wiki/Gene-for-gene%20relationship
The gene-for-gene relationship was discovered by Harold Henry Flor who was working with rust (Melampsora lini) of flax (Linum usitatissimum). Flor showed that the inheritance of both resistance in the host and parasite ability to cause disease is controlled by pairs of matching genes. One is a plant gene called the resistance (R) gene. The other is a parasite gene called the avirulence (Avr) gene. Plants producing a specific R gene product are resistant towards a pathogen that produces the corresponding Avr gene product. Gene-for-gene relationships are a widespread and very important aspect of plant disease resistance. Another example can be seen with Lactuca serriola versus Bremia lactucae. Clayton Oscar Person was the first scientist to study plant pathosystem ratios rather than genetics ratios in host-parasite systems. In doing so, he discovered the differential interaction that is common to all gene-for-gene relationships and that is now known as the Person differential interaction. Resistance genes Classes of resistance gene There are several different classes of R genes. The major classes are the NBS-LRR genes and the cell surface pattern recognition receptors (PRRs). The protein products of the NBS-LRR R genes contain a nucleotide binding site (NBS) and a leucine rich repeat (LRR). The protein products of the PRRs contain extracellular, juxtamembrane, transmembrane and intracellular non-RD kinase domains. Within the NBS-LRR class of R genes are two subclasses: One subclass has an amino-terminal Toll/Interleukin 1 receptor homology region (TIR). This includes the N resistance gene of tobacco against tobacco mosaic virus (TMV). The other subclass does not contain a TIR and instead has a leucine zipper region at its amino terminal. The protein products encoded by this class of resistance gene are located within the plant cell cytoplasm. The PRR class of R genes includes the rice XA21 resistance gene, which recognizes the ax21 peptide, and the Arabidopsis FLS2 gene, which recognizes bacterial flagellin.
https://en.wikipedia.org/wiki/Pacific%20Ocean%20Shelf%20Tracking%20Project
The Pacific Ocean Shelf Tracking Project (POST) is a field project of the Census of Marine Life that researches the behavior of marine animals through the use of ocean telemetry and data management systems. This system of telemetry consists of highly efficient lines of acoustic receivers that form sections across the continental shelf along the coast of the Pacific Northwest. The acoustic receivers pick up signals from the tagged animals as they pass along the lines, allowing for the documentation of movement patterns. The receivers also allow for the estimation of parameters such as swimming speed and mortality. The trackers sit on the seabed of the continental shelf and in the major rivers of the world. This method can be used to improve fisheries management. The program started in 2002 and was initially limited to the study of the movement and ocean survival of both hatchery-raised and wild salmon in the Pacific Northwest. After the successful pilot period, the program has now moved into the tracking of trout, sharks, rockfish, and lingcod. See also Ocean Tracking Network
https://en.wikipedia.org/wiki/Union%20label
A union label (sometimes called a union bug) is a label, mark or emblem which advertises that the employees who make a product or provide a service are represented by the labor union or group of unions whose label appears, in order to attract customers who prefer to buy union-made products. The term "union bug" is frequently used to describe a minuscule union label appearing on printed materials, which supposedly resembles a small insect. Origin and history The invention of the union label concept is attributed to the Carpenter's Eight-Hour League in San Francisco, California which adopted a stamp in 1869 for use on products produced by factories employing men on the eight- (as opposed to ten-) hour day. In 1874, that city's unionized cigar-making workers created a similar "white labor" label to differentiate their cigars from those made by poorly paid, non-unionized Chinese workers. The concept of the union label as a tool for harnessing support from fellow working-class consumers for unionization spread rapidly in the next decades, first among the cigarmakers (their union adopted the first national union label in 1880), but among other unions as well, including typographers, garment workers, coopers, bakers and iron molders. By 1909, the American Federation of Labor had created its Union Label Department. See also Printer's mark
https://en.wikipedia.org/wiki/AntiPatterns
AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis is a book about anti-patterns: specific repeated practices in software architecture, software design and software project management that initially appear to be beneficial, but ultimately result in bad consequences that outweigh hoped-for advantages. This study covers several recurring problematic software-related patterns, the forces that inspire their repeated adoption, and proven-in-practice remedial actions, called refactored solutions. The authors are William Brown, Raphael Malveau, Skip McCormick, and Tom Mowbray, with Scott Thomas joining in on the second and third books. Four of the five authors worked together at Mitre Corporation in the late 1990s. Sometimes referred to as an "Upstart Gang-Of-Four", the authors were frequently (and often unfavorably) compared to the authors of the original Design Patterns, the Gang of Four. This began with a favorable review and a 1998 runner-up Jolt Productivity Award given by Software Development magazine. The controversy around this book, and the concept of an anti-pattern, has been said to stem from a somewhat common misunderstanding that the authors were somehow opposed to design patterns. However, the authors explained within the book itself that they are big fans of design patterns; their objective was to build on the concept by providing constructive means for dealing with the frequent "patterns of failure" they had professionally dealt with. Reviews Reviewed in C/C++ Users Journal, July 1998, v16 n7 p63(2), by Marc Briand.
https://en.wikipedia.org/wiki/Foundations%20of%20Physics
Foundations of Physics is a monthly journal "devoted to the conceptual bases and fundamental theories of modern physics and cosmology, emphasizing the logical, methodological, and philosophical premises of modern physical theories and procedures". The journal publishes results and observations based on fundamental questions from all fields of physics, including quantum mechanics, quantum field theory, special relativity, general relativity, string theory, M-theory, cosmology, thermodynamics, statistical physics, and quantum gravity. Foundations of Physics has been published since 1970. Its founding editors were Henry Margenau and Wolfgang Yourgrau. The 1999 Nobel laureate Gerard 't Hooft was editor-in-chief from January 2007. At that stage, it absorbed the associated journal for shorter submissions, Foundations of Physics Letters, which had been edited by Alwyn Van der Merwe since its foundation in 1988. Past editorial board members (who include several Nobel laureates) include Louis de Broglie, Robert H. Dicke, Murray Gell-Mann, Abdus Salam, Ilya Prigogine and Nathan Rosen. Carlo Rovelli was announced as the new editor-in-chief in February 2016. Einstein–Cartan–Evans theory Between 2003 and 2005, Foundations of Physics Letters published a series of papers by Myron W. Evans claiming to make obsolete well-established results of quantum field theory and general relativity. In 2008, an editorial was written by the new editor-in-chief Gerard 't Hooft distancing the journal from the topic of Einstein–Cartan–Evans theory. Abstracting and indexing According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.276. The journal is abstracted and indexed in a number of bibliographic databases.
https://en.wikipedia.org/wiki/MIT%20General%20Circulation%20Model
The MIT General Circulation Model (MITgcm) is a numerical computer code that solves the equations of motion governing the ocean or Earth's atmosphere using the finite volume method. It was developed at the Massachusetts Institute of Technology and was one of the first non-hydrostatic models of the ocean. It has an automatically generated adjoint that allows the model to be used for data assimilation. The MITgcm is written in the programming language Fortran. See also Physical oceanography Global climate model
https://en.wikipedia.org/wiki/Vertical%20resistance
The term vertical resistance, used commonly in context of plant selection, was first used by J.E. Vanderplank to describe single-gene resistance. This contrasted the term horizontal resistance which was used to describe many-gene resistance. Raoul A. Robinson further refined the definition of vertical resistance, emphasizing that in vertical resistance there are single genes for resistance in the host plant, and there are also single genes for parasitic ability in the parasite. This phenomenon is known as the gene-for-gene relationship, and it is the defining character of vertical resistance.
https://en.wikipedia.org/wiki/Horizontal%20resistance
In genetics, the term horizontal resistance was first used by J. E. Vanderplank to describe many-gene resistance, which is sometimes also called generalized resistance. This contrasts with the term vertical resistance which was used to describe single-gene resistance. Raoul A. Robinson further refined the definition of horizontal resistance. Unlike vertical resistance and parasitic ability, horizontal resistance and horizontal parasitic ability are entirely independent of each other in genetic terms. In the first round of breeding for horizontal resistance, plants are exposed to pathogens and selected for partial resistance. Those with no resistance die, and plants unaffected by the pathogen have vertical resistance and are removed. The remaining plants have partial resistance and their seed is stored and bred back up to sufficient volume for further testing. The hope is that in these remaining plants are multiple types of partial-resistance genes, and by crossbreeding this pool back on itself, multiple partial resistance genes will come together and provide resistance to a larger variety of pathogens. Successive rounds of breeding for horizontal resistance proceed in a more traditional fashion, selecting plants for disease resistance as measured by yield. These plants are exposed to native regional pathogens, and given minimal assistance in fighting them.
https://en.wikipedia.org/wiki/Hill%E2%80%93Robertson%20effect
In population genetics, the Hill–Robertson effect, or Hill–Robertson interference, is a phenomenon first identified by Bill Hill and Alan Robertson in 1966. It provides an explanation as to why there may be an evolutionary advantage to genetic recombination. Explanation In a population of finite effective size which is subject to natural selection, varying extents of linkage disequilibria (LD) will occur. These can be caused by genetic drift or by mutation, and they will tend to slow down the process of evolution by natural selection. This is most easily seen by considering the case of disequilibria caused by mutation: Consider a population of individuals whose genome has only two genes, a and b. If an advantageous mutant (A) of gene a arises in a given individual, that individual's genes will, through natural selection, become more frequent in the population over time. However, if a separate advantageous mutant (B) of gene b arises before A has gone to fixation, and happens to arise in an individual who does not carry A, then individuals carrying B and individuals carrying A will be in competition. If recombination is present, then individuals carrying both A and B (of genotype AB) will eventually arise. Provided there are no negative epistatic effects of carrying both, individuals of genotype AB will have a greater selective advantage than aB or Ab individuals, and AB will hence go to fixation. However, if there is no recombination, AB individuals can only occur if the latter mutation (B) happens to occur in an Ab individual. The chance of this happening depends on the frequency of new mutations and on the size of the population, but is in general unlikely unless A is already fixed, or nearly fixed. Hence one should expect the time between the A mutation arising and the population becoming fixed for AB to be much longer in the absence of recombination; recombination thus allows evolution to progress faster. [Note: This effect is often erroneously equated with clonal interference, which concerns competition among beneficial mutations in asexual populations.]
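The verbal argument above can be made concrete with a toy haploid Wright–Fisher simulation (an illustrative sketch under simplifying assumptions: two loci, multiplicative selection, no new mutation). With recombination the AB haplotype can form and fix; with complete linkage and no recurrent mutation it can never form at all, so every linked run ends with one of the two beneficial alleles lost:

```python
import random

def run(N=200, s=0.05, r=0.0, seed=0):
    """Return generations until AB fixes, or None if A or B is lost.
    Start as in the text: A and B arose on different backgrounds."""
    rng = random.Random(seed)
    pop = [(1, 0)] * 20 + [(0, 1)] * 20 + [(0, 0)] * (N - 40)
    gen = 0
    while True:
        if all(h == (1, 1) for h in pop):
            return gen
        if not any(a for a, _ in pop) or not any(b for _, b in pop):
            return None                        # a beneficial allele was lost
        weights = [(1 + s) ** (a + b) for a, b in pop]
        parents = rng.choices(pop, weights=weights, k=2 * N)
        pop = [(p1[0], p2[1]) if rng.random() < r else p1   # recombinant?
               for p1, p2 in zip(parents[::2], parents[1::2])]
        gen += 1

for r in (0.0, 0.5):
    results = [run(r=r, seed=k) for k in range(10)]
    fixed = [g for g in results if g is not None]
    print(f"r={r}: AB fixed in {len(fixed)}/10 runs", fixed[:3])
```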
https://en.wikipedia.org/wiki/Dribbleware
Dribbleware, in the context of computer software, is a product for which patches are frequently released. The term usually has negative connotations, and can refer to software which has not been tested properly prior to release, or for which planned features could not be implemented in time. Dribbleware is not necessarily due to poor programming; it can be indicative of a product whose development was rushed to meet a release date.
https://en.wikipedia.org/wiki/Prompt%20gamma%20neutron%20activation%20analysis
Prompt-gamma neutron activation analysis (PGAA) is a very widely applicable technique for determining the presence and amount of many elements simultaneously in samples ranging in size from micrograms to many grams. It is a non-destructive method, and the chemical form and shape of the sample are relatively unimportant. Typical measurements take from a few minutes to several hours per sample. The technique can be described as follows. The sample is continuously irradiated with a beam of neutrons. The constituent elements of the sample absorb some of these neutrons and emit prompt gamma rays which are measured with a gamma ray spectrometer. The energies of these gamma rays identify the neutron-capturing elements, while the intensities of the peaks at these energies reveal their concentrations. The amount of analyte element is given by the ratio of count rate of the characteristic peak in the sample to the rate in a known mass of the appropriate elemental standard irradiated under the same conditions. Typically, the sample will not acquire significant long-lived radioactivity, and the sample may be removed from the facility and used for other purposes. One of the typical applications of PGAA is an online belt elemental analyzer or bulk material analyzer used in cement, coal and mineral industries.
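The comparator quantification described in the last sentences of the paragraph above is a one-line ratio; a trivial sketch (names and figures are illustrative):

```python
def analyte_mass(rate_sample, rate_standard, mass_standard_g):
    """Mass of the analyte element: the known standard mass scaled by the
    ratio of characteristic-peak count rates, assuming sample and standard
    were irradiated and counted under the same conditions."""
    return mass_standard_g * rate_sample / rate_standard

# e.g. a peak at 480 counts/s in the sample vs 1200 counts/s from a 10 mg standard
print(analyte_mass(480.0, 1200.0, 0.010), "g")  # 0.004 g
```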
https://en.wikipedia.org/wiki/CAFASP
CAFASP, or the Critical Assessment of Fully Automated Structure Prediction, is a large-scale blind experiment in protein structure prediction that studies the performance of automated structure prediction webservers in homology modeling, fold recognition, and ab initio prediction of protein tertiary structures based only on amino acid sequence. The experiment runs once every two years in parallel with CASP, which focuses on predictions that incorporate human intervention and expertise. Compared to related benchmarking techniques LiveBench and EVA, which run weekly against newly solved protein structures deposited in the Protein Data Bank, CAFASP generates much less data, but has the advantage of producing predictions that are directly comparable to those produced by human prediction experts. More recently, CAFASP has been run essentially as an integrated part of CASP rather than as a separate experiment.
https://en.wikipedia.org/wiki/Finite%20von%20Neumann%20algebra
In mathematics, a finite von Neumann algebra is a von Neumann algebra in which every isometry is a unitary. In other words, for an operator V in a finite von Neumann algebra, if V*V = I, then VV* = I. In terms of the comparison theory of projections, the identity operator is not (Murray–von Neumann) equivalent to any proper subprojection in the von Neumann algebra. Properties Let M denote a finite von Neumann algebra with center Z. One of the fundamental characterizing properties of finite von Neumann algebras is the existence of a center-valued trace. A von Neumann algebra M is finite if and only if there exists a normal positive bounded map τ : M → Z with the properties: τ(AB) = τ(BA); if A ≥ 0 and τ(A) = 0, then A = 0; τ(A) = A for A ∈ Z; and τ(CA) = Cτ(A) for C ∈ Z and A ∈ M. Examples Finite-dimensional von Neumann algebras The finite-dimensional von Neumann algebras can be characterized using Wedderburn's theory of semisimple algebras. Let Cn × n be the n × n matrices with complex entries. A von Neumann algebra M is a self-adjoint subalgebra of Cn × n such that M contains the identity operator I of Cn × n. Every such M as defined above is a semisimple algebra, i.e. it contains no nilpotent ideals. Suppose x ≠ 0 lies in a nilpotent ideal of M. Since x* ∈ M by assumption, x*x, a positive semidefinite matrix, lies in that nilpotent ideal. This implies (x*x)^k = 0 for some k. So x*x = 0, i.e. x = 0. The center of a von Neumann algebra M will be denoted by Z(M). Since M is self-adjoint, Z(M) is itself a (commutative) von Neumann algebra. A von Neumann algebra N is called a factor if Z(N) is one-dimensional, that is, Z(N) consists of multiples of the identity I. Theorem Every finite-dimensional von Neumann algebra M is a direct sum of m factors, where m is the dimension of Z(M). Proof: By Wedderburn's theory of semisimple algebras, Z(M) contains a finite orthogonal set of idempotents (projections) {Pi} such that PiPj = 0 for i ≠ j, Σ Pi = I, and where each Z(M)Pi is a commutative simple algebra. Every complex simple algebra is isomorphic to a full matrix algebra Ck × k for some k.
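Both defining features are easy to probe numerically in the finite-dimensional case M = Cn × n, where the center-valued trace is τ(A) = (tr A / n)·I. A short NumPy sketch (an illustrative check, not a library API) of the tracial property τ(AB) = τ(BA) and of the fact that an isometry of square matrices is automatically unitary:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))

tau = lambda X: (np.trace(X) / n) * np.eye(n)   # center-valued trace on C^(n x n)
print(np.allclose(tau(A @ B), tau(B @ A)))      # True: tau(AB) = tau(BA)

# Any isometry V (with V*V = I) in C^(n x n) is unitary (VV* = I):
V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
print(np.allclose(V.conj().T @ V, np.eye(n)),   # isometry
      np.allclose(V @ V.conj().T, np.eye(n)))   # hence unitary
```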
https://en.wikipedia.org/wiki/Of%20the%20form
In mathematics, the phrase "of the form" indicates that a mathematical object, or (more frequently) a collection of objects, follows a certain pattern of expression. It is frequently used to reduce the formality of mathematical proofs. Example of use Here is a proof which should be accessible to readers with a limited mathematical background: Statement: The product of any two even natural numbers is also even. Proof: Any even natural number is of the form 2n, where n is a natural number. Therefore, let us assume that we have two even numbers which we will denote by 2k and 2l. Their product is (2k)(2l) = 4(kl) = 2(2kl). Since 2kl is also a natural number, the product is even. Note: In this case, both exhaustivity and exclusivity were needed. That is, it was not only necessary that every even number is of the form 2n (exhaustivity), but also that every expression of the form 2n is an even number (exclusivity). This will not be the case in every proof, but normally, at least exhaustivity is implied by the phrase of the form.
https://en.wikipedia.org/wiki/Stropharia%20rugosoannulata
Stropharia rugosoannulata, commonly known as the wine cap stropharia, "garden giant", burgundy mushroom, king stropharia, or wine-red stropharia (Japanese: saketsubatake), is an agaric of the family Strophariaceae native to Europe and North America. Unlike many other members of the genus Stropharia, it is regarded as a choice edible and is commercially cultivated. Description The king stropharia can grow very large, with a reddish-brown convex to flattening cap, the size leading to another colloquial name, godzilla mushroom. The gills are initially pale, then grey, and finally dark purple-brown in colour. The firm flesh is white, as is the tall stem, which bears a wrinkled ring. This is the origin of the specific epithet, which means "wrinkled-ringed". Distribution and habitat The species is found on wood chips across North America in summer and autumn. It is also found in Europe, and has been introduced to Australia and New Zealand. The mushroom was reported in April 2018 in Colombia, in the city of Bogota. Ecology In Paul Stamets' book Mycelium Running, a study done by Christiane Pischl showed that the king stropharia makes an excellent garden companion to corn. The fungus also has a European history of being grown with corn. A 2006 study, published in the journal Applied and Environmental Microbiology, found the king stropharia to have the ability to attack the nematode Panagrellus redivivus; the fungus produces unique spiny cells called acanthocytes which are able to immobilise and digest the nematodes. Uses Described as very tasty by some authors, the fungus is easily cultivated on a medium similar to that on which it grows naturally. Antonio Carluccio recommends sautéing them in butter or grilling them.
https://en.wikipedia.org/wiki/Ringwoodite
Ringwoodite is a high-pressure phase of Mg2SiO4 (magnesium silicate) formed at the high temperatures and pressures of the Earth's mantle between about 520 and 660 km depth. It may also contain iron and hydrogen. It is polymorphous with the olivine phase forsterite (a magnesium silicate). Ringwoodite is notable for being able to contain hydroxide ions (oxygen and hydrogen atoms bound together) within its structure. In this case two hydroxide ions usually take the place of a magnesium ion and two oxide ions. Combined with evidence of its occurrence deep in the Earth's mantle, this suggests that there is from one to three times the world ocean's equivalent of water in the mantle transition zone from 410 to 660 km deep. This mineral was first identified in the Tenham meteorite in 1969, and is inferred to be present in large quantities in the Earth's mantle. Olivine, wadsleyite, and ringwoodite are polymorphs found in the upper mantle of the earth. At depths greater than about 660 km, other minerals, including some with the perovskite structure, are stable. The properties of these minerals determine many of the properties of the mantle. Ringwoodite was named after the Australian earth scientist Ted Ringwood (1930–1993), who studied polymorphic phase transitions in the common mantle minerals olivine and pyroxene at pressures equivalent to depths as great as about 600 km. Characteristics Ringwoodite is polymorphous with forsterite, Mg2SiO4, and has a spinel structure. Spinel group minerals crystallize in the isometric system with an octahedral habit. Olivine is most abundant in the upper mantle, above about 410 km; the olivine polymorphs wadsleyite and ringwoodite are thought to dominate the transition zone of the mantle, a zone present from about 410 to 660 km depth. Ringwoodite is thought to be the most abundant mineral phase in the lower part of Earth's transition zone. The physical and chemical properties of this mineral partly determine the properties of the mantle at those depths. The pressure range in which ringwoodite is stable corresponds to this lower part of the transition zone.
https://en.wikipedia.org/wiki/Triple%20modular%20redundancy
In computing, triple modular redundancy, sometimes called triple-mode redundancy (TMR), is a fault-tolerant form of N-modular redundancy in which three systems perform a process and the result is processed by a majority-voting system to produce a single output. If any one of the three systems fails, the other two systems can correct and mask the fault. The TMR concept can be applied to many forms of redundancy, such as software redundancy in the form of N-version programming, and is commonly found in fault-tolerant computer systems. Space satellite systems often use TMR, although satellite RAM usually uses Hamming error correction. Some ECC memory uses triple modular redundancy hardware (rather than the more common Hamming code), because triple modular redundancy hardware is faster than Hamming error correction hardware. Some communication systems use N-modular redundancy, called a repetition code, as a simple form of forward error correction. For example, 5-modular redundancy communication systems (such as FlexRay) use the majority of 5 samples – if any 2 of the 5 results are erroneous, the other 3 results can correct and mask the fault. Modular redundancy is a basic concept, dating to antiquity, while the first use of TMR in a computer was the Czechoslovak computer SAPO, in the 1950s. General case The general case of TMR is called N-modular redundancy, in which any positive number of replications of the same action is used. The number is typically taken to be at least three, so that error correction by majority vote can take place; it is also usually taken to be odd, so that no ties may happen. Majority logic gate In TMR, three identical logic circuits (logic gates) are used to compute the same specified Boolean function. If there are no circuit failures, the outputs of the three circuits are identical. But due to circuit failures, the outputs of the three circuits may be different. A majority logic gate is used to decide which value to take as the output: it returns the value on which at least two of the three circuits agree.
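The majority logic gate at the heart of TMR is one line of Boolean algebra. A minimal sketch in Python (operating bitwise on integer-encoded outputs; names are illustrative):

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority vote: each output bit is 1 iff at least two of
    the three corresponding input bits are 1."""
    return (a & b) | (b & c) | (a & c)

# Three replicated modules; one has failed and flipped two output bits.
good, faulty = 0b1011, 0b1110
print(bin(majority(good, good, faulty)))  # 0b1011: the fault is masked
```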