https://en.wikipedia.org/wiki/Concurrency%20semantics
In computer science, concurrency semantics gives meaning to concurrent systems in a mathematically rigorous way. Concurrency semantics is often based on mathematical theories of concurrency such as various process calculi, the actor model, or Petri nets. A more detailed account of concurrency semantics is given at Concurrency (computer science).
https://en.wikipedia.org/wiki/Sediment%E2%80%93water%20interface
In oceanography and limnology, the sediment–water interface is the boundary between bed sediment and the overlying water column. The term usually refers to a thin layer (approximately 1 cm deep, though variable) of water at the very surface of sediments on the seafloor. In the ocean, estuaries, and lakes, this layer interacts with the water above it through physical flow and chemical reactions mediated by the micro-organisms, animals, and plants living at the bottom of the water body. The topography of this interface is often dynamic, as it is affected by physical processes (e.g. currents causing rippling or resuspension) and biological processes (e.g. bioturbation generating mounds or trenches). Physical, biological, and chemical processes occur at the sediment-water interface as a result of a number of gradients such as chemical potential gradients, pore water gradients, and oxygen gradients. Definition The location of the top of the sediment-water interface in the water column is defined as the break in the vertical gradient of some dissolved component, such as oxygen, where the concentration transitions from a higher concentration in the well-mixed water above to a lower concentration at the sediment surface. This break can span anywhere from less than 1 mm to several mm of the water column. Physical processes Physical movement of water and sediments alters the thickness and topography of the sediment-water interface. Sediment resuspension by waves, tides, or other disturbing forces (e.g. human feet at a beach) allows sediment pore water and other dissolved components to diffuse out of the sediments and mix with the water above. For resuspension to occur, the movement of water has to be powerful enough that the bed shear stress it exerts exceeds the critical shear stress of the sediment. For example, a very consolidated bed, with a high critical shear stress, would only be resuspended by strong flows, while a "fluff layer" of very loose particles, with a low critical shear stress, could be resuspended by weak flows.
https://en.wikipedia.org/wiki/Nucleoprotein
Nucleoproteins are proteins conjugated with nucleic acids (either DNA or RNA). Typical nucleoproteins include ribosomes, nucleosomes and viral nucleocapsid proteins. Structures Nucleoproteins tend to be positively charged, facilitating interaction with the negatively charged nucleic acid chains. The tertiary structures and biological functions of many nucleoproteins are understood. Important techniques for determining the structures of nucleoproteins include X-ray diffraction, nuclear magnetic resonance and cryo-electron microscopy. Viruses Virus genomes (either DNA or RNA) are extremely tightly packed into the viral capsid. Many viruses are therefore little more than an organised collection of nucleoproteins with their binding sites pointing inwards. Structurally characterised viral nucleoproteins include influenza, rabies, Ebola, Bunyamwera, Schmallenberg, Hazara, Crimean-Congo hemorrhagic fever, and Lassa. Deoxyribonucleoproteins A deoxyribonucleoprotein (DNP) is a complex of DNA and protein. The prototypical examples are nucleosomes, complexes in which genomic DNA is wrapped around clusters of eight histone proteins in eukaryotic cell nuclei to form chromatin. Protamines replace histones during spermatogenesis. Functions The most widespread deoxyribonucleoproteins are nucleosomes, in which the component is nuclear DNA. The proteins combined with DNA are histones and protamines; the resulting nucleoproteins are located in chromosomes. Thus, the entire chromosome, i.e. chromatin in eukaryotes consists of such nucleoproteins. In eukaryotic cells, DNA is associated with about an equal mass of histone proteins in a highly condensed nucleoprotein complex called chromatin. Deoxyribonucleoproteins in this kind of complex interact to generate a multiprotein regulatory complex in which the intervening DNA is looped or wound. The deoxyribonucleoproteins participate in regulating DNA replication and transcription. Deoxyribonucleoproteins are also involved in hom
https://en.wikipedia.org/wiki/Winning%20percentage
In sports, a winning percentage is the fraction of games or matches a team or individual has won. The statistic is commonly used in standings or rankings to compare teams or individuals. It is defined as wins divided by the total number of matches played (i.e. wins plus draws plus losses). A draw counts as a ½ win. Discussion For example, if a team's season record is 30 wins and 20 losses, the winning percentage would be 60% or 0.600 (30 wins out of 50 games). If a team's season record is 30–15–5 (i.e. it has won thirty games, lost fifteen and tied five times), and if the five tie games are counted as 2½ wins, then the team has an adjusted record of 32½ wins, resulting in a 65% or .650 winning percentage for the fifty total games (32.5 ÷ 50 = 0.650). In North America, winning percentages are expressed as decimal values to three decimal places. It is the same value, but without the last step of multiplying by 100% in the formula above. Furthermore, they are usually read aloud as if they were whole numbers (e.g. 1.000, "a thousand" or 0.500, "five hundred"). In this case, the name "winning percentage" is actually a misnomer, since it is not expressed as a percentage. A winning percentage such as .536 ("five thirty-six") expressed as a percentage would be 53.6%. Winning percentage is one way to compare the record of two teams; however, another standard method most frequently used in baseball and professional basketball standings is games behind. In baseball, a pitcher is assessed wins and losses as an individual statistic and thus has his own winning percentage, based on his win–loss record. However, in association football, a manager's abilities may be measured by win percentage. In this case, the formula is wins divided by total number of matches; draws are not considered as "half-wins", and the quotient is always in percentage form. In the National Football League, division winners and playoff qualifiers are technically determined by winning percentage and not by number of wins. Ties are currently cou
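The arithmetic in the examples above can be checked with a few lines of Python; this is an illustrative sketch (the function name is ours, not from the article), with draws counted as half-wins.

    # Winning percentage with draws counted as half-wins (illustrative sketch).
    def winning_percentage(wins: int, losses: int, draws: int = 0) -> float:
        games = wins + losses + draws
        return (wins + 0.5 * draws) / games

    print(f"{winning_percentage(30, 20):.3f}")     # 0.600 for a 30-20 record
    print(f"{winning_percentage(30, 15, 5):.3f}")  # 0.650 for a 30-15-5 record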
https://en.wikipedia.org/wiki/Universal%20Transverse%20Mercator%20coordinate%20system
The Universal Transverse Mercator (UTM) is a map projection system for assigning coordinates to locations on the surface of the Earth. Like the traditional method of latitude and longitude, it is a horizontal position representation, which means it ignores altitude and treats the earth surface as a perfect ellipsoid. However, it differs from global latitude/longitude in that it divides earth into 60 zones and projects each to the plane as a basis for its coordinates. Specifying a location means specifying the zone and the x, y coordinate in that plane. The projection from spheroid to a UTM zone is some parameterization of the transverse Mercator projection. The parameters vary by nation or region or mapping system. Most zones in UTM span 6 degrees of longitude, and each has a designated central meridian. The scale factor at the central meridian is specified to be 0.9996 of true scale for most UTM systems in use. History The National Oceanic and Atmospheric Administration (NOAA) website states that the system was developed by the United States Army Corps of Engineers, starting in the early 1940s. However, a series of aerial photos found in the Bundesarchiv-Militärarchiv (the military section of the German Federal Archives) apparently dating from 1943–1944 bear the inscription UTMREF followed by grid letters and digits, and projected according to the transverse Mercator, a finding that would indicate that something called the UTM Reference system was developed in the 1942–43 time frame by the Wehrmacht. It was probably carried out by the Abteilung für Luftbildwesen (Department for Aerial Photography). From 1947 onward the US Army employed a very similar system, but with the now-standard 0.9996 scale factor at the central meridian as opposed to the German 1.0. For areas within the contiguous United States the Clarke Ellipsoid of 1866 was used. For the remaining areas of Earth, including Hawaii, the International Ellipsoid was used. The World Geodetic System WGS84 ell
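As a sketch of the zone arithmetic described above (the function names are ours, and the special grid exceptions around Norway and Svalbard are ignored), the zone number and its central meridian can be computed from a longitude as follows.

    import math

    # UTM zone number (1-60) for a longitude in degrees; 6-degree zones are
    # counted eastward from 180 degrees W. Grid exceptions are ignored here.
    def utm_zone(longitude_deg: float) -> int:
        return int(math.floor((longitude_deg + 180.0) / 6.0)) % 60 + 1

    # Central meridian of a zone, in degrees east of Greenwich.
    def central_meridian_deg(zone: int) -> float:
        return zone * 6.0 - 183.0

    z = utm_zone(-74.0)                 # longitude of New York City
    print(z, central_meridian_deg(z))   # 18 -75.0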
https://en.wikipedia.org/wiki/List%20of%20symbols%20of%20Scientology
This is a list of symbols of Scientology, the Church of Scientology, and related organizations. List Trademarks All official symbols of Scientology are trademarks held by the Religious Technology Center (RTC). They are said by the center to be used "on Scientology religious materials to signify their authenticity ... and provide a legal mechanism to ensure the spiritual technologies are orthodox and ministered according to Mr. Hubbard's Scripture. These marks also provide the means to prevent anyone from engaging in some distorted use of Mr. Hubbard's writings, thereby ensuring the purity of the religion for all eternity."
https://en.wikipedia.org/wiki/Cutlet
Cutlet (derived from French côtelette, côte, "rib") refers to: a thin slice of meat from the leg or ribs of mutton, veal, pork, or chicken a dish made of such a slice, often breaded (also known in various languages as a cotoletta, Kotelett, kotlet or kotleta) a croquette or cutlet-shaped patty made of ground meat a kind of fish cut where the fish is sliced perpendicular to the spine, rather than parallel (as with fillets); often synonymous with steak a prawn or shrimp with its head and outer shell removed, leaving only the flesh and tail a mash of vegetables (usually potatoes) fried with bread History Cutlets were a typical starter in French cuisine, as a variation of croquettes with the shape of a small rib (côtelette in French). The bone was simulated by a piece of fried bread or pasta. The recipe became popular throughout Europe due to the influence of French cuisine. American and Canadian cuisines From the late 1700s until about 1900, virtually all recipes for "cutlets" in English-language cookbooks referenced veal cutlets. Then pork cutlets began to appear. More recently, in American and Canadian cuisine, cutlets have also been made using chicken, although this was also imported from Europe. The cutlet is usually coated with flour, egg and bread crumbs, then fried in a pan with some oil. Austrian cuisine Australian cuisine Australians eat lamb cutlets battered with egg yolk and breadcrumbs. Chicken cutlets are also very popular, but known as chicken schnitzel. Both lamb cutlets and chicken schnitzel are a staple of Australian children's cuisine. Amongst most Australians of Italian descent, the term schnitzel is replaced by the term cutlet. Cutlets amongst this population are usually veal or chicken. British cuisine In British cuisine a cutlet is usually unbreaded and can also be called a chop. If referring to beef, more than one piece together would be generally called a rib of beef or a rib joint, whilst lamb ribs are called a rack, or rack of lamb. L
https://en.wikipedia.org/wiki/Information%20technology%20consulting
In management, information technology consulting (also called IT consulting, computer consultancy, business and technology services, computing consultancy, technology consulting, and IT advisory) is a field of activity which focuses on advising organizations on how best to use information technology (IT) in achieving their business objectives, but it can also refer more generally to IT outsourcing. Once a business owner defines the needs to take a business to the next level, a decision maker will define a scope, cost and a time frame of the project. The role of the IT consultancy company is to support and nurture the company from the very beginning of the project until the end, and deliver the project not only in the scope, time and cost but also with complete customer satisfaction. See also List of major IT consulting firms Consultant Outsourcing
https://en.wikipedia.org/wiki/Jury%20selection
Jury selection is the selection of the people who will serve on a jury during a jury trial. The group of potential jurors (the "jury pool,” also known as the venire) is first selected from among the community using a reasonably random method. Jury lists are compiled from voter registrations and driver license or ID renewals. From those lists, summonses are mailed. A panel of jurors is then assigned to a courtroom. The prospective jurors are randomly selected to sit in the jury box. At this stage, they will be questioned in court by the judge and/or attorneys in the United States. Depending on the jurisdiction, attorneys may have an opportunity to mount a challenge for cause argument or use one of a limited number of peremptory challenges. In some jurisdictions that have capital punishment, the jury must be death-qualified to remove those who are opposed to the death penalty. Jury selection and techniques for voir dire are taught to law students in trial advocacy courses. However, attorneys sometimes use expert assistance in systematically choosing the jury via a process of scientific jury selection, although other uses of jury research are becoming more common. The jury selected is said to have been "empaneled". Voir dire Selected jurors are generally subjected to a system of examination whereby both the prosecution (or plaintiff, in a civil case) and defence can object to a juror. In common law countries, this is known as voir dire. Voir dire can include both general questions asked of an entire pool of prospective jurors, answered by means such as a show of hands, and questions asked of individual prospective jurors and calling for a verbal answer. In some jurisdictions, the attorneys for the parties may question the potential jurors; in other jurisdictions, the trial judge conducts the voir dire. The method and scope of the possible rejections varies between countries: In England, these objections would have to be very well based, such as the defendant kn
https://en.wikipedia.org/wiki/Graded%20poset
In mathematics, in the branch of combinatorics, a graded poset is a partially-ordered set (poset) P equipped with a rank function ρ from P to the set N of all natural numbers. ρ must satisfy the following two properties: The rank function is compatible with the ordering, meaning that for all x and y in the order, if x < y then ρ(x) < ρ(y), and The rank is consistent with the covering relation of the ordering, meaning that for all x and y, if y covers x then ρ(y) = ρ(x) + 1. The value of the rank function for an element of the poset is called its rank. Sometimes a graded poset is called a ranked poset but that phrase has other meanings; see Ranked poset. A rank or rank level of a graded poset is the subset of all the elements of the poset that have a given rank value. Graded posets play an important role in combinatorics and can be visualized by means of a Hasse diagram. Examples Some examples of graded posets (with the rank function in parentheses) are: the natural numbers N with their usual order (rank: the number itself), or some interval [0, N] of this poset, Nn, with the product order (sum of the components), or a subposet of it that is a product of intervals, the positive integers, ordered by divisibility (number of prime factors, counted with multiplicity), or a subposet of it formed by the divisors of a fixed N, the Boolean lattice of finite subsets of a set (number of elements of the subset), the lattice of partitions of a set into finitely many parts, ordered by reverse refinement (number of parts), the lattice of partitions of a finite set X, ordered by refinement (number of elements of X minus number of parts), a group and a generating set, or equivalently its Cayley graph, ordered by the weak or strong Bruhat order, and ranked by word length (length of shortest reduced word). In particular for Coxeter groups, for example permutations of a totally ordered n-element set, with either the weak or strong Bruhat order (number of adjacent inver
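For the Boolean lattice example above, the rank function is simply the number of elements of a subset; the short check below (an illustrative sketch, not from the article) verifies both defining properties of a rank function on all subsets of a three-element set.

    from itertools import combinations

    # Boolean lattice of subsets of {0, 1, 2}, ordered by inclusion,
    # with rank = number of elements; check the two graded-poset axioms.
    ground = {0, 1, 2}
    elements = [frozenset(c) for r in range(len(ground) + 1)
                for c in combinations(ground, r)]
    rank = len  # rank of a subset = its cardinality

    def covers(x, y):
        # y covers x: x < y with nothing strictly between them.
        return x < y and not any(x < z < y for z in elements)

    # Compatibility with the order, and consistency with the covering relation.
    assert all(rank(x) < rank(y) for x in elements for y in elements if x < y)
    assert all(rank(y) == rank(x) + 1
               for x in elements for y in elements if covers(x, y))
    print("both rank-function axioms hold on the Boolean lattice of a 3-element set")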
https://en.wikipedia.org/wiki/Table%20of%20Clebsch%E2%80%93Gordan%20coefficients
This is a table of Clebsch–Gordan coefficients used for adding angular momentum values in quantum mechanics. The overall sign of the coefficients for each set of constant j1, j2, J is arbitrary to some degree and has been fixed according to the Condon–Shortley and Wigner sign convention as discussed by Baird and Biedenharn. Tables with the same sign convention may be found in the Particle Data Group's Review of Particle Properties and in online tables. Formulation The Clebsch–Gordan coefficients are the solutions to the expansion |J M⟩ = Σ_{m1+m2=M} ⟨j1 m1 j2 m2 | J M⟩ |j1 m1⟩ |j2 m2⟩. Explicitly: ⟨j1 m1 j2 m2 | J M⟩ = δ_{M,m1+m2} √[(2J+1)(J+j1−j2)!(J−j1+j2)!(j1+j2−J)!/(j1+j2+J+1)!] × √[(J+M)!(J−M)!(j1−m1)!(j1+m1)!(j2−m2)!(j2+m2)!] × Σ_k (−1)^k/[k!(j1+j2−J−k)!(j1−m1−k)!(j2+m2−k)!(J−j2+m1+k)!(J−j1−m2+k)!]. The summation is extended over all integer k for which the argument of every factorial is nonnegative. For brevity, solutions with M < 0 and j1 < j2 are omitted. They may be calculated using the simple relations ⟨j1 m1 j2 m2 | J M⟩ = (−1)^{j1+j2−J} ⟨j1 (−m1) j2 (−m2) | J (−M)⟩ and ⟨j1 m1 j2 m2 | J M⟩ = (−1)^{j1+j2−J} ⟨j2 m2 j1 m1 | J M⟩. Specific values The Clebsch–Gordan coefficients for j values less than or equal to 5/2 are given below. When j2 = 0, the Clebsch–Gordan coefficients are given by δ_{J,j1} δ_{M,m1}. SU(N) Clebsch–Gordan coefficients Algorithms to produce Clebsch–Gordan coefficients for higher values of j1 and j2, or for the su(N) algebra instead of su(2), are known. A web interface for tabulating SU(N) Clebsch–Gordan coefficients is readily available.
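Individual entries of such a table can be reproduced numerically; the sketch below uses SymPy's Clebsch–Gordan support (an external library, not cited by the article) and assumes its Condon–Shortley sign convention, here for two spin-1/2 particles.

    from sympy import Rational
    from sympy.physics.quantum.cg import CG

    half = Rational(1, 2)

    # <j1 m1; j2 m2 | J M> for j1 = j2 = 1/2, m1 = +1/2, m2 = -1/2.
    triplet = CG(half, half, half, -half, 1, 0).doit()  # coupling to |1, 0>
    singlet = CG(half, half, half, -half, 0, 0).doit()  # coupling to |0, 0>
    print(triplet, singlet)  # both expected to be sqrt(2)/2, i.e. 1/sqrt(2)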
https://en.wikipedia.org/wiki/Production%20of%20antibiotics
Production of antibiotics is a naturally occurring event, that thanks to advances in science can now be replicated and improved upon in laboratory settings. Due to the discovery of penicillin by Alexander Fleming, and the efforts of Florey and Chain in 1938, large-scale, pharmaceutical production of antibiotics has been made possible. As with the initial discovery of penicillin, most antibiotics have been discovered as a result of happenstance. Antibiotic production can be grouped into three methods: natural fermentation, semi-synthetic, and synthetic. As more and more bacteria continue to develop resistance to currently produced antibiotics, research and development of new antibiotics continues to be important. In addition to research and development into the production of new antibiotics, repackaging delivery systems is important to improving efficacy of the antibiotics that are currently produced. Improvements to this field have seen the ability to add antibiotics directly into implanted devices, aerosolization of antibiotics for direct delivery, and combination of antibiotics with non antibiotics to improve outcomes. The increase of antibiotic resistant strains of pathogenic bacteria has led to an increased urgency for the funding of research and development of antibiotics and a desire for production of new and better acting antibiotics. Identifying useful antibiotics Despite the wide variety of known antibiotics, less than 1% of antimicrobial agents have medical or commercial value. For example, whereas penicillin has a high therapeutic index as it does not generally affect human cells, this is not so for many antibiotics. Other antibiotics simply lack advantage over those already in use, or have no other practical applications. Useful antibiotics are often discovered using a screening process. To conduct such a screen, isolates of many different microorganisms are cultured and then tested for production of diffusible products that inhibit the growth of t
https://en.wikipedia.org/wiki/DisplayPort
DisplayPort (DP) is a digital display interface developed by a consortium of PC and chip manufacturers and standardized by the Video Electronics Standards Association (VESA). It is primarily used to connect a video source to a display device such as a computer monitor. It can also carry audio, USB, and other forms of data. DisplayPort was designed to replace VGA, FPD-Link, and Digital Visual Interface (DVI). It is backward compatible with other interfaces, such as HDMI and DVI, through the use of either active or passive adapters. It is the first display interface to rely on packetized data transmission, a form of digital communication found in technologies such as Ethernet, USB, and PCI Express. It permits the use of internal and external display connections. Unlike legacy standards that transmit a clock signal with each output, its protocol is based on small data packets known as micro packets, which can embed the clock signal in the data stream, allowing higher resolution using fewer pins. The use of data packets also makes it extensible, meaning more features can be added over time without significant changes to the physical interface. DisplayPort can be used to transmit audio and video simultaneously, although each can be transmitted without the other. The video signal path can range from six to sixteen bits per color channel, and the audio path can have up to eight channels of 24-bit, 192kHz uncompressed PCM audio. A bidirectional, half-duplex auxiliary channel carries device management and device control data for the Main Link, such as VESA EDID, MCCS, and DPMS standards. The interface is also capable of carrying bidirectional USB signals. The interface uses a differential signal that is not compatible with DVI or HDMI. However, dual-mode DisplayPort ports are designed to transmit a single-link DVI or HDMI protocol (TMDS) across the interface through the use of an external passive adapter, enabling compatibility mode and converting the signal from 3.3 to
https://en.wikipedia.org/wiki/Universal%20polar%20stereographic%20coordinate%20system
The universal polar stereographic (UPS) coordinate system is used in conjunction with the universal transverse Mercator (UTM) coordinate system to locate positions on the surface of the Earth. Like the UTM coordinate system, the UPS coordinate system uses a metric-based cartesian grid laid out on a conformally projected surface. UPS covers the Earth's polar regions, specifically the areas north of 84°N and south of 80°S, which are not covered by the UTM grids, plus an additional 30 minutes of latitude extending into UTM grid to provide some overlap between the two systems. In the polar regions, directions can become complicated, with all geographic north–south lines converging at the poles. The difference between UPS grid north and true north can therefore be anything up to 180°—in some places, grid north is true south, and vice versa. UPS grid north is arbitrarily defined as being along the prime meridian in the Antarctic and the 180th meridian in the Arctic; thus, east and west on the grids when moving directly away from the pole are along the 90°E and 90°W meridians respectively. Projection system As the name indicates, the UPS system uses a stereographic projection. Specifically, the projection used in the system is a secant version based on an elliptical model of the earth. The scale factor at each pole is adjusted to 0.994 so that the latitude of true scale is 81.11451786859362545° (about 81° 06' 52.3") North and South. The scale factor inside the regions at latitudes higher than this parallel is too small, whereas the regions at latitudes below this line have scale factors that are too large, reaching 1.0016 at 80° latitude. The scale factor at the origin (the poles) is adjusted to minimize the overall distortion of scale within the mapped region. As with the Mercator projection, the region near the tangent (or secant) point on a Stereographic map remains very close to true scale for an angular distance of a few degrees. In the ellipsoidal model, a stereog
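The scale-factor behaviour described above can be illustrated with the spherical approximation of a north polar stereographic projection, k(lat) = 2·k0/(1 + sin lat) with k0 = 0.994 at the pole; this is a sketch using the spherical formula, not the ellipsoidal model actually used by UPS.

    import math

    K0 = 0.994  # scale factor at the pole

    # Spherical-approximation scale factor of a north polar stereographic map.
    def ups_scale_factor(lat_deg: float) -> float:
        return 2.0 * K0 / (1.0 + math.sin(math.radians(lat_deg)))

    for lat in (90.0, 85.0, 81.11, 80.0):
        print(f"{lat:6.2f} deg: k = {ups_scale_factor(lat):.4f}")
    # approximately 0.9940 at the pole, 1.0000 near 81.1 deg, 1.0016 at 80 deg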
https://en.wikipedia.org/wiki/Perfect%20ruler
A perfect ruler of length ℓ is a ruler with integer markings a1 = 0 < a2 < … < an = ℓ, for which there exists an integer m such that any positive integer k ≤ m is uniquely expressed as the difference k = ai − aj for some i, j. This is referred to as an m-perfect ruler. An optimal perfect ruler is one of the smallest length for fixed values of m and n. Example A 4-perfect ruler of length 7 is given by (a1, a2, a3, a4) = (0, 1, 3, 7). To verify this, we need to show that every positive integer k ≤ 4 is uniquely expressed as the difference of two markings: 1 = 1 − 0, 2 = 3 − 1, 3 = 3 − 0, 4 = 7 − 3. See also Golomb ruler Sparse ruler All-interval tetrachord Combinatorics
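A small checker (illustrative, not part of the article) makes the verification explicit: it confirms that every positive integer up to m arises as a difference of two markings in exactly one way.

    from itertools import combinations
    from collections import Counter

    # An m-perfect ruler: every integer 1..m is a difference of two markings
    # in exactly one way.
    def is_m_perfect(marks, m):
        diffs = Counter(b - a for a, b in combinations(sorted(marks), 2))
        return all(diffs[k] == 1 for k in range(1, m + 1))

    print(is_m_perfect([0, 1, 3, 7], 4))  # True: 1=1-0, 2=3-1, 3=3-0, 4=7-3
    print(is_m_perfect([0, 1, 3, 7], 5))  # False: 5 is not a difference of markings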
https://en.wikipedia.org/wiki/Regime%20shift
Regime shifts are large, abrupt, persistent changes in the structure and function of ecosystems, the climate, financial systems or other complex systems. A regime is a characteristic behaviour of a system which is maintained by mutually reinforced processes or feedbacks. Regimes are considered persistent relative to the time period over which the shift occurs. The change of regimes, or the shift, usually occurs when a smooth change in an internal process (feedback) or a single disturbance (external shocks) triggers a completely different system behavior. Although such non-linear changes have been widely studied in different disciplines ranging from atoms to climate dynamics, regime shifts have gained importance in ecology because they can substantially affect the flow of ecosystem services that societies rely upon, such as provision of food, clean water or climate regulation. Moreover, regime shift occurrence is expected to increase as human influence on the planet increases – the Anthropocene – including current trends on human induced climate change and biodiversity loss. When regime shifts are associated with a critical or bifurcation point, they may also be referred to as critical transitions. History of the concept Scholars have been interested in systems exhibiting non-linear change for a long time. Since the early twentieth century, mathematicians have developed a body of concepts and theory for the study of such phenomena based on the study of non-linear system dynamics. This research led to the development of concepts such as catastrophe theory; a branch of bifurcation theory in dynamical systems. In ecology the idea of systems with multiple regimes, domains of attraction called alternative stable states, only arose in the late '60s based upon the first reflections on the meaning of stability in ecosystems by Richard Lewontin and Crawford "Buzz" Holling. The first work on regime shifts in ecosystems was done in a diversity of ecosystems and included impor
https://en.wikipedia.org/wiki/Structural%20stability
In mathematics, structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact C1-small perturbations). Examples of such qualitative properties are numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself. Variants of this notion apply to systems of ordinary differential equations, vector fields on smooth manifolds and flows generated by them, and diffeomorphisms. Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems. They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion. In this case, structurally stable systems are typical, they form an open dense set in the space of all systems endowed with appropriate topology. In higher dimensions, this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and the early 1960s, Maurício Peixoto and Marília Chaves Peixoto, motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem, the first global characterization of structural stability. Definition Let G be an open domain in Rn with compact closure and smooth (n−1)-dimensional boundary. Consider the space X1(G) consisting of restrictions to G of C1 vector fields on Rn that are transversal to the boundary of G and are inward oriented. This space is endowed with the C1 metric in the usual fashion. A vector field F ∈ X1(G) is weakly structurally stable if for any sufficiently small perturbation F1, the corresponding flows are topologically
https://en.wikipedia.org/wiki/Generation%20of%20primes
In computational number theory, a variety of algorithms make it possible to generate prime numbers efficiently. These are used in various applications, for example hashing, public-key cryptography, and search of prime factors in large numbers. For relatively small numbers, it is possible to just apply trial division to each successive odd number. Prime sieves are almost always faster. Prime sieving is the fastest known way to deterministically enumerate the primes. There are some known formulas that can calculate the next prime but there is no known way to express the next prime in terms of the previous primes. Also, there is no effective known general manipulation and/or extension of some mathematical expression (even such including later primes) that deterministically calculates the next prime. Prime sieves A prime sieve or prime number sieve is a fast type of algorithm for finding primes. There are many prime sieves. The simple sieve of Eratosthenes (250s BCE), the sieve of Sundaram (1934), the still faster but more complicated sieve of Atkin (2003), and various wheel sieves are most common. A prime sieve works by creating a list of all integers up to a desired limit and progressively removing composite numbers (which it directly generates) until only primes are left. This is the most efficient way to obtain a large range of primes; however, to find individual primes, direct primality tests are more efficient. Furthermore, based on the sieve formalisms, some integer sequences are constructed which also could be used for generating primes in certain intervals. Large primes For the large primes used in cryptography, provable primes can be generated based on variants of Pocklington primality test, while probable primes can be generated with probabilistic primality tests such as the Baillie–PSW primality test or the Miller–Rabin primality test. Both the provable and probable primality tests rely on modular exponentiation. To further reduce the computational c
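As an illustration of the sieve idea described above, here is a minimal sieve of Eratosthenes (the function name is ours, not from the article).

    # Sieve of Eratosthenes: repeatedly cross out the multiples of each prime.
    def primes_up_to(limit: int) -> list[int]:
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, limit + 1, p):
                    is_prime[multiple] = False
        return [n for n in range(2, limit + 1) if is_prime[n]]

    print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]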
https://en.wikipedia.org/wiki/Remote%20control%20animal
Remote control animals are animals that are controlled remotely by humans. Some applications require electrodes to be implanted in the animal's nervous system connected to a receiver which is usually carried on the animal's back. The animals are controlled by the use of radio signals. The electrodes do not move the animal directly, as if controlling a robot; rather, they signal a direction or action desired by the human operator and then stimulate the animal's reward centres if the animal complies. These are sometimes called bio-robots or robo-animals. They can be considered to be cyborgs as they combine electronic devices with an organic life form and hence are sometimes also called cyborg-animals or cyborg-insects. Because of the surgery required, and the moral and ethical issues involved, there has been criticism aimed at the use of remote control animals, especially regarding animal welfare and animal rights, especially when relatively intelligent complex animals are used. Non-invasive applications may include stimulation of the brain with ultrasound to control the animal. Some applications (used primarily for dogs) use vibrations or sound to control the movements of the animals. Several species of animals have been successfully controlled remotely. These include moths, beetles, cockroaches, rats, dogfish sharks, mice and pigeons. Remote control animals can be directed and used as working animals for search and rescue operations, covert reconnaissance, data-gathering in hazardous areas, or various other uses. Mammals Rats Several studies have examined the remote control of rats using micro-electrodes implanted into their brains and rely on stimulating the reward centre of the rat. Three electrodes are implanted; two in the ventral posterolateral nucleus of the thalamus which conveys facial sensory information from the left and right whiskers, and a third in the medial forebrain bundle which is involved in the reward process of the rat. This third electro
https://en.wikipedia.org/wiki/NASA%20insignia
The National Aeronautics and Space Administration (NASA) insignia has three main official designs, although the one with stylized red curved text (the "worm") was retired from official use from May 22, 1992, until April 3, 2020, when it was reinstated as a secondary logo. The three logos include the NASA insignia (also known as the "meatball"), the NASA logotype (also known as the "worm"), and the NASA seal. The NASA seal was approved by President Eisenhower in 1959, and slightly modified by President Kennedy in 1961. History The NASA logo dates from 1959, when the National Advisory Committee for Aeronautics (NACA) transformed into an agency that advanced both astronautics and aeronautics—the National Aeronautics and Space Administration. NASA seal In the NASA insignia design, the sphere represents a planet, the stars represent space, the red chevron is a wing representing aeronautics (the latest design in hypersonic wings at the time the logo was developed), and then the orbiting spacecraft going around the wing. It is known officially as the insignia. NASA "meatball" insignia After a NASA Lewis Research Center illustrator's design was chosen for the new agency's official seal, the executive secretary of NASA asked James Modarelli, the head of Reports Division at Lewis Research Center, to design a logo that could be used for less formal purposes. Modarelli simplified the seal, leaving only the white stars and orbital path on a round field of blue with a red vector. He then added white N-A-S-A lettering. George Neago created the original NASA "Meatball" logo selected and applied by NASA from 1958–63. Working as an industrial artist for the Lockheed Corporation's Missile Division in Palo Alto, California (a US Government and NASA contractor) from the mid-1950s to the 1990s, his graphics logo was selected in a graphics competition as the winning entry. James Modarelli was the Reports Department Manager at Lockheed, who supervised George Neago when George create
https://en.wikipedia.org/wiki/Psoas%20major%20muscle
The psoas major ( or ; from ) is a long fusiform muscle located in the lateral lumbar region between the vertebral column and the brim of the lesser pelvis. It joins the iliacus muscle to form the iliopsoas. In animals, this muscle is equivalent to the tenderloin. Structure The psoas major is divided into a superficial and a deep part. The deep part originates from the transverse processes of lumbar vertebrae L1–L5. The superficial part originates from the lateral surfaces of the last thoracic vertebra, lumbar vertebrae L1–L4, and the neighboring intervertebral discs. The lumbar plexus lies between the two layers. Together, the iliacus muscle and the psoas major form the iliopsoas, which is surrounded by the iliac fascia. The iliopsoas runs across the iliopubic eminence through the muscular lacuna to its insertion on the lesser trochanter of the femur. The iliopectineal bursa separates the tendon of the iliopsoas muscle from the external surface of the hip-joint capsule at the level of the iliopubic eminence. The iliac subtendinous bursa lies between the lesser trochanter and the attachment of the iliopsoas. Nerve supply Innervation of the psoas major is through the anterior rami of L1 to L3 nerves. Variation In fewer than 50 percent of human subjects, the psoas major is accompanied by the psoas minor muscle. One study using autopsy data found that the psoas major muscle is substantially thicker in men of African descent than in Caucasian men, and that the occurrence of the psoas minor is also ethnically variant, being present in most of the white subjects and absent in most of the black subjects. In mice, it is mostly a fast-twitching, type II muscle, while in humans it combines slow- and fast-twitching fibers. Function The psoas major joins the upper body and the lower body, the axial to the appendicular skeleton, the inside to the outside, and the back to the front. As part of the iliopsoas, psoas major contributes to flexion in the hip joint. On the lum
https://en.wikipedia.org/wiki/Standard%20social%20science%20model
The term standard social science model (SSSM) was first introduced by John Tooby and Leda Cosmides in the 1992 edited volume The Adapted Mind. They used SSSM as a reference to social science philosophies related to the blank slate, relativism, social constructionism, and cultural determinism. They argue that those philosophies, capsulized within SSSM, formed the dominant theoretical paradigm in the development of the social sciences during the 20th century. According to their proposed SSSM paradigm, the mind is a general-purpose cognitive device shaped almost entirely by culture. After establishing SSSM, Tooby and Cosmides make a case for replacing SSSM with the integrated model (IM), also known as the integrated causal model (ICM), which melds cultural and biological theories for the development of the mind. Supporters of SSSM include those who feel the term was conceived as a point of argument in support of ICM specifically and evolutionary psychology (EP) in general. There are criticisms that the allegation of SSSM is based on a straw man or rhetorical technique. Alleged proponents Steven Pinker names several prominent scientists as proponents of the standard social science model, including Franz Boas, Margaret Mead, B. F. Skinner, Richard Lewontin, John Money, and Stephen Jay Gould. Alternative theoretical paradigm: the integrated model The authors of The Adapted Mind have argued that the SSSM is now out of date and that a progressive model for the social sciences requires evolutionarily-informed models of nature-nurture interactionism, grounded in the computational theory of mind. Tooby and Cosmides refer to this new model as the integrated model (IM). Tooby and Cosmides provide several comparisons between the SSSM and the IM, including the following: Criticism of the coining of the term Richardson (2007) argues that, as proponents of evolutionary psychology (EP), evolutionary psychologists developed the SSSM as a rhetorical technique: The basic move
https://en.wikipedia.org/wiki/Aeroacoustics
Aeroacoustics is a branch of acoustics that studies noise generation via either turbulent fluid motion or aerodynamic forces interacting with surfaces. Noise generation can also be associated with periodically varying flows. A notable example of this phenomenon is the Aeolian tones produced by wind blowing over fixed objects. Although no complete scientific theory of the generation of noise by aerodynamic flows has been established, most practical aeroacoustic analysis relies upon the so-called aeroacoustic analogy, proposed by Sir James Lighthill in the 1950s while at the University of Manchester, whereby the governing equations of motion of the fluid are coerced into a form reminiscent of the wave equation of "classical" (i.e. linear) acoustics on the left-hand side, with the remaining terms as sources on the right-hand side. History The modern discipline of aeroacoustics can be said to have originated with the first publication of Lighthill in the early 1950s, when noise generation associated with the jet engine was beginning to be placed under scientific scrutiny. Lighthill's equation Lighthill rearranged the Navier–Stokes equations, which govern the flow of a compressible viscous fluid, into an inhomogeneous wave equation, thereby making a connection between fluid mechanics and acoustics. This is often called "Lighthill's analogy" because it presents a model for the acoustic field that is not, strictly speaking, based on the physics of flow-induced/generated noise, but rather on the analogy of how they might be represented through the governing equations of a compressible fluid. The first equation of interest is the conservation of mass equation, which reads Dρ/Dt + ρ∇·v = 0, where ρ and v represent the density and velocity of the fluid, which depend on space and time, and D/Dt is the substantial derivative. Next is the conservation of momentum equation, which is given by ρ Dv/Dt = −∇p + ∇·σ, where p is the thermodynamic pressure, and σ is the viscous (or traceless) part of the stress tensor from th
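For reference, the inhomogeneous wave equation that results from Lighthill's rearrangement is commonly quoted in the following form (the truncated text above stops before stating it; this is the standard form, not a quotation of the article).

    % Lighthill's acoustic analogy: density fluctuations obey a wave equation
    % driven by the double divergence of the Lighthill stress tensor T_ij.
    \[
    \frac{\partial^{2}\rho'}{\partial t^{2}} - c_{0}^{2}\,\nabla^{2}\rho'
      = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
    \qquad
    T_{ij} = \rho v_{i} v_{j} + \bigl[(p - p_{0}) - c_{0}^{2}(\rho - \rho_{0})\bigr]\delta_{ij} - \sigma_{ij},
    \]
    % where rho' = rho - rho_0 is the density fluctuation and c_0 is the speed
    % of sound in the medium at rest.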
https://en.wikipedia.org/wiki/Herbrand%27s%20theorem
Herbrand's theorem is a fundamental result of mathematical logic obtained by Jacques Herbrand (1930). It essentially allows a certain kind of reduction of first-order logic to propositional logic. Herbrand's theorem is the logical foundation for most automatic theorem provers. Although Herbrand originally proved his theorem for arbitrary formulas of first-order logic, the simpler version shown here, restricted to formulas in prenex form containing only existential quantifiers, became more popular. Statement Let (∃y1, …, yn) F(y1, …, yn) be a formula of first-order logic with F(y1, …, yn) quantifier-free, though it may contain additional free variables. This version of Herbrand's theorem states that the above formula is valid if and only if there exists a finite sequence of terms t_ij, possibly in an expansion of the language, with 1 ≤ i ≤ k and 1 ≤ j ≤ n, such that F(t_11, …, t_1n) ∨ … ∨ F(t_k1, …, t_kn) is valid. If it is valid, F(t_11, …, t_1n) ∨ … ∨ F(t_k1, …, t_kn) is called a Herbrand disjunction for (∃y1, …, yn) F(y1, …, yn). Informally: a formula in prenex form containing only existential quantifiers is provable (valid) in first-order logic if and only if a disjunction composed of substitution instances of its quantifier-free subformula is a tautology (propositionally derivable). The restriction to formulas in prenex form containing only existential quantifiers does not limit the generality of the theorem, because formulas can be converted to prenex form and their universal quantifiers can be removed by Herbrandization. Conversion to prenex form can be avoided if structural Herbrandization is performed. Herbrandization can be avoided by imposing additional restrictions on the variable dependencies allowed in the Herbrand disjunction. Proof sketch A proof of the non-trivial direction of the theorem can be constructed according to the following steps: If the formula is valid, then by completeness of cut-free sequent calculus, which follows from Gentzen's cut-elimination theorem, there is a cut-free proof of it. Starting from leaves and working downwards, remove the inferences that introduce existent
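As a small worked illustration (our example, not taken from the article): the purely existential formula below is valid, and a Herbrand disjunction for it needs two terms, c and f(c).

    % A valid purely existential formula and one of its Herbrand disjunctions
    % (illustrative example, not from the article):
    \[
    \models \exists y\,\bigl(P(y) \rightarrow P(f(y))\bigr)
    \qquad\text{and}\qquad
    \models \bigl(P(c) \rightarrow P(f(c))\bigr) \lor \bigl(P(f(c)) \rightarrow P(f(f(c)))\bigr).
    \]
    % The single instance P(c) -> P(f(c)) is not a propositional tautology, so at
    % least two substitution instances are needed in this Herbrand disjunction.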
https://en.wikipedia.org/wiki/Eusthenopteron
Eusthenopteron (from Greek eu, 'good', sthenos, 'strength', and pteron, 'wing' or 'fin') is a genus of prehistoric sarcopterygian (often called lobe-finned fishes) which has attained an iconic status from its close relationships to tetrapods. Early depictions of this animal show it emerging onto land; however, paleontologists now widely agree that it was a strictly aquatic animal. The genus Eusthenopteron is known from several species that lived during the Late Devonian period, about 385 million years ago. Eusthenopteron was first described by J. F. Whiteaves in 1881, as part of a large collection of fishes from Miguasha, Quebec. Some 2,000 Eusthenopteron specimens have been collected from Miguasha, one of which was the object of intensely detailed study and several papers from the 1940s to the 1990s by paleoichthyologist Erik Jarvik. Description Eusthenopteron is a medium to large sized tristichopterid; E. foordi is estimated to exceed , while E. jenkinsi probably reached . The earliest-known fossilized evidence of bone marrow has been found in Eusthenopteron, which may be the origin of bone marrow in tetrapods. Eusthenopteron shares many unique features in common with the earliest-known tetrapods. It shares a similar pattern of skull roofing bones with forms such as Ichthyostega and Acanthostega. Eusthenopteron, like other tetrapodomorph fishes, had internal nostrils (or a choana), which are one of the defining traits of tetrapodomorphs (including tetrapods). It also had labyrinthodont teeth, characterized by infolded enamel, which characterizes all of the earliest known tetrapods as well. Unlike the early tetrapods, they did not have larval gills. Classification Like other fish-like sarcopterygians, Eusthenopteron possessed a two-part cranium, which hinged at mid-length along an intracranial joint. Eusthenopteron's notoriety comes from the pattern of its fin endoskeleton, which bears a distinct humerus, ulna, and radius (in the fore-fin) and femur, tibia, and fibula (in the pe
https://en.wikipedia.org/wiki/Thickening%20agent
A thickening agent or thickener is a substance which can increase the viscosity of a liquid without substantially changing its other properties. Edible thickeners are commonly used to thicken sauces, soups, and puddings without altering their taste; thickeners are also used in paints, inks, explosives, and cosmetics. Thickeners may also improve the suspension of other ingredients or emulsions which increases the stability of the product. Thickening agents are often regulated as food additives and as cosmetics and personal hygiene product ingredients. Some thickening agents are gelling agents (gellants), forming a gel, dissolving in the liquid phase as a colloid mixture that forms a weakly cohesive internal structure. Others act as mechanical thixotropic additives with discrete particles adhering or interlocking to resist strain. Thickening agents can also be used when a medical condition such as dysphagia causes difficulty in swallowing. Thickened liquids play a vital role in reducing risk of aspiration for dysphagia patients. Many other food ingredients are used as thickeners, usually in the final stages of preparation of specific foods. These thickeners have a flavor and are not markedly stable, thus are not suitable for general use. However, they are very convenient and effective, and hence are widely used. Different thickeners may be more or less suitable in a given application, due to differences in taste, clarity, and their responses to chemical and physical conditions. For example, for acidic foods, arrowroot is a better choice than cornstarch, which loses thickening potency in acidic mixtures. At (acidic) pH levels below 4.5, guar gum has sharply reduced aqueous solubility, thus also reducing its thickening capability. If the food is to be frozen, tapioca or arrowroot are preferable over cornstarch, which becomes spongy when frozen. Types Food thickeners frequently are based on either polysaccharides (starches, vegetable gums, and pectin), or protein
https://en.wikipedia.org/wiki/Prescaler
A prescaler is an electronic counting circuit used to reduce a high frequency electrical signal to a lower frequency by integer division. The prescaler takes the basic timer clock frequency (which may be the CPU clock frequency or may be some higher or lower frequency) and divides it by some value before feeding it to the timer, according to how the prescaler register(s) are configured. The prescaler values, referred to as prescales, that may be configured might be limited to a few fixed values (powers of 2), or they may be any integer value from 1 to 2^P, where P is the number of prescaler bits. The purpose of the prescaler is to allow the timer to be clocked at the rate a user desires. For shorter (8 and 16-bit) timers, there will often be a tradeoff between resolution (high resolution requires a high clock rate) and range (high clock rates cause the timer to overflow more quickly). For example, one cannot (without some tricks) achieve 1 µs resolution and a 1 sec maximum period using a 16-bit timer. In this example using 1 µs resolution would limit the period to about 65ms maximum. However the prescaler allows tweaking the ratio between resolution and maximum period to achieve a desired effect. Example of use Prescalers are typically used at very high frequency to extend the upper frequency range of frequency counters, phase locked loop (PLL) synthesizers, and other counting circuits. When used in conjunction with a PLL, a prescaler introduces a normally undesired change in the relationship between the frequency step size and phase detector comparison frequency. For this reason, it is common to either restrict the integer to a low value, or use a dual-modulus prescaler in this application. A dual-modulus prescaler is one that has the ability to selectively divide the input frequency by one of two (normally consecutive) integers, such as 32 and 33. Common fixed-integer microwave prescalers are available in modulus 2, 4, 8, 5 and 10, and can operate at frequencie
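The resolution/range tradeoff in the 16-bit timer example can be made concrete with a little arithmetic; the sketch below assumes an illustrative 16 MHz input clock (not a value given in the article).

    # Resolution vs. maximum period of a 16-bit timer for several prescale values.
    clock_hz = 16_000_000   # assumed input clock before the prescaler
    timer_bits = 16

    for prescale in (1, 8, 16, 64, 256, 1024):
        tick = prescale / clock_hz              # seconds per timer tick (resolution)
        max_period = tick * (2 ** timer_bits)   # time until the timer overflows
        print(f"prescale {prescale:5d}: resolution {tick * 1e6:8.3f} us, "
              f"max period {max_period * 1e3:10.3f} ms")
    # prescale 16 gives 1 us resolution and a maximum period of about 65.5 ms,
    # matching the 16-bit example in the text.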
https://en.wikipedia.org/wiki/Star%20product
In mathematics, the star product is a method of combining graded posets with unique minimal and maximal elements, preserving the property that the posets are Eulerian. Definition The star product of two graded posets (P, ≤_P) and (Q, ≤_Q), where P has a unique maximal element 1̂ and Q has a unique minimal element 0̂, is a poset P∗Q on the set (P ∖ {1̂}) ∪ (Q ∖ {0̂}). We define the partial order ≤_{P∗Q} by x ≤ y if and only if: 1. x, y ∈ P, and x ≤_P y; 2. x, y ∈ Q, and x ≤_Q y; or 3. x ∈ P and y ∈ Q. In other words, we pluck out the top of P and the bottom of Q, and require that everything in P be smaller than everything in Q. Example For example, suppose P and Q are the Boolean algebra on two elements. Then P∗Q is the poset with the Hasse diagram below. Properties The star product of Eulerian posets is Eulerian. See also Product order, a different way of combining posets
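A small computational sketch (ours, not from the article; the representation of a poset as a set of ordered pairs is an assumption) builds the star product of two explicitly listed posets by applying the three rules above.

    # Star product of two finite posets, each given as (elements, relation),
    # where the relation is a set of pairs (x, y) meaning x <= y.
    def star_product(P, top_P, Q, bottom_Q):
        elems_P, le_P = P
        elems_Q, le_Q = Q
        elems = [x for x in elems_P if x != top_P] + [y for y in elems_Q if y != bottom_Q]
        le = set()
        for x in elems:
            for y in elems:
                if x in elems_P and y in elems_P and (x, y) in le_P:
                    le.add((x, y))   # rule 1: both come from P
                elif x in elems_Q and y in elems_Q and (x, y) in le_Q:
                    le.add((x, y))   # rule 2: both come from Q
                elif x in elems_P and y in elems_Q:
                    le.add((x, y))   # rule 3: everything from P lies below everything from Q
        return elems, le

    # Two 2-element chains 0 < 1 and 0' < 1'; their star product is the chain 0 < 1'.
    P = (["0", "1"], {("0", "0"), ("0", "1"), ("1", "1")})
    Q = (["0'", "1'"], {("0'", "0'"), ("0'", "1'"), ("1'", "1'")})
    print(star_product(P, "1", Q, "0'"))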
https://en.wikipedia.org/wiki/Stream%20capture
Stream capture, river capture, river piracy or stream piracy is a geomorphological phenomenon occurring when a stream or river drainage system or watershed is diverted from its own bed, and flows instead down the bed of a neighbouring stream. This can happen for several reasons, including: Tectonic earth movements, where the slope of the land changes, and the stream is tipped out of its former course Natural damming, such as by a landslide or ice sheet Erosion, either Headward erosion of one stream valley upwards into another, or Lateral erosion of a meander through the higher ground dividing the adjacent streams. Within an area of karst topography, where streams may sink, or flow underground (a sinking or losing stream) and then reappear in a nearby stream valley Glacier retreat The additional water flowing down the capturing stream may accelerate erosion and encourage the development of a canyon (gorge). The now-dry valley of the original stream is known as a wind gap. Capture mechanisms Sea level rise The Kaituna and Pelorus rivers, New Zealand: About 8,000 years ago, a single river was divided by sea water to form two rivers. Tectonic uplift Barmah Choke: About 25,000 years ago, an uplift of the plains near Moama on the Cadell Fault first dammed the Murray River and then forced it to take a new course. The new course dug its way through the so-called Barmah Choke and captured the lower course of the Goulburn River for . Indus-Sutlej-Sarasvati-Yamuna: The Yamuna earlier flowed into the Ghaggar-Hakra River (identified with the Sarasvati River) and later changed its course due to plate tectonics. The Sutlej River flowed into the current channel of the Ghaggar-Hakra River until the 13th century after which it was captured by the Indus River due to plate tectonics. Barrier Range: It was theorised that the original course of the Murray River was to a mouth near Port Pirie where a large delta is still visible protruding into the calm waters of Spencer Gulf.
https://en.wikipedia.org/wiki/Pluriharmonic%20function
In mathematics, precisely in the theory of functions of several complex variables, a pluriharmonic function is a real valued function which is locally the real part of a holomorphic function of several complex variables. Sometimes such a function is referred to as n-harmonic function, where n ≥ 2 is the dimension of the complex domain where the function is defined. However, in modern expositions of the theory of functions of several complex variables it is preferred to give an equivalent formulation of the concept, by defining a pluriharmonic function as a complex valued function whose restriction to every complex line is a harmonic function with respect to the real and imaginary part of the complex line parameter. Formal definition Let G ⊆ ℂ^n be a complex domain and f : G → ℝ be a (twice continuously differentiable) function. The function f is called pluriharmonic if, for every complex line {a + bz : z ∈ ℂ} formed by using every couple of complex tuples a, b ∈ ℂ^n, the function z ↦ f(a + bz) is a harmonic function on the set {z ∈ ℂ : a + bz ∈ G}. Let M be a complex manifold and f : M → ℝ be a C² function. The function f is called pluriharmonic if ∂∂̄f = 0. Basic properties Every pluriharmonic function is a harmonic function, but not the other way around. Further, it can be shown that for holomorphic functions of several complex variables the real (and the imaginary) parts are locally pluriharmonic functions. However a function being harmonic in each variable separately does not imply that it is pluriharmonic. See also Plurisubharmonic function Wirtinger derivatives Notes Historical references Notes from a course held by Francesco Severi at the Istituto Nazionale di Alta Matematica (which at present bears his name), containing appendices of Enzo Martinelli, Giovanni Battista Rizza and Mario Benedicty. An English translation of the title reads as: "Lectures on analytic functions of several complex variables – Lectured in 1956–57 at the Istituto Nazionale di Alta Matematica in Rome".
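As a quick illustration of the first definition (a standard example, not quoted from the article): the real part of a holomorphic function of several variables is pluriharmonic.

    % On C^2, the function
    \[
    u(z_1, z_2) = \operatorname{Re}(z_1 z_2)
    \]
    % is pluriharmonic: z_1 z_2 is holomorphic, and on any complex line
    % z -> a + bz the restriction of u is the real part of a holomorphic
    % function of one variable, hence harmonic.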
https://en.wikipedia.org/wiki/Pluripolar%20set
In mathematics, in the area of potential theory, a pluripolar set is the analog of a polar set for plurisubharmonic functions. Definition Let G ⊂ ℂ^n and let f : G → ℝ ∪ {−∞} be a plurisubharmonic function which is not identically −∞. The set P = {z ∈ G : f(z) = −∞} is called a complete pluripolar set. A pluripolar set is any subset of a complete pluripolar set. Pluripolar sets are of Hausdorff dimension at most 2n − 2 and have zero Lebesgue measure. If f is a holomorphic function then log |f| is a plurisubharmonic function. The zero set of f is then a pluripolar set. See also Skoda-El Mir theorem
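As a concrete instance of the last remark (a standard example, not quoted from the article), take the holomorphic function f(z) = z_1 on C^n.

    % The plurisubharmonic function u = log|z_1| is not identically -infinity, and
    \[
    \{\, z \in \mathbb{C}^n : u(z) = -\infty \,\} \;=\; \{\, z \in \mathbb{C}^n : z_1 = 0 \,\},
    \]
    % a complex hyperplane; its real dimension 2n - 2 is consistent with the
    % Hausdorff dimension bound stated above.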
https://en.wikipedia.org/wiki/Plurisubharmonic%20function
In mathematics, plurisubharmonic functions (sometimes abbreviated as psh, plsh, or plush functions) form an important class of functions used in complex analysis. On a Kähler manifold, plurisubharmonic functions form a subset of the subharmonic functions. However, unlike subharmonic functions (which are defined on a Riemannian manifold) plurisubharmonic functions can be defined in full generality on complex analytic spaces. Formal definition A function f : G → ℝ ∪ {−∞} with domain G ⊂ ℂ^n is called plurisubharmonic if it is upper semi-continuous, and for every complex line {a + bz : z ∈ ℂ} ⊂ ℂ^n with a, b ∈ ℂ^n the function z ↦ f(a + bz) is a subharmonic function on the set {z ∈ ℂ : a + bz ∈ G}. In full generality, the notion can be defined on an arbitrary complex manifold or even a complex analytic space X as follows. An upper semi-continuous function f : X → ℝ ∪ {−∞} is said to be plurisubharmonic if and only if for any holomorphic map φ : Δ → X the function f ∘ φ : Δ → ℝ ∪ {−∞} is subharmonic, where Δ ⊂ ℂ denotes the unit disk. Differentiable plurisubharmonic functions If f is of (differentiability) class C², then f is plurisubharmonic if and only if the hermitian matrix L_f = (λ_ij), called the Levi matrix, with entries λ_ij = ∂²f/∂z_i∂z̄_j is positive semidefinite. Equivalently, a C²-function f is plurisubharmonic if and only if i∂∂̄f is a positive (1,1)-form. Examples Relation to Kähler manifold: On n-dimensional complex Euclidean space ℂ^n, z ↦ |z|² is plurisubharmonic. In fact, i∂∂̄|z|² is equal to the standard Kähler form on ℂ^n up to constant multiples. More generally, if a function g satisfies i∂∂̄g = ω for some Kähler form ω, then g is plurisubharmonic and is called a Kähler potential. These can be readily generated by applying the ddbar lemma to Kähler forms on a Kähler manifold. Relation to Dirac Delta: On 1-dimensional complex Euclidean space ℂ, z ↦ log |z| is plurisubharmonic. If u is a C∞-class function with compact support, then the Cauchy integral formula can be used to show that u(0) = (1/2π) ∫_ℂ log |z| Δu(z) dA(z); in other words, the Laplacian of (1/2π) log |z| is nothing but the Dirac measure at the origin 0. More Examples If f is an analytic function on an open set, then log |f| is plurisubharmonic on that open set. Convex functions are plurisubh
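As a quick check of the differentiable criterion above (a standard computation, not quoted from the article), the Levi matrix of f(z) = |z|^2 is the identity, so the function is plurisubharmonic, and in fact strictly so.

    % Levi matrix of f(z) = |z|^2 = sum_k z_k conj(z_k) on C^n:
    \[
    \frac{\partial^{2} f}{\partial z_{i}\,\partial \bar z_{j}}
      = \frac{\partial}{\partial z_{i}}\, z_{j} = \delta_{ij},
    \]
    % so the Levi matrix is the identity, which is positive definite; hence
    % |z|^2 is (strictly) plurisubharmonic, as stated in the example above.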
https://en.wikipedia.org/wiki/384%20%28number%29
384 (three hundred [and] eighty-four) is the natural number following 383 and preceding 385. It is an even composite positive integer. In mathematics 384 is: the sum of a twin prime pair (191 + 193). the sum of six consecutive primes (53 + 59 + 61 + 67 + 71 + 73). the order of the hyperoctahedral group for n = 4 the double factorial of 8. an abundant number. the third 129-gonal number after 1, 129 and before 766 and 1275. a Harshad number in bases 2, 3, 4, 5, 7, 8, 9, 13, 17, and 62 other bases. a refactorable number. Computing Being a low multiple of a power of two, 384 occurs often in the field of computing. For example, it is the digest length in bits of the secure hash function SHA-384, the screen resolution of the Virtual Boy is 384×224, the maximum bit rate of MPEG-1 Audio Layer II encoding is 384 kbit/s, and in 3G phones the W-CDMA air interface offers data rates of up to 384 kbit/s.
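Several of the arithmetic facts listed above are easy to verify mechanically; the snippet below (illustrative, not from the article) checks a few of them.

    # Quick mechanical checks of some of the claims above.
    def is_prime(n: int) -> bool:
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    assert is_prime(191) and is_prime(193) and 191 + 193 == 384     # twin prime pair
    assert sum([53, 59, 61, 67, 71, 73]) == 384                     # sum of the six primes listed
    assert 8 * 6 * 4 * 2 == 384                                     # double factorial 8!!
    assert (2 ** 4) * (4 * 3 * 2 * 1) == 384                        # order of the hyperoctahedral group, n = 4
    assert (127 * 3 * 3 - 125 * 3) // 2 == 384                      # third 129-gonal number
    assert sum(d for d in range(1, 384) if 384 % d == 0) > 384      # abundant number
    print("all checks pass")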
https://en.wikipedia.org/wiki/One-page%20management%20system
The one-page management system (OPMS) is a set of methods to help people generate ideas through systematic brainstorming and to structure (or organize) ideas as needed for effective resolution of problems. G. S. Chandy invented OPMS, based on John N. Warfield's "interactive management" and "structural approach to system design". OPMS has been applied and codified by other entrepreneurial practitioners across the world, and has been summarized by Alexander Christakis, a long-standing collaborator of Warfield. OPMS aims to enable 'people-at-large' as well as experts to create and implement usable systems of all kinds—individual, organisational or societal. In brief, OPMS enables individual and group users to: Choose an appropriate 'mission' depending on problem/situation confronted; Identify the issue or problem, which provides a simple 'mission statement'; Integrate all the good ideas available to tackle the problem or issue at hand, and eliminate the bad ideas—with a view to enable accomplishment of the chosen mission. The structuring methods of OPMS have developed from the systems modeling tools invented by Warfield: Interpretive structural modeling (ISM); and Field representation & profiling method (FRP).
https://en.wikipedia.org/wiki/Laser%20ultrasonics
Laser-ultrasonics uses lasers to generate and detect ultrasonic waves. It is a non-contact technique used to measure material thickness, detect flaws and carry out materials characterization. The basic components of a laser-ultrasonic system are a generation laser, a detection laser and a detector. Ultrasound generation by laser The generation lasers are short-pulse (from tens of nanoseconds to femtoseconds), high-peak-power lasers. Common lasers used for ultrasound generation are solid-state Q-switched Nd:YAG and gas lasers (CO2 or excimers). The physical generation mechanism is either thermal expansion (also called the thermoelastic regime) or ablation. In the thermoelastic regime, the ultrasound is generated by the sudden thermal expansion due to the heating of a small area of the material surface by the laser pulse. If the laser power is sufficient to heat the surface above the material's boiling point, some material is evaporated (typically a few nanometres) and ultrasound is also generated by the recoil of the expanding evaporated material. In the ablation regime, a plasma is often formed above the material surface and its expansion can make a substantial contribution to the ultrasonic generation. Consequently, the emissivity patterns and modal content are different for the two mechanisms. The frequency content of the generated ultrasound is partially determined by the frequency content of the laser pulses, with shorter pulses giving higher frequencies. For very high frequency generation (up to hundreds of GHz), femtosecond lasers are used, often in a pump-probe configuration with the detection system (see picosecond ultrasonics). Historically, fundamental research into the nature of laser-ultrasonics was started in 1979 by Richard J. Dewhurst and Stuart B. Palmer. They set up a new laboratory in the Department of Applied Physics, University of Hull. Dewhurst provided the laser-matter expertise and Palmer the ultrasound expertise. Investigations were directed towards the develop
https://en.wikipedia.org/wiki/Genetic%20association
Genetic association is when one or more genotypes within a population co-occur with a phenotypic trait more often than would be expected by chance. Studies of genetic association aim to test whether single-locus allele or genotype frequencies (or, more generally, multilocus haplotype frequencies) differ between two groups of individuals (usually diseased subjects and healthy controls). Genetic association studies are based on the principle that genotypes can be compared "directly", i.e. with the sequences of the actual genomes or exomes via whole genome sequencing or whole exome sequencing. Before 2010, DNA sequencing methods were used. Description Genetic association can be between phenotypes (such as visible characteristics like flower color or height), between a phenotype and a genetic polymorphism (such as a single nucleotide polymorphism, SNP), or between two genetic polymorphisms. Association between genetic polymorphisms occurs when there is non-random association of their alleles as a result of their proximity on the same chromosome; this is known as genetic linkage. Linkage disequilibrium (LD) is a term used in the study of population genetics for the non-random association of alleles at two or more loci, not necessarily on the same chromosome. It is not the same as linkage, which is the phenomenon whereby two or more loci on a chromosome have reduced recombination between them because of their physical proximity to each other. LD describes a situation in which some combinations of alleles or genetic markers occur more or less frequently in a population than would be expected from a random formation of haplotypes from alleles based on their frequencies. Genetic association studies are performed to determine whether a genetic variant is associated with a disease or trait: if association is present, a particular allele, genotype or haplotype of a polymorphism or polymorphisms will be seen more often than expected by chance in an individual car
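As a rough illustration of the case-control comparison described above, the sketch below runs a chi-square test on a single hypothetical 2×2 allele-count table; the counts are invented, and real studies involve many variants plus corrections for multiple testing and population stratification.

```python
# Minimal single-SNP case-control association test on allele counts.
from scipy.stats import chi2_contingency

#                 allele A   allele a
allele_counts = [[480,       320],    # cases
                 [420,       380]]    # controls

chi2, p_value, dof, expected = chi2_contingency(allele_counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
# A small p-value suggests the allele frequencies differ between cases and
# controls, i.e. evidence of association at this locus.
```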
https://en.wikipedia.org/wiki/Plant%20milk
Plant milk is a plant beverage with a color resembling that of milk. Plant milks are non-dairy beverages made from a water-based plant extract for flavoring and aroma. Plant milks are consumed as alternatives to dairy milk, and may provide a creamy mouthfeel. As of 2021, there are about 17 different types of plant milks; almond, oat, soy, coconut, and pea are the highest-selling worldwide. Production of plant-based milks, particularly soy, oat, and pea milks, can offer environmental advantages over animal milks in terms of greenhouse gas emissions, land and water use. Plant-based beverages have been consumed for centuries, with the term "milk-like plant juices" used since the 13th century. In the 21st century, they are commonly referred to as plant-based milk, alternative milk, non-dairy milk or vegan milk. For commerce, plant-based beverages are typically packaged in containers similar and competitive to those used for dairy milk, but cannot be labeled as "milk" within the European Union. Across various cultures, plant milk has been both a beverage and a flavor ingredient in sweet and savory dishes, such as the use of coconut milk in curries. It is compatible with vegetarian and vegan lifestyles. Plant milks are also used to make ice cream alternatives, plant cream, vegan cheese, and yogurt-analogues, such as soy yogurt. The global plant milk market was estimated to reach 62billion by 2030. History Before commercial production of 'milks' from legumes, beans and nuts, plant-based mixtures resembling milk have existed for centuries. The Wabanaki and other Native American tribal nations in the northeastern United States made milk and infant formula from nuts. Horchata, a beverage originally made in North Africa from soaked, ground, and sweetened tiger nuts, spread to Iberia (now Spain) before the year 1000. In English, the word "milk" has been used to refer to "milk-like plant juices" since 1200 CE. Recipes from the 13th-century Levant exist describing almond m
https://en.wikipedia.org/wiki/Molecular%20Sciences%20Institute
The Molecular Sciences Institute (MSI), now located in Milpitas, California was founded in Berkeley, California by Sydney Brenner in 1996. Its mission was to operate as an independent non-profit research laboratory that combined genomic experimentation with computer modeling. Current efforts include co-curricular STEAM programming for youth and quick launch strategies for products/technologies addressing global health. In the last few years, MSI has integrated into the startup community through providing research and development facilities and mentorship. This organization has been supported by federal grants from the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA) and other funds provided by foundations and corporations.
https://en.wikipedia.org/wiki/Ethernet%20flow%20control
Ethernet flow control is a mechanism for temporarily stopping the transmission of data on Ethernet family computer networks. The goal of this mechanism is to avoid packet loss in the presence of network congestion. The first flow control mechanism, the pause frame, was defined by the IEEE 802.3x standard. The follow-on priority-based flow control, defined in the IEEE 802.1Qbb standard, provides a link-level flow control mechanism that can be controlled independently for each class of service (CoS) as defined by IEEE P802.1p. It is applicable to data center bridging (DCB) networks and allows traffic such as voice over IP (VoIP), video over IP, and database synchronization to be prioritized over default data traffic and bulk file transfers. Description A sending station (computer or network switch) may be transmitting data faster than the other end of the link can accept it. Using flow control, the receiving station can signal the sender requesting suspension of transmissions until the receiver catches up. Flow control on Ethernet can be implemented at the data link layer. The first flow control mechanism, the pause frame, was defined by the Institute of Electrical and Electronics Engineers (IEEE) task force that defined full duplex Ethernet link segments. The IEEE standard 802.3x was issued in 1997. Pause frame An overwhelmed network node can send a pause frame, which halts the transmission of the sender for a specified period of time. A media access control (MAC) frame (EtherType 0x8808) is used to carry the pause command, with the Control opcode set to 0x0001 (hexadecimal). Only stations configured for full-duplex operation may send pause frames. When a station wishes to pause the other end of a link, it sends a pause frame to either the unique 48-bit destination address of this link or to the 48-bit reserved multicast address 01-80-C2-00-00-01. The use of a well-known address makes it unnecessary for a station to discover and store the address of the station at the othe
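For illustration, the sketch below assembles the raw bytes of an 802.3x pause frame as described above; the source MAC address is a made-up placeholder and the frame check sequence is omitted.

```python
# Build an IEEE 802.3x pause frame (illustrative sketch, no FCS).
import struct

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    dst_mac = bytes.fromhex("0180c2000001")       # reserved multicast address
    ethertype = struct.pack("!H", 0x8808)         # MAC Control EtherType
    opcode = struct.pack("!H", 0x0001)            # PAUSE opcode
    pause_time = struct.pack("!H", pause_quanta)  # units of 512 bit times
    frame = dst_mac + src_mac + ethertype + opcode + pause_time
    return frame.ljust(60, b"\x00")               # pad to the 60-byte minimum (FCS excluded)

frame = build_pause_frame(bytes.fromhex("020000000001"), pause_quanta=0xFFFF)
print(frame.hex())
```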
https://en.wikipedia.org/wiki/Protein%20tag
Protein tags are peptide sequences genetically grafted onto a recombinant protein. Tags are attached to proteins for various purposes. They can be added to either end of the target protein, so they are either C-terminus or N-terminus specific or are both C-terminus and N-terminus specific. Some tags are also inserted at sites within the protein of interest; they are known as internal tags. Affinity tags are appended to proteins so that they can be purified from their crude biological source using an affinity technique. Affinity tags include chitin binding protein (CBP), maltose binding protein (MBP), Strep-tag and glutathione-S-transferase (GST). The poly(His) tag is a widely used protein tag, which binds to matrices bearing immobilized metal ions. Solubilization tags are used, especially for recombinant proteins expressed in species such as E. coli, to assist in the proper folding in proteins and keep them from aggregating in inclusion bodies. These tags include thioredoxin (TRX) and poly(NANP). Some affinity tags have a dual role as a solubilization agent, such as MBP and GST. Chromatography tags are used to alter chromatographic properties of the protein to afford different resolution across a particular separation technique. Often, these consist of polyanionic amino acids, such as FLAG-tag or polyglutamate tag. Epitope tags are short peptide sequences which are chosen because high-affinity antibodies can be reliably produced in many different species. These are usually derived from viral genes, which explain their high immunoreactivity. Epitope tags include ALFA-tag, V5-tag, Myc-tag, HA-tag, Spot-tag, T7-tag and NE-tag. These tags are particularly useful for western blotting, immunofluorescence and immunoprecipitation experiments, although they also find use in antibody purification. Fluorescence tags are used to give visual readout on a protein. Green fluorescent protein (GFP) and its variants are the most commonly used fluorescence tags. More ad
https://en.wikipedia.org/wiki/Free%20induction%20decay
In Fourier transform nuclear magnetic resonance spectroscopy, free induction decay (FID) is the observable NMR signal generated by non-equilibrium nuclear spin magnetization precessing about the magnetic field (conventionally along z). This non-equilibrium magnetization is generally created by applying a pulse of radio-frequency radiation close to the Larmor frequency of the nuclear spins. If the magnetization vector has a non-zero component in the xy plane, then the precessing magnetization will induce a corresponding oscillating voltage in a detection coil surrounding the sample. This time-domain signal (a sinusoid) is typically digitised and then Fourier transformed in order to obtain the frequency spectrum of the NMR signal, i.e. the NMR spectrum. The duration of the NMR signal is ultimately limited by T2 relaxation, but mutual interference of the different NMR frequencies present also causes the signal to be damped more quickly. When NMR frequencies are well-resolved, as is typically the case in the NMR of samples in solution, the overall decay of the FID is relaxation-limited and the FID is approximately exponential, with an effective time constant denoted T2* that is in general somewhat shorter than T2. FID durations will then be of the order of seconds for nuclei such as 1H. Particularly if a limited number of frequency components are present, the FID may be analysed directly for quantitative determinations of physical properties, such as the hydrogen content of aviation fuel or the solid-to-liquid ratio in dairy products (time-domain NMR). Advances in the development of quantum-scale sensors, particularly NV centres, have enabled the observation of the FID of single nuclei. When measuring the precession of a single nucleus, quantum mechanical measurement back-action has to be considered; in this special case the measurement itself also contributes to the decay, as predicted by quantum mechanics.
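A toy numerical model of the signal described above: a single resonance gives an exponentially damped complex sinusoid whose Fourier transform peaks at the offset frequency. All parameter values are illustrative.

```python
# Simulate a one-line FID and locate the peak of its spectrum.
import numpy as np

t2_star = 0.5          # effective decay constant T2*, in seconds
offset_hz = 100.0      # resonance offset from the carrier, in Hz
dwell = 1e-3           # sampling interval, in seconds
n_points = 4096

t = np.arange(n_points) * dwell
fid = np.exp(2j * np.pi * offset_hz * t) * np.exp(-t / t2_star)

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(n_points, d=dwell))
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"peak at {peak:.1f} Hz")   # about 100 Hz; linewidth ~ 1/(pi * T2*)
```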
https://en.wikipedia.org/wiki/Clinical%20ecology
Clinical ecology was the name given by proponents in the 1960s to the claim that exposure to low levels of certain chemical agents harms susceptible people, causing multiple chemical sensitivity and other disorders. Clinical ecologists are people who support and promote this offshoot of conventional medicine. They often have a background in the field of allergy or otorhinolaryngology, and the theoretical approach is derived in part from classic concepts of allergic responses, first articulated by Theron Randolph and developed by Richard Mackarness. Clinical ecologists support a cause-and-effect relationship for non-specific symptoms reported by some people after low-dose exposure to chemical, biologic, or physical agents. This pattern of low-dose reaction is not generally accepted by toxicologists. Although some of the mainstream medical community continue to reject these claims, the concept is gaining some recognition under the modern and more clearly articulated classification of environmental medicine. Training and qualifications The label "clinical ecologist" denotes an environmental approach that is consistent with the practice of holistic medicine. Practitioners with this orientation do not themselves use the term "clinical ecologist," although those opposed to this complementary medicine approach to illness often still do. Unlike terms such as physician or nurse, the term clinical ecologist is not legally regulated in any jurisdiction, which means that any person may legally claim to be a clinical ecologist. If desired, practitioners may obtain an extralegal certification or membership from the unregulated private organization American Academy of Environmental Medicine upon payment of a fee. Many clinical ecologists are traditionally licensed healthcare professionals who hold advanced traditional medical certifications. Others may have a more alternative training. History Randolph published a number of books to promote clinical ecology and environmental medicine, including: In 196
https://en.wikipedia.org/wiki/Quarantine%20%28antivirus%20program%29
Quarantine was an antivirus program from the early 1990s that automatically isolated infected files on a computer's hard disk. Files put in quarantine were then no longer capable of infecting their hosting system. Development and release In December 1988, shortly after the Morris Worm, work started on Quarantine, an anti-malware and file reliability product. Released in April 1989, Quarantine was the first such product to use file-signature rather than virus-signature methods. The original Quarantine used Hunt's B-tree database of files with both their CRC16 and CRC-CCITT signatures. Using two signatures rendered attacks based on CRC-invariant modifications useless, or at least immoderately difficult. Release 2, April 1990, used a CRC-32 signature and one based on CRC-32 but with a few bits in each word shuffled. The subsequent MS-AV from Microsoft, designed by Central Point, apparently relied on only an eight-bit checksum—at least, out of a few thousand files there were hundreds with identical signatures. Functionality Quarantine allowed suspect files to be deleted, moved to a quarantine area, or flagged in a report. Standard executables were scanned, or one could use up to twenty file-matching patterns; twenty exclusion patterns were available, and twenty directory paths could be included or excluded. The 1990 version also allowed background processing, checking of executables and libraries as a file is opened, and timing of checks—for example, if one opened a Word file, WORD and all its libraries could be checked immediately, every half an hour, once a day, every ten days, and so on. Quarantine allowed system managers to track all modifications of selected files or file structures, hence Quarantine users also got early warnings of failing disks or disk interface cards. Achievements In 1990 Quarantine received the LAN Magazine "Best of Year" security award. In that year "Quarantine" was reportedly responsible for finding the first stealth virus at the University o
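The dual-signature idea described above can be sketched in a few lines of modern Python; this uses CRC-32 and Adler-32 from the standard library purely for illustration, not the CRC16/CRC-CCITT pair used by the original product.

```python
# Record two independent checksums per file and flag files whose current
# signatures no longer match a stored baseline.
import zlib

def signatures(path: str) -> tuple[int, int]:
    with open(path, "rb") as f:
        data = f.read()
    return zlib.crc32(data), zlib.adler32(data)

def changed_files(baseline: dict[str, tuple[int, int]]) -> list[str]:
    """Return the paths whose current signatures differ from the baseline."""
    return [path for path, sig in baseline.items() if signatures(path) != sig]
```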
https://en.wikipedia.org/wiki/Universal%20code%20%28data%20compression%29
In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity. In general, most prefix codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message. Universal codes are generally not used for precisely known probability distributions, and no universal code is known to be optimal for any distribution used in practice. A universal code should not be confused with universal source coding, in which the data compression method need not be a fixed prefix code and the ratio between actual and optimal expected lengths must approach one. However, note that an asymptotically optimal universal code can be used on independent identically-distributed sources, by using increasingly large blocks, as a method of universal source coding. Universal and non-universal codes These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal: Elias gamma coding * Elias delta coding * ‡ Elias omega coding * ‡ Exp-Golomb coding *, which has Elias gamma coding as a special case. (Used in H
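As a concrete instance of one of the codes listed above, here is a minimal Python implementation of Elias gamma coding; the decoder assumes its input holds exactly one codeword.

```python
# Elias gamma coding: floor(log2 n) zero bits, then n in binary.
def elias_gamma_encode(n: int) -> str:
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str) -> int:
    zeros = len(bits) - len(bits.lstrip("0"))
    return int(bits[zeros:2 * zeros + 1], 2)

for n in (1, 2, 9, 1000):
    code = elias_gamma_encode(n)
    assert elias_gamma_decode(code) == n
    print(n, "->", code)
```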
https://en.wikipedia.org/wiki/Multiferroics
Multiferroics are defined as materials that exhibit more than one of the primary ferroic properties in the same phase: ferromagnetism – a magnetisation that is switchable by an applied magnetic field ferroelectricity – an electric polarisation that is switchable by an applied electric field ferroelasticity – a deformation that is switchable by an applied stress While ferroelectric ferroelastics and ferromagnetic ferroelastics are formally multiferroics, these days the term is usually used to describe the magnetoelectric multiferroics that are simultaneously ferromagnetic and ferroelectric. Sometimes the definition is expanded to include nonprimary order parameters, such as antiferromagnetism or ferrimagnetism. In addition, other types of primary order, such as ferroic arrangements of magnetoelectric multipoles of which ferrotoroidicity is an example, were proposed. Besides scientific interest in their physical properties, multiferroics have potential for applications as actuators, switches, magnetic field sensors and new types of electronic memory devices. History A Web of Science search for the term multiferroic yields the year 2000 paper "Why are there so few magnetic ferroelectrics?" from N. A. Spaldin (then Hill) as the earliest result. This work explained the origin of the contraindication between magnetism and ferroelectricity and proposed practical routes to circumvent it, and is widely credited with starting the modern explosion of interest in multiferroic materials. The availability of practical routes to creating multiferroic materials from 2000 stimulated intense activity. Particularly key early works were the discovery of large ferroelectric polarization in epitaxially grown thin films of magnetic BiFeO3, the observation that the non-collinear magnetic ordering in orthorhombic TbMnO3 and TbMn2O5 causes ferroelectricity, and the identification of unusual improper ferroelectricity that is compatible with the coexistence of magnetism in hexagonal man
https://en.wikipedia.org/wiki/Chow%20variety
In mathematics, particularly in the field of algebraic geometry, a Chow variety is an algebraic variety whose points correspond to effective algebraic cycles of fixed dimension and degree on a given projective space. More precisely, the Chow variety is the fine moduli variety parametrizing all effective algebraic cycles of dimension and degree in . The Chow variety may be constructed via a Chow embedding into a sufficiently large projective space. This is a direct generalization of the construction of a Grassmannian variety via the Plücker embedding, as Grassmannians are the case of Chow varieties. Chow varieties are distinct from Chow groups, which are the abelian group of all algebraic cycles on a variety (not necessarily projective space) up to rational equivalence. Both are named for Wei-Liang Chow(周煒良), a pioneer in the study of algebraic cycles. Background on algebraic cycles If X is a closed subvariety of of dimension , the degree of X is the number of intersection points between X and a generic -dimensional projective subspace of . Degree is constant in families of subvarieties, except in certain degenerate limits. To see this, consider the following family parametrized by t. . Whenever , is a conic (an irreducible subvariety of degree 2), but degenerates to the line (which has degree 1). There are several approaches to reconciling this issue, but the simplest is to declare to be a line of multiplicity 2 (and more generally to attach multiplicities to subvarieties) using the language of algebraic cycles. A -dimensional algebraic cycle is a finite formal linear combination . in which s are -dimensional irreducible closed subvarieties in , and s are integers. An algebraic cycle is effective if each . The degree of an algebraic cycle is defined to be . A homogeneous polynomial or homogeneous ideal in n-many variables defines an effective algebraic cycle in , in which the multiplicity of each irreducible component is the order of vanishing at
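A standard example of such a degenerating family (supplied for illustration, since the family meant in the text is not shown) is

$$
C_t = \{\, [x:y:z] \in \mathbb{P}^2 : y^2 - t\,xz = 0 \,\},
$$

which is a smooth conic of degree 2 for every t ≠ 0 but degenerates at t = 0 to the line y = 0; recording the limit as the cycle 2·{y = 0} keeps the degree constant across the family.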
https://en.wikipedia.org/wiki/Horn%20function
In the theory of special functions in mathematics, the Horn functions (named for Jakob Horn) are the 34 distinct convergent hypergeometric series of order two (i.e. having two independent variables), enumerated by (corrected by ). They are listed in . B. C. Carlson revealed a problem with the Horn function classification scheme. The total 34 Horn functions can be further categorised into 14 complete hypergeometric functions and 20 confluent hypergeometric functions. The complete functions, with their domain of convergence, are: while the confluent functions include: Notice that some of the complete and confluent functions share the same notation.
https://en.wikipedia.org/wiki/Unusual%20number
In number theory, an unusual number is a natural number n whose largest prime factor is strictly greater than √n. A k-smooth number has all its prime factors less than or equal to k; therefore, an unusual number is non-√n-smooth. Relation to prime numbers All prime numbers are unusual. For any prime p, its multiples less than p² are unusual, that is p, 2p, ..., (p−1)p, which have a density 1/p in the interval (p, p²). Examples The first few unusual numbers are 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, 31, 33, 34, 35, 37, 38, 39, 41, 42, 43, 44, 46, 47, 51, 52, 53, 55, 57, 58, 59, 61, 62, 65, 66, 67, ... The first few non-prime (composite) unusual numbers are 6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, 76, 77, 78, 82, 85, 86, 87, 88, 91, 92, 93, 94, 95, 99, 102, ... Distribution If we denote the number of unusual numbers less than or equal to n by u(n) then u(n) behaves as follows: Richard Schroeppel stated in 1972 that the asymptotic probability that a randomly chosen number is unusual is ln(2). In other words: External links Integer sequences
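A short Python sketch of the definition (largest prime factor p with p² > n) reproduces the sequence quoted above.

```python
# Generate the unusual numbers up to a bound.
def largest_prime_factor(n: int) -> int:
    largest, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            largest, n = d, n // d
        d += 1
    return n if n > 1 else largest

def is_unusual(n: int) -> bool:
    return n > 1 and largest_prime_factor(n) ** 2 > n

print([n for n in range(2, 68) if is_unusual(n)])
# [2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, ...]
```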
https://en.wikipedia.org/wiki/Paleornithology
Paleornithology, also known as avian paleontology, is the scientific study of bird evolution and fossil birds. It is a hybrid of ornithology and paleontology. Paleornithology began with the discovery of Archaeopteryx. The reptilian relationship of birds and their ancestors, the theropod dinosaurs, are important aspects of paleornithological research. Other areas of interest to paleornithologists are the early sea-birds Ichthyornis, Hesperornis, and others. Notable paleornithologists are Storrs L. Olson, Alexander Wetmore, Alan Feduccia, Cécile Mourer-Chauviré, Philip Ashmole, Pierce Brodkorb, Trevor H. Worthy, Zhou Zhonghe, Yevgeny Kurochkin, Bradley C. Livezey, Gareth J. Dyke, Luis M. Chiappe, Gerald Mayr and David Steadman.
https://en.wikipedia.org/wiki/Transaction%20authentication%20number
A transaction authentication number (TAN) is used by some online banking services as a form of single use one-time passwords (OTPs) to authorize financial transactions. TANs are a second layer of security above and beyond the traditional single-password authentication. TANs provide additional security because they act as a form of two-factor authentication (2FA). If the physical document or token containing the TANs is stolen, it will be useless without the password. Conversely, if the login data are obtained, no transactions can be performed without a valid TAN. Classic TAN TANs often function as follows: The bank creates a set of unique TANs for the user. Typically, there are 50 TANs printed on a list, enough to last half a year for a normal user; each TAN being six or eight characters long. The user picks up the list from the nearest bank branch (presenting a passport, an ID card or similar document) or is sent the TAN list through mail. The password (PIN) is mailed separately. To log on to their account, the user must enter user name (often the account number) and password (PIN). This may give access to account information but the ability to process transactions is disabled. To perform a transaction, the user enters the request and authorizes the transaction by entering an unused TAN. The bank verifies the TAN submitted against the list of TANs they issued to the user. If it is a match, the transaction is processed. If it is not a match, the transaction is rejected. The TAN has now been used and will not be recognized for any further transactions. If the TAN list is compromised, the user may cancel it by notifying the bank. However, as any TAN can be used for any transaction, TANs are still prone to phishing attacks where the victim is tricked into providing both password/PIN and one or several TANs. Further, they provide no protection against man-in-the-middle attacks (where an attacker intercepts the transmission of the TAN, and uses it for a
https://en.wikipedia.org/wiki/Bulletproof%20hosting
Bulletproof hosting (BPH) is technical infrastructure service provided by an Internet hosting service that is resilient to complaints of illicit activities, which serves criminal actors as a basic building block for streamlining various cyberattacks. BPH providers allow online gambling, illegal pornography, botnet command and control servers, spam, copyrighted materials, hate speech and misinformation, despite takedown court orders and law enforcement subpoenas, allowing such material in their acceptable use policies. BPH providers usually operate in jurisdictions which have lenient laws against such conduct. Most non-BPH service providers prohibit transferring materials over their network that would be in violation of their terms of service and the local laws of the incorporated jurisdiction, and oftentimes any abuse reports would result in takedowns to avoid their autonomous system's IP address block being blacklisted by other providers and by Spamhaus. History BPH first became the subject of research in 2006 when security researchers from VeriSign revealed the Russian Business Network, an internet service provider that hosted a phishing group, was responsible for about $150 million in phishing-related scams. RBN also become known for identity thefts, child pornography, and botnets. The following year, McColo, the web hosting provider responsible for more than 75% of global spam was shut down and de-peered by Global Crossing and Hurricane Electric after the public disclosure by then-Washington Post reporter Brian Krebs on his Security Fix blog on that newspaper. Difficulties Since any abuse reports to the BPH will be disregarded, in most cases, the whole IP block ("netblock") assigned to the BPH's autonomous system will be blacklisted by other providers and third party spam filters. Additionally, BPH also have difficulty in finding network peering points for establishing Border Gateway Protocol sessions, since routing a BPH provider's network can affect the
https://en.wikipedia.org/wiki/Pocket%20protein%20family
Pocket protein family consists of three proteins: RB – Retinoblastoma protein p107 – Retinoblastoma-like protein 1 p130 – Retinoblastoma-like protein 2 They play crucial roles in the metazoan cell cycle through interaction with members of the E2F transcription factors family.
https://en.wikipedia.org/wiki/Zein
Zein is a class of prolamine protein found in corn (maize). It is usually manufactured as a powder from corn gluten meal. Zein is one of the best understood plant proteins. Pure zein is clear, odorless, tasteless, hard, water-insoluble, and edible, and it has a variety of industrial and food uses. Commercial uses Historically, zein has been used in the manufacture of a wide variety of commercial products, including coatings for paper cups, soda bottle cap linings, clothing fabric, buttons, adhesives, coatings and binders. The dominant historical use of zein was in the textile fibers market where it was produced under the name "Vicara". With the development of synthetic alternatives, the use of zein in this market eventually disappeared. By using electrospinning, zein fibers have again been produced in the lab, where additional research will be performed to re-enter the fiber market. It can be used as a water and grease coating for paperboards and allows recyclability. Zein's properties make it valuable in processed foods and pharmaceuticals, in competition with insect shellac. It is now used as a coating for candy, nuts, fruit, pills, and other encapsulated foods and drugs. In the United States, it may be labeled as "confectioner's glaze" (which may also refer to shellac-based glazes) and used as a coating on bakery products or as "vegetable protein." It is classified as Generally Recognized as Safe (GRAS) by the U.S. Food and Drug Administration. For pharmaceutical coating, zein is preferred over food shellac, since it is all natural and requires less testing per the USP monographs. Zein can be further processed into resins and other bioplastic polymers, which can be extruded or rolled into a variety of plastic products. With increasing environmental concerns about synthetic coatings (such as PFOA) and the current higher prices of hydrocarbon-based petrochemicals, there is increased focus on zein as a raw material for a variety of nontoxic and renewable polym
https://en.wikipedia.org/wiki/Sinodelphys
Sinodelphys is an extinct mammal from the Early Cretaceous, estimated to be 125 million years old. It was discovered and described in 2003 in rocks of the Yixian Formation in Liaoning Province, China, by a team of scientists including Zhe-Xi Luo and John Wible. While initially suggested to be the oldest known metatherian, later studies interpreted it as a eutherian. Description Only one fossil specimen is known, a slab and counterslab given catalog number CAGS00-IG03. It is in the collection of the Chinese Academy of Geological Sciences. Sinodelphys szalayi grew only 15 cm (5.9 in) long and possibly weighed about 30 g (1.05 oz). Its fossilized skeleton is surrounded by impressions of fur and soft tissue, thanks to the exceptional sediment that preserves such details. Luo et al. (2003) inferred from the foot structure of Sinodelphys that it was a scansorial tree-dweller, like the contemporary Eomaia and modern opossums such as Didelphis. Sinodelphys probably hunted worms and insects. Taxonomy Sinodelphys szalayi, living in China around 125 million years ago, was initially interpreted as the earliest known metatherian. This makes it almost contemporary to the eutherian Acristatherium, which has been found in the same area. However, Bi et al. (2018) reinterpreted Sinodelphys as an early member of Eutheria. See also Eomaia Evolution of mammals
https://en.wikipedia.org/wiki/Domain%20model
In software engineering, a domain model is a conceptual model of the domain that incorporates both behavior and data. In ontology engineering, a domain model is a formal representation of a knowledge domain with concepts, roles, datatypes, individuals, and rules, typically grounded in a description logic. Overview A domain model is a system of abstractions that describes selected aspects of a sphere of knowledge, influence or activity (a domain). The model can then be used to solve problems related to that domain. The domain model is a representation of meaningful real-world concepts pertinent to the domain that need to be modeled in software. The concepts include the data involved in the business and rules the business uses in relation to that data. A domain model leverages natural language of the domain. A domain model generally uses the vocabulary of the domain, thus allowing a representation of the model to be communicated to non-technical stakeholders. It should not refer to any technical implementations such as databases or software components that are being designed. Usage A domain model is generally implemented as an object model within a layer that uses a lower-level layer for persistence and "publishes" an API to a higher-level layer to gain access to the data and behavior of the model. In the Unified Modeling Language (UML), a class diagram is used to represent the domain model. See also Domain-driven design (DDD) Domain layer Feature-driven development Logical data model OntoUML
https://en.wikipedia.org/wiki/Hiroshi%20Okamura
was a Japanese mathematician who made contributions to analysis and the theory of differential equations. He was a professor at Kyoto University. He discovered the necessary and sufficient conditions on initial value problems of ordinary differential equations for the solution to be unique. He also refined the second mean value theorem of integration. Works (posthumous)
https://en.wikipedia.org/wiki/Bedpan
A bedpan or bed pan is a device used as a receptacle for the urine and/or feces of a person who is confined to a bed and therefore not able to use a toilet or chamber pot. Bedpans can be either reusable or disposable, and include several different types. Reusable bedpans must be emptied, cleaned, and sanitized after each use and allow for urination or defecation while either sitting or lying in bed, as they are placed beneath the buttocks for use. Disposable bedpans are made of recycled and/or biodegradable materials, and are disposed of after a single use. Disposable bedpans or liners rest inside a reusable bedpan, which is needed to support the user's weight during use. Regular bedpans look similar to a toilet seat and toilet bowl combined, and have the largest capacity. Fracture or slipper bedpans are smaller than standard-size bedpans and have one flat end. These bedpans are designed specifically for people who have had a hip fracture or are recovering from a hip replacement. This type of bedpan may be used for those who cannot raise their hips high enough or roll over onto a regular-size bedpan. Bedpans have a weight limit, which is different depending on the material and style of the bedpan. For people who are over those weight limits, a bariatric bedpan can be used, which includes tapered edges for durability. Bedpans differ from chamber pots in both size and function. Chamber pots are larger and usually have handles and a lid. A bedpan is smaller, since it is placed in the bed and positioned under the person for use. Bedpans can have lids, but most do not, as they are immediately emptied or disposed of after use. Bedpans have a long single handle that can double as a spout, either for urine entry or for emptying after use. History The word bedpan was first seen in the literature of John Higgins in 1572, and one of the oldest known bedpans is on display in the Science Museum of London. It is a green, glazed earthenware bedpan that has been dated to the
https://en.wikipedia.org/wiki/Marker%20gene
In biology, a marker gene may have several meanings. In nuclear biology and molecular biology, a marker gene is a gene used to determine if a nucleic acid sequence has been successfully inserted into an organism's DNA. In particular, there are two sub-types of these marker genes: a selectable marker and a marker for screening. In metagenomics and phylogenetics, a marker gene is an orthologous gene group which can be used to delineate between taxonomic lineages. Selectable marker A selectable marker protects the organism from a selective agent that would normally kill it or prevent its growth. In a transformation reaction, depending on the transformation efficiency, only one in several million to billion cells may take up DNA. Rather than checking every single cell, scientists use a selective agent to kill all cells that do not contain the foreign DNA, leaving only the desired ones. Antibiotics are the most common selective agents. In bacteria, antibiotics are used almost exclusively. In plants, antibiotics that kill the chloroplast are often used as well, although tolerance to salts and growth-inhibiting hormones is becoming more popular. In mammals, resistance to antibiotics that would kill the mitochondria is used as a selectable marker. Screenable marker A screenable marker will make cells containing the gene look different. There are three types of screening commonly used: Green fluorescent protein makes cells glow green under UV light. A specialized microscope is required to see individual cells. Yellow and red versions are also available, so scientists can look at multiple genes at once. It is commonly used to measure gene expression. GUS assay (using β-glucuronidase) is an excellent method for detecting a single cell by staining it blue without using any complicated equipment. The drawback is that the cells are killed in the process. It is particularly common in plant science. Blue white screen is used in both bacteria and eukaryotic cells. The bacte
https://en.wikipedia.org/wiki/Serial%20number%20arithmetic
Many protocols and algorithms require the serialization or enumeration of related entities. For example, a communication protocol must know whether some packet comes "before" or "after" some other packet. The IETF (Internet Engineering Task Force) attempts to define "serial number arithmetic" for the purposes of manipulating and comparing these sequence numbers. In short, when the absolute serial number value decreases by more than half of the maximum value (e.g. 128 in an 8-bit value), it is considered to be "after" the former, whereas other decreases are considered to be "before". This task is rather more complex than it might first appear, because most algorithms use fixed-size (binary) representations for sequence numbers. It is often important for the algorithm not to "break down" when the numbers become so large that they are incremented one last time and "wrap" around their maximum numeric ranges (go instantly from a large positive number to 0 or a large negative number). Some protocols choose to ignore these issues and simply use very large integers for their counters, in the hope that the program will be replaced (or they will retire) before the problem occurs (see Y2K). Many communication protocols apply serial number arithmetic to packet sequence numbers in their implementation of a sliding window protocol. Some versions of TCP use protection against wrapped sequence numbers (PAWS). PAWS applies the same serial number arithmetic to packet timestamps, using the timestamp as an extension of the high-order bits of the sequence number. Operations on sequence numbers Only addition of a small positive integer to a sequence number and comparison of two sequence numbers are discussed. Only unsigned binary implementations are discussed, with an arbitrary size in bits noted throughout the RFC (and below) as "SERIAL_BITS". Addition Adding an integer to a sequence number is simple unsigned integer addition, followed by unsigned modulo operation to bring the r
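A minimal Python sketch of the rules summarized above, with the width parameter named SERIAL_BITS as in the RFC; the constants and test values are illustrative.

```python
# RFC 1982 style serial number arithmetic for a fixed-width counter.
SERIAL_BITS = 32
MOD = 2 ** SERIAL_BITS
HALF = 2 ** (SERIAL_BITS - 1)

def serial_add(s: int, n: int) -> int:
    # Additions of more than 2**(SERIAL_BITS - 1) - 1 are undefined by the RFC.
    assert 0 <= n <= HALF - 1
    return (s + n) % MOD

def serial_lt(a: int, b: int) -> bool:
    """True when a is 'before' b in serial number order."""
    return a != b and ((a < b and b - a < HALF) or (a > b and a - b > HALF))

assert serial_add(0xFFFFFFFF, 1) == 0   # wrap-around
assert serial_lt(0xFFFFFFF0, 5)         # 5 is "after" 0xFFFFFFF0 despite being numerically smaller
assert not serial_lt(5, 0xFFFFFFF0)
```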
https://en.wikipedia.org/wiki/Zeitgeber
A zeitgeber is any external or environmental cue that entrains or synchronizes an organism's biological rhythms, usually naturally occurring and serving to entrain to the Earth's 24-hour light/dark and 12-month cycles. History The term was first used by Jürgen Aschoff, one of the founders of the field of chronobiology. His work demonstrated the existence of endogenous (internal) biological clocks, which synchronize biological rhythms. In addition, he found that certain exogenous (external) cues, which he called zeitgebers, influence the timing of these internal clocks. Photic and nonphotic zeitgebers Zeitgebers include light (a more important zeitgeber than social interactions), atmospheric conditions, medication, temperature, social interactions, exercise, and eating/drinking patterns. Circadian rhythms Any biological process in the body that repeats itself over a period of approximately 24 hours and maintains this rhythm in the absence of external stimuli is considered a circadian rhythm. It is believed that the brain's suprachiasmatic nucleus (SCN), or internal pacemaker, is responsible for regulating the body's biological rhythms, influenced by a combination of internal and external cues. To maintain clock-environment synchrony, zeitgebers induce changes in the concentrations of the molecular components of the clock to levels consistent with the appropriate stage in the 24-hour cycle, a process termed entrainment. Early research into circadian rhythms suggested that most people preferred a day closer to 25–26 hours when isolated from external stimuli like daylight and timekeeping. However, this research was faulty because it failed to shield the participants from artificial light. Although subjects were shielded from time cues (like clocks) and daylight, the researchers were not aware of the phase-delaying effects of indoor electric lights. The subjects were allowed to turn on light when they were awake and to turn it off when they wanted to sleep. Electric light
https://en.wikipedia.org/wiki/Fixed%20action%20pattern
"Fixed action pattern" is an ethological term describing an instinctive behavioral sequence that is highly stereotyped and species-characteristic. Fixed action patterns are said to be produced by the innate releasing mechanism, a "hard-wired" neural network, in response to a sign/key stimulus or releaser. Once released, a fixed action pattern runs to completion. This term is often associated with Konrad Lorenz, who is the founder of the concept. Lorenz identified six characteristics of fixed action patterns. These characteristics state that fixed action patterns are stereotyped, complex, species-characteristic, released, triggered, and independent of experience. Fixed action patterns have been observed in many species, but most notably in fish and birds. Classic studies by Konrad Lorenz and Niko Tinbergen involve male stickleback mating behavior and greylag goose egg-retrieval behavior. Fixed action patterns have been shown to be evolutionarily advantageous, as they increase both fitness and speed. However, as a result of their predictability, they may also be used as a means of exploitation. An example of this exploitation would be brood parasitism. There are four exceptions to fixed action pattern rules: reduced response threshold, vacuum activity, displacement behavior, and graded response. Characteristics There are 6 characteristics of fixed action patterns. Fixed action patterns are said to be stereotyped, complex, species-characteristic, released, triggered, and independent of experience. Stereotyped: Fixed action patterns occur in rigid, predictable, and highly-structured sequences. Complex: Fixed action patterns are not a simple reflex. They are a complex pattern of behavior. Species-characteristic: Fixed action patterns occur in all members of a species of a certain sex and/or a given age when they have attained a specific level of arousal. Released: Fixed action patterns occur in response to a certain sign stimulus or releaser. Triggered: Once relea
https://en.wikipedia.org/wiki/Amanita%20caesarea
Amanita caesarea, commonly known as Caesar's mushroom, is a highly regarded edible mushroom in the genus Amanita, native to southern Europe and North Africa. While it was first described by Giovanni Antonio Scopoli in 1772, this mushroom was a known favorite of early rulers of the Roman Empire. It has a distinctive orange cap, yellow gills and stipe. Organic acids have been isolated from this species. Similar orange-capped species occur in North America and India. It was known to and valued by the Ancient Romans, who called it Boletus, a name now applied to a very different type of fungus. Although it is edible, the Caesar's mushroom is closely related to the psychoactive fly agaric, and to the deadly poisonous death cap and destroying angels. Taxonomy and naming Amanita caesarea was first described by Italian mycologist Giovanni Antonio Scopoli in 1772 as Agaricus caesareus, before later being placed in Amanita by Persoon in 1801. The common name comes from its being a favourite of the Roman emperors, who took the name Caesar (originally a family name) as a title. It was a personal favorite of Roman emperor Claudius. The Romans called it Bōlētus, derived from the Ancient Greek βωλίτης for this fungus as named by Galen. Several modern common names recognise this heritage with the English Caesar's mushroom and royal amanita, French impériale, Polish cesarski and German Kaiserling. In Italian, it is ovolo (pl. ovoli), due to its resemblance to an egg when very young. In Albanian it is kuqëlorja from its colour (< Albanian kuqe 'red'). Other common names include Amanite des Césars and Oronge. It has also been classified as A. umbonata. A. hemibapha is a similar species originally described from Sikkim, India. It is widely eaten in the Himalayas and the Tibetan areas. Also North American collections have been labeled in the past as A. hemibapha. The relationship of the similar North American species A. arkansana and A. jacksonii to A. caesarea is not clear. The ed
https://en.wikipedia.org/wiki/Sandhoff%20disease
Sandhoff disease is a lysosomal genetic, lipid storage disorder caused by the inherited deficiency to create functional beta-hexosaminidases A and B. These catabolic enzymes are needed to degrade the neuronal membrane components, ganglioside GM2, its derivative GA2, the glycolipid globoside in visceral tissues, and some oligosaccharides. Accumulation of these metabolites leads to a progressive destruction of the central nervous system and eventually to death. The rare autosomal recessive neurodegenerative disorder is clinically almost indistinguishable from Tay–Sachs disease, another genetic disorder that disrupts beta-hexosaminidases A and S. There are three subsets of Sandhoff disease based on when first symptoms appear: classic infantile, juvenile and adult late onset. Symptoms and signs Sandhoff disease symptoms are clinically indeterminable from Tay–Sachs disease. The classic infantile form of the disease has the most severe symptoms and is incredibly hard to diagnose at this early age. The first signs of symptoms begin before 6 months of age and the parents’ notice when the child begins regressing in their development. If the children had the ability to sit up by themselves or crawl they will lose this ability. This is caused by a slow deterioration of the muscles in the child's body from the buildup of GM2 gangliosides. Since the body is unable to create the enzymes it needs within the central nervous system, it is unable to attach to these gangliosides to break them apart and make them non-toxic. With this buildup there are several symptoms that begin to appear such as muscle/motor weakness, sharp reaction to loud noises, blindness, deafness, inability to react to stimulants, respiratory problems and infections, mental retardation, seizures, cherry red spots in the retina, enlarged liver and spleen (hepatosplenomegaly), pneumonia, or bronchopneumonia. The other two forms of Sandhoff disease have similar symptoms but to a lesser extent. Adult and juve
https://en.wikipedia.org/wiki/Cardiac%20cycle
The cardiac cycle is the performance of the human heart from the beginning of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole, following a period of robust contraction and pumping of blood, called systole. After emptying, the heart relaxes and expands to receive another influx of blood returning from the lungs and other systems of the body, before again contracting to pump blood to the lungs and those systems. A normally performing heart must be fully expanded before it can efficiently pump again. Assuming a healthy heart and a typical rate of 70 to 75 beats per minute, each cardiac cycle, or heartbeat, takes about 0.8 second to complete the cycle. There are two atrial and two ventricle chambers of the heart; they are paired as the left heart and the right heart—that is, the left atrium with the left ventricle, the right atrium with the right ventricle—and they work in concert to repeat the cardiac cycle continuously (see cycle diagram at right margin). At the start of the cycle, during ventricular diastole–early, the heart relaxes and expands while receiving blood into both ventricles through both atria; then, near the end of ventricular diastole–late, the two atria begin to contract (atrial systole), and each atrium pumps blood into the ventricle below it. During ventricular systole the ventricles are contracting and vigorously pulsing (or ejecting) two separated blood supplies from the heart—one to the lungs and one to all other body organs and systems—while the two atria are relaxed (atrial diastole). This precise coordination ensures that blood is efficiently collected and circulated throughout the body. The mitral and tricuspid valves, also known as the atrioventricular, or AV valves, open during ventricular diastole to permit filling. Late in the filling period the atria begin to contract (atrial systole) forcing a final crop of blood into the ventric
https://en.wikipedia.org/wiki/XML%20Encryption
XML Encryption, also known as XML-Enc, is a specification, governed by a W3C recommendation, that defines how to encrypt the contents of an XML element. Although XML Encryption can be used to encrypt any kind of data, it is nonetheless known as "XML Encryption" because an XML element (either an EncryptedData or EncryptedKey element) contains or refers to the cipher text, keying information, and algorithms. Both XML Signature and XML Encryption use the KeyInfo element, which appears as the child of a SignedInfo, EncryptedData, or EncryptedKey element and provides information to a recipient about what keying material to use in validating a signature or decrypting encrypted data. The KeyInfo element is optional: it can be attached in the message, or be delivered through a secure channel. XML Encryption is different from and unrelated to Transport Layer Security, which is used to send encrypted messages (including xml content, both encrypted and otherwise) over the internet. It has been reported that this specification has severe security concerns.
https://en.wikipedia.org/wiki/Phenylacetic%20acid
Phenylacetic acid (conjugate base phenylacetate), also known by various synonyms, is an organic compound containing a phenyl functional group and a carboxylic acid functional group. It is a white solid with a strong honey-like odor. Endogenously, it is a catabolite of phenylalanine. As a commercial chemical, because it can be used in the illicit production of phenylacetone (used in the manufacture of substituted amphetamines), it is subject to controls in countries including the United States and China. Occurrence Phenylacetic acid has been found to be an active auxin (a type of plant hormone), found predominantly in fruits. However, its effect is much weaker than the effect of the basic auxin molecule indole-3-acetic acid. In addition the molecule is naturally produced by the metapleural gland of most ant species and used as an antimicrobial. It is also the oxidation product of phenethylamine in humans following metabolism by monoamine oxidase and subsequent metabolism of the intermediate product, phenylacetaldehyde, by the aldehyde dehydrogenase enzyme; these enzymes are also found in many other organisms. Preparation This compound may be prepared by the hydrolysis of benzyl cyanide: Applications Phenylacetic acid is used in some perfumes, as it possesses a honey-like odor even in low concentrations. It is also used in penicillin G production and diclofenac production. It is also employed to treat type II hyperammonemia to help reduce the amounts of ammonia in a patient's bloodstream by forming phenylacetyl-CoA, which then reacts with nitrogen-rich glutamine to form phenylacetylglutamine. This compound is then excreted from the patient's body. It's also used in the illicit production of phenylacetone, which is used in the manufacture of methamphetamine. The sodium salt of phenylacetic acid, sodium phenylacetate, is used as a pharmaceutical drug for the treatment of urea cycle disorders, including as the combination drug sodium phenylacetate/sodium benzoate (Am
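The preparation that the sentence above introduces can be written out as an overall equation (a standard textbook scheme, supplied for clarity; in practice the hydrolysis is run with acid or base catalysis, and under acidic conditions the ammonia is obtained as an ammonium salt):

$$
\mathrm{C_6H_5CH_2CN} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{C_6H_5CH_2COOH} + \mathrm{NH_3}.
$$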
https://en.wikipedia.org/wiki/Biomedical%20technology
Biomedical technology is the application of engineering and technology principles to the domain of living or biological systems, with an emphasis on human health and diseases. Biomedical engineering and Biotechnology alike are often loosely called Biomedical Technology or Bioengineering. The Biomedical technology field is currently growing at a rapid pace. Biomedical news has often been reported on various platforms, including the MediUnite Journal; and required jobs for the industry expect to grow 23% by 2024, and with the pay averaging over $86,000. Biomedical technology involves: Biomedical science Biomedical informatics Biomedical research Biomedical engineering Bioengineering Biotechnology Biomedical technologies: Cloning Therapeutic cloning
https://en.wikipedia.org/wiki/Negation%20as%20failure
Negation as failure (NAF, for short) is a non-monotonic inference rule in logic programming, used to derive not p (i.e. that p is assumed not to hold) from failure to derive p. Note that not p can be different from the statement ¬p of the logical negation of p, depending on the completeness of the inference algorithm and thus also on the formal logic system. Negation as failure has been an important feature of logic programming since the earliest days of both Planner and Prolog. In Prolog, it is usually implemented using Prolog's extralogical constructs. More generally, this kind of negation is known as weak negation, in contrast with the strong (i.e. explicit, provable) negation. Planner semantics In Planner, negation as failure could be implemented as follows: if (not (goal p)), then (assert ¬p) which says that if an exhaustive search to prove p fails, then assert ¬p. This states that proposition p shall be assumed as "not true" in any subsequent processing. However, as Planner is not based on a logical model, a logical interpretation of the preceding remains obscure. Prolog semantics In pure Prolog, NAF literals of the form not p can occur in the body of clauses and can be used to derive other NAF literals. For example, given a small set of clauses, NAF derives the NAF literals of the atoms that cannot be proved, as well as the further conclusions that depend on them. Completion semantics The semantics of NAF remained an open issue until 1978, when Keith Clark showed that it is correct with respect to the completion of the logic program, where, loosely speaking, the "if" of each clause is interpreted as "if and only if", written as "iff". The NAF inference rule simulates reasoning explicitly with the completion, where both sides of the equivalence are negated and negation on the right-hand side is distributed down to atomic formulae. For example, to show a NAF literal not p, NAF simulates reasoning with the corresponding equivalences. In the non-propositional case, the completion needs to be augmented wit
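A rough Python sketch of the rule, assuming a toy propositional program; the clause names are invented for illustration and do not come from Planner or Prolog. A goal of the form "not p" succeeds exactly when the exhaustive search for p fails.

```python
# Minimal sketch of negation as failure over propositional clauses.
# Each head atom maps to a list of alternative bodies (lists of literals);
# a literal "not x" is a NAF literal. The clause names are illustrative.
CLAUSES = {
    "light_on": [["power", "not broken"]],
    "power": [[]],                        # a fact: power holds unconditionally
}

def solve(goal, depth=0, limit=20):
    """Try to derive `goal`; a NAF goal succeeds when the positive goal fails."""
    if depth > limit:                     # crude guard against runaway recursion
        return False
    if goal.startswith("not "):           # negation as failure
        return not solve(goal[4:], depth + 1, limit)
    for body in CLAUSES.get(goal, []):    # exhaustive search over clauses
        if all(solve(lit, depth + 1, limit) for lit in body):
            return True
    return False                          # every derivation attempt failed

print(solve("light_on"))    # True: power is provable, "broken" is not
print(solve("not broken"))  # True, by negation as failure
```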
https://en.wikipedia.org/wiki/Foundation%20Fieldbus
Foundation Fieldbus (styled Fieldbus) is an all-digital, serial, two-way communications system that serves as the base-level network in a plant or factory automation environment. It is an open architecture, developed and administered by FieldComm Group. It is targeted for applications using basic and advanced regulatory control, and for much of the discrete control associated with those functions. Foundation Fieldbus technology is mostly used in process industries, but has recently been implemented in powerplants. Two related implementations of Foundation Fieldbus have been introduced to meet different needs within the process automation environment. These two implementations use different physical media and communication speeds. Foundation Fieldbus H1 - Operates at 31.25 kbit/s and is generally used to connect to field devices and host systems. It provides communication and power over standard stranded twisted-pair wiring in both conventional and intrinsic safety applications. H1 is currently the most common implementation. HSE (High-speed Ethernet) - Operates at 100/1000 Mbit/s and generally connects input/output subsystems, host systems, linking devices and gateways. It doesn't currently provide power over the cable, although work is under way to address this using the IEEE802.3af Power over Ethernet (PoE) standard. Foundation Fieldbus was originally intended as a replacement for the 4-20 mA standard, and today it coexists alongside other technologies such as Modbus, Profibus, and Industrial Ethernet. Foundation Fieldbus today enjoys a growing installed base in many heavy process applications such as refining, petrochemicals, power generation, and even food and beverage, pharmaceuticals, and nuclear applications. Foundation Fieldbus was developed over a period of many years by the International Society of Automation, or ISA, as SP50. In 1996 the first H1 (31.25 kbit/s) specifications were released. In 1999 the first HSE (High Speed Ethernet) specifications
https://en.wikipedia.org/wiki/Ricinodendron
Ricinodendron is a plant genus in the family Euphorbiaceae first described as a genus in 1864. It includes only one known species, Ricinodendron heudelotii, native to tropical Africa from Senegal and Liberia east to Sudan and Tanzania and south to Mozambique and Angola. It produces an economically important oilseed. The tree is known as munguella (Angola), njangsa (Cameroon), bofeko (Zaire), wama (Ghana), okhuen (Nigeria), kishongo (Uganda), akpi (Ivory Coast), djansang, essang, ezezang and njasang. Two varieties of the tree species are recognized: R. heudelotii var. heudelotii in Ghana and R. heudelotii var. africanum in Nigeria and westwards. Taxonomy The mongongo fruit (Schinziophyton rautanenii) was previously considered a member of this genus but has since been placed into a genus of its own. Subspecies and varieties Ricinodendron heudelotii subsp. africanum (Müll.Arg.) J.Léonard - tropical Africa from Nigeria east to Sudan and Tanzania and south to Mozambique and Angola Ricinodendron heudelotii var. tomentellum (Hutch. & E.A.Bruce) Radcl.-Sm. - Kenya, Tanzania Formerly included, moved to Schinziophyton R. rautanenii - Schinziophyton rautanenii R. viticoides - Schinziophyton rautanenii Description The tree is fast growing and reaches a height between 20 and 50 m with a straight trunk which can have a diameter of up to 2.7 m. Its crown is broad and the roots are large and running. The bark is smooth with a grey colour. Inside, the bark is red when cut. Njangsa is a dioecious plant. The flowers are yellowish white, 5 mm long and form a long terminal panicle which measures between 15 and 40 cm. Flowering time is between April and May. Male panicles are larger and more slender than the female ones. Njangsa trees produce fruits that are typically two- or three-lobed and contain two cells in which the seeds lie. These seeds are red brown to black, rounded and some 1 cm in diameter. The seeds are oily in texture and can be bought either raw or dried. They have an odour remin
https://en.wikipedia.org/wiki/The%20Demon%20in%20the%20Freezer
The Demon in the Freezer is a 2002 nonfiction book on the biological weapon agents smallpox and anthrax and how the American government develops defensive measures against them. It was written by journalist Richard Preston, also author of the best-selling book The Hot Zone (1994), about ebolavirus outbreaks in Africa and Reston, Virginia and the U.S. government's response to them. The book is primarily an account of the Smallpox Eradication Program (1967–1980), the ongoing belief of the U.S. government that smallpox is still a potential bioterrorism agent, and the controversy over whether or not the remaining samples of smallpox virus in Atlanta and Moscow (the "demon" in the freezer) should be finally destroyed. However, the writer was overtaken by events—the 9/11 attacks and the anthrax letter incidents (called "Amerithrax"), both in 2001—and so much of the book interweaves the anthrax investigation with the smallpox material in a manner some critics have said is "awkward" and somewhat "disjointed". Synopsis Section 1, "Something in the Air", begins with a day-by-day account of the anthrax letter attacks in Florida and Washington, D.C., for the period 2 to 15 October 2001. Robert Stevens, a photo retoucher for the tabloid The Sun, was a victim and US Senator Tom Daschle was an intended victim. The facts of the FBI, the CDC and the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) are detailed. Section 2, "The Dreaming Demon", looks back to an outbreak of smallpox at St Walburga Hospital in Meschede, Germany. The successful efforts organized by local public health authorities and the WHO—including a textbook example of ring vaccination containment—are described. Section 3, "To Bhola Island", describes the varieties and evolution of poxviruses and the history of smallpox in particular. The story of the SEP (Smallpox Eradication Program, referred to throughout as "the Eradication"), led by DA Henderson and others is recounted. The more pe
https://en.wikipedia.org/wiki/List%20of%20animal%20sounds
Certain words in the English language represent animal sounds: the noises and vocalizations of particular animals, especially noises used by animals for communication. The words can be used as verbs or interjections in addition to nouns, and many of them are also specifically onomatopoeic. List of animal sounds See also Animal communication Animal epithet Animal language Bioacoustics Cat organ & piganino Cross-linguistic onomatopoeias Field recording List of animal names List of onomatopoeias "Old MacDonald Had a Farm" "The Fox (What Does the Fox Say?)"
https://en.wikipedia.org/wiki/Fixed-pattern%20noise
Fixed-pattern noise (FPN) is the term given to a particular noise pattern on digital imaging sensors, often noticeable during longer exposure shots, where particular pixels are susceptible to giving brighter intensities above the average intensity. Overview FPN is a general term that identifies a temporally constant lateral non-uniformity (forming a constant pattern) in an imaging system with multiple detector or picture elements (pixels). It is characterised by the same pattern of variation in pixel brightness occurring in images taken under the same illumination conditions in an imaging array. This problem arises from small differences in the individual responsivity of the sensor array (including any local post-amplification stages) that might be caused by variations in the pixel size, material or interference with the local circuitry. It might be affected by changes in the environment such as different temperatures, exposure times, etc. The term "fixed pattern noise" usually refers to two parameters. One is the dark signal non-uniformity (DSNU), which is the offset from the average across the imaging array at a particular setting (temperature, integration time) but with no external illumination; the other is the photo response non-uniformity (PRNU), which describes the gain or ratio between the optical power on a pixel and the electrical signal output. The latter is often simplified to a single value measured at e.g. 50% saturation level, implying a linear approximation of the not perfectly linear photo response non-linearity (PRNL). Often PRNU as defined above is subdivided into the pure "(offset) FPN", which is the part not dependent on temperature and integration time, and the integration time and temperature dependent "DSNU". Sometimes pixel noise, the average deviation from the array average under different illumination and temperature conditions, is specified. Pixel noise therefore gives a number (commonly expressed in rms) that identifies FPN in all permitted imaging conditions
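A minimal numerical sketch of the usual two-step correction implied by the DSNU/PRNU description above, assuming a previously measured dark frame (per-pixel offsets) and flat-field gain map; the array sizes and noise statistics are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dark = rng.normal(100, 5, (4, 4))        # DSNU: per-pixel offsets with no illumination
prnu = rng.normal(1.0, 0.02, (4, 4))     # PRNU: per-pixel gain (flat-field) map
scene = np.full((4, 4), 1000.0)          # ideal, uniform illumination

raw = scene * prnu + dark                # what the sensor reports in this toy model
corrected = (raw - dark) / prnu          # subtract the offset, then normalise the gain

print(np.allclose(corrected, scene))     # True: the fixed pattern is removed
```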
https://en.wikipedia.org/wiki/Wilhelm%20Normann
Wilhelm Normann (16 January 1870, in Petershagen – 1 May 1939, in Chemnitz) (sometimes also spelled Norman) was a German chemist who introduced the hydrogenation of fats in 1901. This invention, protected by German patent 141,029 in 1902, had a profound influence on the production of margarine and vegetable shortening. Early life and education His father, Julius Normann, was the principal of the elementary school and Selekta in Petershagen. His mother was Luise Normann, née Siveke. Normann attended primary school from 31 March 1877. At Easter of his sixth grade he moved to the Friedrichs Gymnasium in Herford. After his father applied for a teacher's job at the municipal secondary school in Kreuznach, Wilhelm changed to the Royal Secondary School in Kreuznach. He passed his examinations and left school at the age of 18. Career Normann began work at the Herford machine fat and oil factory Leprince & Siveke in 1888. The founder of that company was his uncle, Wilhelm Siveke. After running a branch of the company in Hamburg for two years, he started studying chemistry at the laboratory of Professor Carl Remigius Fresenius in Wiesbaden. From April 1892 Normann continued his studies at the department of oil analytics at the Berlin Institute of Technology under the supervision of Professor D. Holde. From 1895 to 1900 he studied chemistry under supervision of Prof. Claus and Prof. Willgerod and geology under supervision of Prof. Steinmann at the Albert Ludwigs University of Freiburg. There he received his doctorate in 1900 with a work about Beiträge zur Kenntnis der Reaktion zwischen unterchlorigsauren Salzen und primären aromatischen Aminen ("Contributions to the knowledge of the reactions of hypochlorite salts and primary aromatic amines"). In 1901 Normann was appointed as correspondent of the Federal Geological Institute. Primary work From 1901 to 1909 he was head of the laboratory at Leprince & Siveke, where he conducted investigations of the properties of fats and oils
https://en.wikipedia.org/wiki/Cryoprotectant
A cryoprotectant is a substance used to protect biological tissue from freezing damage (i.e. that due to ice formation). Arctic and Antarctic insects, fish and amphibians create cryoprotectants (antifreeze compounds and antifreeze proteins) in their bodies to minimize freezing damage during cold winter periods. Cryoprotectants are also used to preserve living materials in the study of biology and to preserve food products. For years, glycerol has been used in cryobiology as a cryoprotectant for blood cells and bull sperm, allowing storage in liquid nitrogen at temperatures around −196°C. However, glycerol cannot be used to protect whole organs from damage. Instead, many biotechnology companies are researching the development of other cryoprotectants more suitable for such uses. A successful discovery may eventually make possible the bulk cryogenic storage (or "banking") of transplantable human and xenobiotic organs. A substantial step in that direction has already occurred. Twenty-First Century Medicine has vitrified a rabbit kidney to -135 °C with their proprietary vitrification cocktail. Upon rewarming, the kidney was successfully transplanted into a rabbit, with complete functionality and viability, able to sustain the rabbit indefinitely as the sole functioning kidney. Mechanism Cryoprotectants operate by increasing the solute concentration in cells. However, in order to be biologically viable they must easily penetrate and must not be toxic to cells. Glass transition temperature Some cryoprotectants function by lowering the glass transition temperature of a solution or of a material. In this way, the cryoprotectant prevents actual freezing, and the solution maintains some flexibility in a glassy phase. Many cryoprotectants also function by forming hydrogen bonds with biological molecules as water molecules are displaced. Hydrogen bonding in aqueous solutions is important for proper protein and DNA function. Thus, as the cryoprotectant replaces the water mo
https://en.wikipedia.org/wiki/Volatility%20%28chemistry%29
In chemistry, volatility is a material quality which describes how readily a substance vaporizes. At a given temperature and pressure, a substance with high volatility is more likely to exist as a vapour, while a substance with low volatility is more likely to be a liquid or solid. Volatility can also describe the tendency of a vapor to condense into a liquid or solid; less volatile substances will more readily condense from a vapor than highly volatile ones. Differences in volatility can be observed by comparing how fast substances within a group evaporate (or sublimate in the case of solids) when exposed to the atmosphere. A highly volatile substance such as rubbing alcohol (isopropyl alcohol) will quickly evaporate, while a substance with low volatility such as vegetable oil will remain condensed. In general, solids are much less volatile than liquids, but there are some exceptions. Solids that sublimate (change directly from solid to vapor) such as dry ice (solid carbon dioxide) or iodine can vaporize at a similar rate as some liquids under standard conditions. Description Volatility itself has no defined numerical value, but it is often described using vapor pressures or boiling points (for liquids). High vapor pressures indicate a high volatility, while high boiling points indicate low volatility. Vapor pressures and boiling points are often presented in tables and charts that can be used to compare chemicals of interest. Volatility data is typically found through experimentation over a range of temperatures and pressures. Vapor pressure Vapor pressure is a measurement of how readily a condensed phase forms a vapor at a given temperature. A substance enclosed in a sealed vessel initially at vacuum (no air inside) will quickly fill any empty space with vapor. After the system reaches equilibrium and the rate of evaporation matches the rate of condensation, the vapor pressure can be measured. Increasing the temperature increases the amount of vapor that is f
https://en.wikipedia.org/wiki/Species%20complex
In biology, a species complex is a group of closely related organisms that are so similar in appearance and other features that the boundaries between them are often unclear. The taxa in the complex may be able to hybridize readily with each other, further blurring any distinctions. Terms that are sometimes used synonymously but have more precise meanings are cryptic species for two or more species hidden under one species name, sibling species for two (or more) species that are each other's closest relative, and species flock for a group of closely related species that live in the same habitat. As informal taxonomic ranks, species group, species aggregate, macrospecies, and superspecies are also in use. Two or more taxa that were once considered conspecific (of the same species) may later be subdivided into infraspecific taxa (taxa within a species, such as bacterial strains or plant varieties), which may be a complex ranking but it is not a species complex. In most cases, a species complex is a monophyletic group of species with a common ancestor, but there are exceptions. It may represent an early stage after speciation in which the species were separated for a long time period without evolving morphological differences. Hybrid speciation can be a component in the evolution of a species complex. Species complexes exist in all groups of organisms and are identified by the rigorous study of differences between individual species that uses minute morphological details, tests of reproductive isolation, or DNA-based methods, such as molecular phylogenetics and DNA barcoding. The existence of extremely similar species may cause local and global species diversity to be underestimated. The recognition of similar-but-distinct species is important for disease and pest control and in conservation biology although the drawing of dividing lines between species can be inherently difficult. Definition A species complex is typically considered as a group of close, but distin
https://en.wikipedia.org/wiki/Banzhaf%20power%20index
The Banzhaf power index, named after John Banzhaf (originally invented by Lionel Penrose in 1946 and sometimes called Penrose–Banzhaf index; also known as the Banzhaf–Coleman index after James Samuel Coleman), is a power index defined by the probability of changing an outcome of a vote where voting rights are not necessarily equally divided among the voters or shareholders. To calculate the power of a voter using the Banzhaf index, list all the winning coalitions, then count the critical voters. A critical voter is a voter who, if he changed his vote from yes to no, would cause the measure to fail. A voter's power is measured as the fraction of all swing votes that he could cast. There are some algorithms for calculating the power index, e.g., dynamic programming techniques, enumeration methods and Monte Carlo methods. Examples Voting game Simple voting game A simple voting game, taken from Game Theory and Strategy by Philip D. Straffin: [6; 4, 3, 2, 1] The numbers in the brackets mean a measure requires 6 votes to pass, and voter A can cast four votes, B three votes, C two, and D one. The winning groups, with their swing voters shown in parentheses, are as follows: AB (A, B), AC (A, C), ABC (A), ABD (A, B), ACD (A, C), BCD (B, C, D), ABCD (none). There are 12 total swing votes, so by the Banzhaf index, power is divided thus: A = 5/12, B = 3/12, C = 3/12, D = 1/12 U.S. Electoral College Consider the United States Electoral College. Each state has different levels of voting power. There are a total of 538 electoral votes. A majority vote is 270 votes. The Banzhaf power index would be a mathematical representation of how likely it is that a single state would be able to swing the vote. A state such as California, which is allocated 55 electoral votes, would be more likely to swing the vote than a state such as Montana, which has 3 electoral votes. Assume the United States is having a presidential election between a Republican (R) and a Democrat (D). For simplicity, suppose that only three states are participating: Californ
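The [6; 4, 3, 2, 1] game above can be checked with a short brute-force enumeration. This sketch (the function name and data layout are just illustrative choices) reproduces the 5/12, 3/12, 3/12, 1/12 split.

```python
from itertools import combinations

def banzhaf(quota, weights):
    """Count swing (critical) votes for each voter in a weighted voting game."""
    voters = list(weights)
    swings = {v: 0 for v in voters}
    total_swings = 0
    for r in range(1, len(voters) + 1):
        for coalition in combinations(voters, r):
            total = sum(weights[v] for v in coalition)
            if total >= quota:                        # winning coalition
                for v in coalition:
                    if total - weights[v] < quota:    # v is critical (a swing voter)
                        swings[v] += 1
                        total_swings += 1
    return swings, total_swings

swings, total = banzhaf(6, {"A": 4, "B": 3, "C": 2, "D": 1})
for voter, count in swings.items():
    print(f"{voter}: {count}/{total}")    # A: 5/12, B: 3/12, C: 3/12, D: 1/12
```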
https://en.wikipedia.org/wiki/Serial%20over%20LAN
Serial over LAN (SOL) is a mechanism that enables the input and output of the serial port of a managed system to be redirected over IP. Details On some managed systems, notably blade server systems, the serial ports on the managed computers are not normally connected to a traditional serial port socket. To allow users to access applications on these computers via the serial port, the input/output of the serial port is redirected to the network. For example, a user wishing to access a blade server via the serial port can telnet to a network address and log in. On the blade server the login will be seen as coming through the serial port. SOL is implemented as a payload type under the RMCP+ protocol in IPMI. See also Console redirection Emergency Management Services (EMS) IPMI LAN Shell shoveling
https://en.wikipedia.org/wiki/Wetware%20computer
A wetware computer is an organic computer (which can also be known as an artificial organic brain or a neurocomputer) composed of organic material "wetware" such as "living" neurons. Wetware computers composed of neurons are different from conventional computers because they use biological materials, and offer the possibility of substantially more energy-efficient computing. While a wetware computer is still largely conceptual, there has been limited success with construction and prototyping, which has acted as a proof of the concept's realistic application to computing in the future. The most notable prototypes have stemmed from the research completed by biological engineer William Ditto during his time at the Georgia Institute of Technology. His work constructing a simple neurocomputer capable of basic addition from leech neurons in 1999 was a significant discovery for the concept. This research acted as a primary example driving interest in the creation of these artificially constructed, but still organic brains. Overview The concept of wetware is an application of specific interest to the field of computer manufacturing. Moore's law, which states that the number of transistors that can be placed on a silicon chip doubles roughly every two years, has acted as a goal for the industry for decades, but as the size of computers continues to decrease, meeting this goal has become more difficult, and progress threatens to reach a plateau. Due to the difficulty of reducing the size of computers because of the size limitations of transistors and integrated circuits, wetware provides an unconventional alternative. A wetware computer composed of neurons is an ideal concept because, unlike conventional materials which operate in binary (on/off), a neuron can shift between thousands of states, constantly altering its chemical conformation, and redirecting electrical pulses through over 200,000 channels in any of its many synaptic connections. Because of this large differe
https://en.wikipedia.org/wiki/John%20Whitehead%20%28explorer%29
John Whitehead (30 June 1860 – 2 June 1899) was an English explorer, naturalist and professional collector of natural history specimens in Southeast Asia. He is the first documented person to reach the summit of Mount Kinabalu: this was in 1888, after annual attempts from 1885. Whitehead was born in Colney Hatch Lane, Muswell Hill, Middlesex to Jeffery Whitehead, a stockbroker, and his wife Jane Ashton Tinker. After education at Elstree, Hertfordshire and the Edinburgh Institution he faced health problems and was sent to recuperate to Engadine in Switzerland in 1881 and then to warm Corsica in 1882 where he discovered a bird new to science, the Corsican nuthatch. Whitehead travelled in Malacca, North Borneo, Java, and Palawan between 1885 and 1888, where he collected a number of zoological specimens new to science, including 45 new species of bird such as Whitehead's broadbill (Calyptomena whiteheadi), writing up his experiences in a book on his return. Between 1893 and 1896 he explored in the Philippines, again collecting many new species, including the Philippine eagle, the binomial name of which commemorates Whitehead's father Jeffery, who funded his expeditions. Several species are named after Whitehead: Whitehead's woolly bat Kerivoula whiteheadi Harpy fruit bat Harpyionycteris whiteheadi Whitehead's spiny rat Maxomys whiteheadi Luzon striped rat Chrotomys whiteheadi Tufted pygmy squirrel Exilisciurus whiteheadi Whitehead's Borneo frog - Meristogenys whiteheadi White-winged magpie Urocissa whiteheadi Whitehead's broadbill Calyptomena whiteheadi Whitehead's trogon Harpactes whiteheadi Whitehead's spiderhunter Arachnothera juliae Spotted Wood-owl Strix seloputo, formerly Surnia whiteheadi Whitehead's swiftlet Collocalia whiteheadi Bornean stubtail Urosphena whiteheadi Chestnut-faced babbler Zosterornis whiteheadi Corsican nuthatch Sitta whiteheadi Hainan silver pheasant Lophura nycthemera whiteh
https://en.wikipedia.org/wiki/Duncan%27s%20new%20multiple%20range%20test
In statistics, Duncan's new multiple range test (MRT) is a multiple comparison procedure developed by David B. Duncan in 1955. Duncan's MRT belongs to the general class of multiple comparison procedures that use the studentized range statistic qr to compare sets of means. David B. Duncan developed this test as a modification of the Student–Newman–Keuls method that would have greater power. Duncan's MRT is especially protective against false negative (Type II) error at the expense of having a greater risk of making false positive (Type I) errors. Duncan's test is commonly used in agronomy and other agricultural research. The result of the test is a set of subsets of means, where in each subset means have been found not to be significantly different from one another. This test is often followed by the Compact Letter Display (CLD) methodology that renders the output of such test much more accessible to non-statistician audiences. Definition Assumptions: 1. A sample of observed means m1, m2, ..., mn, which have been drawn independently from n normal populations with "true" means μ1, μ2, ..., μn, respectively. 2. A common standard error σ. This standard error is unknown, but there is available the usual estimate s, which is independent of the observed means and is based on a number of degrees of freedom, denoted by ν. (More precisely, s has the property that ν·s²/σ² is distributed as chi-squared with ν degrees of freedom, independently of the sample means.) The exact definition of the test is: The difference between any two means in a set of n means is significant provided the range of each and every subset which contains the given means is significant according to an αp-level range test, where αp = 1 − (1 − α)^(p−1) and p is the number of means in the subset concerned. Exception: The sole exception to this rule is that no difference between two means can be declared significant if the two means concerned are both contained in a subset of the means which has a non-significant range. Procedure The procedure consists of a series of pa
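Under the definition above, and assuming a base significance level of α = 0.05, the protection levels that Duncan's test applies to subsets of p means work out as follows.

```latex
% Duncan's protection level for a subset of p means, assuming \alpha = 0.05
\alpha_p = 1 - (1-\alpha)^{p-1}:\qquad
\alpha_2 = 0.05,\quad
\alpha_3 = 1 - 0.95^{2} = 0.0975,\quad
\alpha_4 = 1 - 0.95^{3} \approx 0.1426
```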
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Ireland
This is a list of flags which have been, or are still today, used in Ireland. Island of Ireland The following flags have been used to represent the island of Ireland as a whole, either officially or unofficially. Northern Ireland Republic of Ireland Defence Forces flags Naval Service Air Corps Army Defence Force Training Centre (DFTC) Coast Guard Traditional province flags City and town flags Sporting flags Ensigns Historical military flags University flags Organisations Political flags Religious flags Former national flag proposals Other former flag proposals See also Cross-border flag for Ireland GAA county colours List of flags used in Northern Ireland Northern Ireland flags issue
https://en.wikipedia.org/wiki/Abscission
Abscission () is the shedding of various parts of an organism, such as a plant dropping a leaf, fruit, flower, or seed. In zoology, abscission is the intentional shedding of a body part, such as the shedding of a claw, husk, or the autotomy of a tail to evade a predator. In mycology, it is the liberation of a fungal spore. In cell biology, abscission refers to the separation of two daughter cells at the completion of cytokinesis. In plants Function A plant will abscise a part either to discard a member that is no longer necessary, such as a leaf during autumn, or a flower following fertilisation, or for the purposes of reproduction. Most deciduous plants drop their leaves by abscission before winter, whereas evergreen plants continuously abscise their leaves. Another form of abscission is fruit drop, when a plant abscises fruit while still immature in order to conserve resources needed to bring the remaining fruit to maturity. If a leaf is damaged, a plant may also abscise it to conserve water or photosynthetic efficiency, depending on the 'costs' to the plant as a whole. The abscission layer is a greenish-greyish color. Abscission can also occur in premature leaves as a means of plant defense. Premature leaf abscission has been shown to occur in response to infestation by gall aphids. By abscising leaves that have been made host to aphid galls, plants have been shown to massively diminish the pest population, as 98% of aphids in abscised galls died. The abscission is selective, and the chance of dropping leaves increases as the number of galls increases. A leaf with three or more galls was four times more likely to abscise than a leaf with one, and 20 times as likely to be dropped as a leaf without any galls. Process Abscission occurs in a series of three events: 1) resorption, 2) protective layer formation, and 3) detachment. Steps 2 and 3 may occur in either order depending on the species. Resorption Resorption involves degrading chlorophyll to extract the
https://en.wikipedia.org/wiki/Patch%20test
A patch test is a diagnostic method used to determine which specific substances cause allergic inflammation of a patient's skin. Patch testing helps identify which substances may be causing a delayed-type allergic reaction in a patient and may identify allergens not identified by blood testing or skin prick testing. It is intended to produce a local allergic reaction on a small area of the patient's back, where the diluted chemicals are placed. The chemicals included in the patch test kit are the offenders in approximately 85–90 percent of contact allergic eczema and include chemicals present in metals (e.g., nickel), rubber, leather, formaldehyde, lanolin, fragrance, toiletries, hair dyes, medicine, pharmaceutical items, food, drink, preservatives, and other additives. Mechanism A patch test relies on the principle of a type IV hypersensitivity reaction. The first step in becoming allergic is sensitization. When skin is exposed to an allergen, the antigen-presenting cells (APCs) – also known as Langerhans cells or dermal dendritic cells – phagocytize the substance, break it down into smaller components and present them on their surface bound to major histocompatibility complex class II (MHC-II) molecules. The APC then travels to a lymph node, where it presents the displayed allergen to a CD4+ T-cell, or T-helper cell. The T-cell undergoes clonal expansion and some clones of the newly formed antigen-specific sensitized T-cells travel back to the site of antigen exposure. When the skin is again exposed to the antigen, the memory T-cells in the skin recognize the antigen and produce cytokines (chemical signals), which cause more T-cells to migrate from blood vessels. This starts a complex immune cascade leading to skin inflammation, itching, and the typical rash of contact dermatitis. In general, it takes 2–4 days for a response in patch testing to develop. The patch test is just induction of contact dermatitis in a small area. Process Application of the patch t
https://en.wikipedia.org/wiki/Phoenix-RTOS
Phoenix-RTOS is a real-time operating system designed for Internet of Things appliances. The main goal of the system is to facilitate the creation of "Software Defined Solutions". History Phoenix-RTOS is the successor to the Phoenix operating system, developed from 1999 to 2001 by Pawel Pisarczyk at the Department of Electronics and Information Technology at Warsaw University of Technology. Phoenix was originally implemented for IA-32 microprocessors and was adapted to the ARM7TDMI processor in 2003, and the PowerPC in 2004. The system is available under the GPL license. Phoenix-RTOS 2.0 The decision to abandon the development of Phoenix and write the Phoenix-RTOS from scratch was taken by its creator in 2004. In 2010, the Phoenix Systems company was established, aiming to commercialize the system. Phoenix-RTOS 2.0 is based on a monolithic kernel. Initially versions for the IA-32 processor and configurable eSi-RISC were developed. In cooperation with NXP Semiconductors, Phoenix-RTOS 2.0 was also adapted to the Vybrid (ARM Cortex-A5) platform. This version is equipped with PRIME (Phoenix-PRIME) and the G3-PLC (Phoenix-G3) protocol support, used in Smart Grid networks. Phoenix-RTOS runs applications designed and written for the Unix operating system. Phoenix-RTOS 3.0 Phoenix-RTOS version 3.0 is based on a microkernel. It is geared towards measuring devices with low power consumption. The main problem with the first implementation was low kernel modularity and difficulties with the management process of software development (device drivers, file system drivers). It is an open source operating system (on BSD license), available on GitHub. HaaS modules The Phoenix-RTOS can be equipped with HaaS (Hardware as a Software) modules that allow the implementation of rich devices functionality, e.g. modems. Existing HaaS modules include: Phoenix-PRIME - software implementation of PRIME PLC standard certified in 2014. Phoenix-G3 - a software implementation of the G3
https://en.wikipedia.org/wiki/Bob%20Moses%20%28activist%29
Robert Parris Moses (January 23, 1935 – July 25, 2021) was an American educator and civil rights activist known for his work as a leader of the Student Nonviolent Coordinating Committee (SNCC) on voter education and registration in Mississippi during the Civil Rights Movement, and his co-founding of the Mississippi Freedom Democratic Party. As part of his work with the Council of Federated Organizations (COFO), a coalition of the Mississippi branches of the four major civil rights organizations (SNCC, CORE, NAACP, SCLC), he was the main organizer for the Freedom Summer Project. Born and raised in Harlem, he was a graduate of Hamilton College and later earned a Master's degree in philosophy at Harvard University. He spent the 1960s working in the civil rights and anti-war movements, until he was drafted in 1966 and left the country, spending much of the following decade in Tanzania, teaching and working with the Ministry of Education. After returning to the US, in 1982, Moses received a MacArthur Fellowship and began developing the Algebra Project. The math literacy program emphasizes teaching algebra skills to minority students based on broad-based community organizing and collaboration with parents, teachers, and students, to improve college and job readiness. Early life Robert Parris Moses was born January 23, 1935, in New York City. His parents, Gregory H. Moses, a janitor, and Louise (Parris) Moses, a homemaker, raised their three children in the public housing complex, Harlem River Houses, with frequent visits to the public library. He graduated from Stuyvesant High School in 1952 and received his B.A. from Hamilton College in 1956. At Hamilton he majored in philosophy and French and played basketball. In 1957, he earned an M.A. in philosophy at Harvard, and was working toward a PhD but his mother's death and father's hospitalization brought him back to New York City, and in 1958 began teaching math at the Horace Mann School in the Bronx of New York City. Al
https://en.wikipedia.org/wiki/List%20of%20New%20Zealand%20flags
This is a list of flags of New Zealand. It includes flags that either have been in use or are currently used by institutions, local authorities, or the government of New Zealand. Some flags have historical or cultural (e.g. Māori culture) significance. National flags Royal and viceregal Ensigns Associated states and territories Regions and cities Māori flags Sporting flags Other New Zealand flags Proposed alternative flags Notes
https://en.wikipedia.org/wiki/David%20P.%20Anderson
David Pope Anderson (born 1955) is an American research scientist at the Space Sciences Laboratory, at the University of California, Berkeley, and an adjunct professor of computer science at the University of Houston. Anderson leads the SETI@home, BOINC, Bossa, and Bolt software projects. Education Anderson received a BA in mathematics from Wesleyan University, and MS and PhD degrees in mathematics and computer science from the University of Wisconsin–Madison. While in graduate school he published four research papers in computer graphics. His PhD research involved using enhanced attribute grammars to specify and implement communication protocols. Career From 1985 to 1992 he was an assistant professor in the UC Berkeley Computer Science Department, where he received the NSF Presidential Young Investigator and IBM Faculty Development awards. During this period he conducted several research projects: FORMULA (Forth Music Language), a parallel programming language and runtime system for computer music based on Forth. MOOD (Musical Object-Oriented Dialect), a parallel programming language and runtime system for computer music based on C++. A port for MS-DOS also exists. DASH, a distributed operating system with support for digital audio and video. Continuous Media File System (CMFS), a file system for digital audio and video Comet, an I/O server for digital audio and video. From 1992 to 1994 he worked at Sonic Solutions, where he developed Sonic System, the first distributed system for professional digital audio editing. Inventions In 1994 he invented "Virtual Reality Television", a television system allowing viewers to control their virtual position and orientation. He was awarded a patent for this invention in 1996. In 1994 he developed one of the first systems for collaborative filtering, and developed a web site, rare.com, that provided movie recommendations based on the user's movie ratings. From 1995 to 1998 he was chief technical officer of Tune
https://en.wikipedia.org/wiki/Givaudan
Givaudan () is a Swiss multinational manufacturer of flavours, fragrances and active cosmetic ingredients. As of 2008, it is the world's largest company in the flavour and fragrance industries. Overview The company's scents and flavours are developed for food and beverage makers, and also used in household goods, as well as grooming and personal care products and perfumes. The company has two business areas: Taste & Wellbeing offers flavours, taste, functional and nutritional solutions for the food industry (savoury, dairy, sweets and beverages). Fragrance & Beauty creates fragrances and develop beauty and wellbeing solutions for personal care, fabric care, hygiene, home care, fine fragrances, and beauty. Givaudan's flavours and fragrances are usually custom-made and sold under confidentiality agreements. Givaudan uses ScentTrek, a technology that captures the chemical makeup of smell from living plants. The company has locations in Europe, Africa and the Middle East, North America, Latin America and Asia Pacific. In 2022, Givaudan had sales of CHF 7.1 billion. It is one of Switzerland's 30 biggest listed companies in terms of market capitalization. In 2021, Givaudan placed first on FoodTalks' Global Top 50 Food Flavours and Fragrances Companies list. The company’s purpose of ‘Creating for happier, healthier lives with love for nature. Let’s imagine together’ is focused in four domains: creations, nature, people and communities. The company's ambitions include doubling its business through creations that contribute to happier, healthier lives by 2030, becoming climate positive before 2050, becoming a leading employer for inclusion before 2025 and sourcing all materials and services in a way that protects the environment and people by 2030. Givaudan’s purpose goal areas are in line with its strategy and ambitions for 2025. Givaudan is a member of the European Flavour Association. Major competitors include Firmenich, International Flavors and Fragrances a
https://en.wikipedia.org/wiki/Gibbons%E2%80%93Hawking%E2%80%93York%20boundary%20term
In general relativity, the Gibbons–Hawking–York boundary term is a term that needs to be added to the Einstein–Hilbert action when the underlying spacetime manifold has a boundary. The Einstein–Hilbert action is the basis for the most elementary variational principle from which the field equations of general relativity can be defined. However, the use of the Einstein–Hilbert action is appropriate only when the underlying spacetime manifold is closed, i.e., a manifold which is both compact and without boundary. In the event that the manifold has a boundary \partial\mathcal{M}, the action should be supplemented by a boundary term so that the variational principle is well-defined. The necessity of such a boundary term was first realised by York and later refined in a minor way by Gibbons and Hawking. For a manifold that is not closed, the appropriate action is S = S_{EH} + S_{GHY}, with S_{EH} = \frac{1}{16\pi G}\int_{\mathcal{M}} \mathrm{d}^4x\,\sqrt{-g}\,R the Einstein–Hilbert action and S_{GHY} = \frac{1}{8\pi G}\oint_{\partial\mathcal{M}} \mathrm{d}^3y\,\varepsilon\,\sqrt{|h|}\,K the Gibbons–Hawking–York boundary term, where h_{ab} is the induced metric (see section below on definitions) on the boundary, h its determinant, K is the trace of the second fundamental form, \varepsilon is equal to +1 where the normal to \partial\mathcal{M} is spacelike and -1 where the normal to \partial\mathcal{M} is timelike, and y^a are the coordinates on the boundary. Varying the action with respect to the metric g_{ab}, subject to the condition \delta g_{ab}\big|_{\partial\mathcal{M}} = 0, gives the Einstein equations; the addition of the boundary term means that in performing the variation, the geometry of the boundary encoded in the transverse metric h_{ab} is fixed (see section below). There remains ambiguity in the action up to an arbitrary functional of the induced metric h_{ab}. That a boundary term is needed in the gravitational case is because R, the gravitational Lagrangian density, contains second derivatives of the metric tensor. This is a non-typical feature of field theories, which are usually formulated in terms of Lagrangians that involve first derivatives of fields to be varied over only. The GHY term is desirable, as it possesses a number of other key features. When passing to the Hamilton
https://en.wikipedia.org/wiki/Methanandamide
Methanandamide (AM-356) is a synthetically created stable chiral analog of anandamide. Its effects have been observed to act on the cannabinoid receptors (specifically on CB1 receptors, which are part of the central nervous system) found in different organisms such as mammals, fish, and certain invertebrates (e.g. Hydra).
https://en.wikipedia.org/wiki/Interrupt%20request
In a computer, an interrupt request (or IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Hardware interrupts are used to handle events such as receiving data from a modem or network card, key presses, or mouse movements. Interrupt lines are often identified by an index with the format of IRQ followed by a number. For example, on the Intel 8259 family of programmable interrupt controllers (PICs) there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86 based computer systems that use two of these PICs, the combined set of lines are referred to as IRQ0 through IRQ15. Technically these lines are named IR0 through IR7, and the lines on the ISA bus to which they were historically attached are named IRQ0 through IRQ15 (although historically as the number of hardware devices increased, the total possible number of interrupts was increased by means of cascading requests, by making one of the IRQ numbers cascade to another set or sets of numbered IRQs, handled by one or more subsequent controllers). Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. These APICs support a programming interface for up to 255 physical hardware IRQ lines per APIC, with a typical system implementing support for only around 24 total hardware lines. During the early years of personal computing, IRQ management was often of user concern. With the introduction of plug and play devices this has been alleviated through automatic configuration. Overview When working with personal computer hardware, installing and removing devices, the system relies on interrupt requests. There are default settings that are configured in the system BIOS and recognized by the operating system. These default settings can be altered by advanced users. Modern plug and play technology has not only reduced the need f
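As a small, Linux-specific illustration of numbered interrupt lines in practice, the sketch below tallies per-IRQ counts by parsing /proc/interrupts. The file layout assumed here (a CPU header line followed by one row per interrupt source, with per-CPU counts) is an assumption about typical Linux systems, not part of the IRQ concept itself.

```python
from pathlib import Path

def irq_counts(path="/proc/interrupts"):
    """Return total interrupt counts per IRQ line, summed over CPUs (Linux only)."""
    lines = Path(path).read_text().splitlines()
    counts = {}
    for line in lines[1:]:                     # the first line lists the CPUs
        parts = line.split()
        if not parts or not parts[0].endswith(":"):
            continue
        irq = parts[0].rstrip(":")             # e.g. "0", "19", "NMI"
        per_cpu = [int(p) for p in parts[1:] if p.isdigit()]
        counts[irq] = sum(per_cpu)
    return counts

if __name__ == "__main__":
    busiest = sorted(irq_counts().items(), key=lambda kv: -kv[1])[:10]
    for irq, total in busiest:
        print(f"IRQ {irq}: {total} interrupts")
```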
https://en.wikipedia.org/wiki/Social%20and%20Decision%20Sciences%20%28Carnegie%20Mellon%20University%29
The Department of Social and Decision Sciences (SDS) is an interdisciplinary academic department within the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University. The Department of Social and Decision Sciences is headquartered in Porter Hall in Pittsburgh, Pennsylvania and is led by Department Head Gretchen Chapman. SDS has a world-class reputation for research and education programs in decision-making in public policy, economics, management, and the behavioral social sciences. History The Department of Social Sciences was established in 1976, as part of the Dietrich College of Humanities and Social Sciences under Dean John Patrick Crecine with approval from Heinz College Dean Otto Davis, which previously housed the program. The department was staffed by political scientists, sociologists, and economists from within the Dietrich College, the Heinz College, and the Tepper School of Business. In the 1980s, the department was led by Patrick D. Larkey and developed the undergraduate information systems program which became a huge success, eventually being spun off into an independent interdisciplinary program in the Dietrich College. In 1985, Robyn Dawes joined the department and began to re-focus it into its current form and expertise in behavioral decision-making and caused it to be renamed as the Department of Social and Decision Sciences. Carnegie Mellon's Institute for Politics and Strategy was spun off of the department in 2015. Education The department runs highly regarded undergraduate Bachelor of Science programs in Decision Science and Policy and Management and a Bachelor of Arts program in Behavioral Economics, Policy, and Organizations as well as minors in Decision Science and Policy and Management. Further, SDS is a partner in various interdisciplinary undergraduate programs such as the Sociology minor, Environmental Policy program, and the Quantitative Social Science Scholars program. At the master's degree level, SDS partner
https://en.wikipedia.org/wiki/Standard%20gravity
The standard acceleration of gravity or standard acceleration of free fall, often called simply standard gravity and denoted by ɡ0 or ɡn, is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is a constant defined by standard as 9.80665 m/s² (about 32.174 ft/s²). This value was established by the 3rd General Conference on Weights and Measures (1901, CR 70) and used to define the standard weight of an object as the product of its mass and this nominal acceleration. The acceleration of a body near the surface of the Earth is due to the combined effects of gravity and centrifugal acceleration from the rotation of the Earth (but the latter is small enough to be negligible for most purposes); the total (the apparent gravity) is about 0.5% greater at the poles than at the Equator. Although the symbol ɡ is sometimes used for standard gravity, ɡ (without a suffix) can also mean the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The symbol ɡ should not be confused with G, the gravitational constant, or g, the symbol for gram. The ɡ is also used as a unit for any form of acceleration, with the value defined as above; see g-force. The value of ɡn defined above is a nominal midrange value on Earth, originally based on the acceleration of a body in free fall at sea level at a geodetic latitude of 45°. Although the actual acceleration of free fall on Earth varies according to location, the above standard figure is always used for metrological purposes. In particular, since it is the ratio of the kilogram-force and the kilogram, its numeric value when expressed in coherent SI units is the ratio of the kilogram-force and the newton, two units of force. History Already in the early days of its existence, the International Committee for Weights and Measures (CIPM) proceeded to define a standard thermometric scale, using the boiling point of water. Since the boiling point varies with
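As a worked example of the definition above, the standard weight of a one-kilogram mass follows directly from the defined value; this is also the relation between the kilogram-force and the newton mentioned in the text.

```latex
% Standard weight of a 1 kg mass under standard gravity
W = m\,g_\mathrm{n} = 1\,\mathrm{kg} \times 9.80665\,\mathrm{m/s^2}
  = 9.80665\,\mathrm{N} = 1\ \text{kilogram-force}
```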
https://en.wikipedia.org/wiki/X87
x87 is a floating-point-related subset of the x86 architecture instruction set. It originated as an extension of the 8086 instruction set in the form of optional floating-point coprocessors that work in tandem with corresponding x86 CPUs. These microchips have names ending in "87". This is also known as the NPX (Numeric Processor eXtension). Like other extensions to the basic instruction set, x87 instructions are not strictly needed to construct working programs, but provide hardware and microcode implementations of common numerical tasks, allowing these tasks to be performed much faster than corresponding machine code routines can. The x87 instruction set includes instructions for basic floating-point operations such as addition, subtraction and comparison, but also for more complex numerical operations, such as the computation of the tangent function and its inverse, for example. Most x86 processors since the Intel 80486 have had these x87 instructions implemented in the main CPU, but the term is sometimes still used to refer to that part of the instruction set. Before x87 instructions were standard in PCs, compilers or programmers had to use rather slow library calls to perform floating-point operations, a method that is still common in (low-cost) embedded systems. Description The x87 registers form an eight-level deep non-strict stack structure ranging from ST(0) to ST(7) with registers that can be directly accessed by either operand, using an offset relative to the top, as well as pushed and popped. (This scheme may be compared to how a stack frame may be both pushed/popped and indexed.) There are instructions to push, calculate, and pop values on top of this stack; unary operations (FSQRT, FPTAN etc.) then implicitly address the topmost ST(0), while binary operations (FADD, FMUL, FCOM, etc.) implicitly address ST(0) and ST(1). The non-strict stack model also allows binary operations to use ST(0) together with a direct memory operand or with an explicitly sp
https://en.wikipedia.org/wiki/Catalytic%20triad
A catalytic triad is a set of three coordinated amino acids that can be found in the active site of some enzymes. Catalytic triads are most commonly found in hydrolase and transferase enzymes (e.g. proteases, amidases, esterases, acylases, lipases and β-lactamases). An acid-base-nucleophile triad is a common motif for generating a nucleophilic residue for covalent catalysis. The residues form a charge-relay network to polarise and activate the nucleophile, which attacks the substrate, forming a covalent intermediate which is then hydrolysed to release the product and regenerate free enzyme. The nucleophile is most commonly a serine or cysteine amino acid, but occasionally threonine or even selenocysteine. The 3D structure of the enzyme brings together the triad residues in a precise orientation, even though they may be far apart in the sequence (primary structure). As well as divergent evolution of function (and even the triad's nucleophile), catalytic triads show some of the best examples of convergent evolution. Chemical constraints on catalysis have led to the same catalytic solution independently evolving in at least 23 separate superfamilies. Their mechanism of action is consequently one of the best studied in biochemistry. History The enzymes trypsin and chymotrypsin were first purified in the 1930s. A serine in each of trypsin and chymotrypsin was identified as the catalytic nucleophile (by diisopropyl fluorophosphate modification) in the 1950s. The structure of chymotrypsin was solved by X-ray crystallography in the 1960s, showing the orientation of the catalytic triad in the active site. Other proteases were sequenced and aligned to reveal a family of related proteases, now called the S1 family. Simultaneously, the structures of the evolutionarily unrelated papain and subtilisin proteases were found to contain analogous triads. The 'charge-relay' mechanism for the activation of the nucleophile by the other triad members was proposed in the late 1960s. As
https://en.wikipedia.org/wiki/Leucine%20zipper
A leucine zipper (or leucine scissors) is a common three-dimensional structural motif in proteins. They were first described by Landschulz and collaborators in 1988 when they found that an enhancer binding protein had a very characteristic 30-amino acid segment and the display of these amino acid sequences on an idealized alpha helix revealed a periodic repetition of leucine residues at every seventh position over a distance covering eight helical turns. The polypeptide segments containing these periodic arrays of leucine residues were proposed to exist in an alpha-helical conformation, and the leucine side chains from one alpha helix interdigitate with those from the alpha helix of a second polypeptide, facilitating dimerization. Leucine zippers are a dimerization motif of the bZIP (basic-region leucine zipper) class of eukaryotic transcription factors. The bZIP domain is 60 to 80 amino acids in length with a highly conserved DNA binding basic region and a more diversified leucine zipper dimerization region. The localization of the leucines is critical for the DNA binding to the proteins. Leucine zippers are present in both eukaryotic and prokaryotic regulatory proteins, but are mainly a feature of eukaryotes. They can also be annotated simply as ZIPs, and ZIP-like motifs have been found in proteins other than transcription factors and are thought to be one of the general protein modules for protein–protein interactions. Sequence and structure A leucine zipper is created by the dimerization of two specific alpha helix monomers bound to DNA. The leucine zipper is formed by amphipathic interaction between two ZIP domains. The ZIP domain is found in the alpha-helix of each monomer, and contains leucines, or leucine-like amino acids. These amino acids are spaced out in each region's polypeptide sequence in such a way that when the sequence is coiled in a 3D alpha-helix, the leucine residues line up on the same side of the helix. This region of the alpha-helix- cont
https://en.wikipedia.org/wiki/Padovan%20polynomials
In mathematics, Padovan polynomials are a generalization of Padovan sequence numbers. These polynomials are defined by: The first few Padovan polynomials are: The Padovan numbers are recovered by evaluating the polynomials Pn−3(x) at x = 1. Evaluating Pn−3(x) at x = 2 gives the nth Fibonacci number plus (−1)n. The ordinary generating function for the sequence is See also Polynomial sequences Polynomials
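The Padovan numbers that these polynomials generalize satisfy the recurrence P(n) = P(n − 2) + P(n − 3); a short sketch, assuming the common starting values P(0) = P(1) = P(2) = 1, is given below.

```python
def padovan(count):
    """First `count` Padovan numbers, using P(n) = P(n-2) + P(n-3)."""
    seq = [1, 1, 1]                      # assumed starting values P(0), P(1), P(2)
    while len(seq) < count:
        seq.append(seq[-2] + seq[-3])
    return seq[:count]

print(padovan(12))   # [1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16]
```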
https://en.wikipedia.org/wiki/Neumann%20series
A Neumann series is a mathematical series of the form Σ_{k=0}^∞ T^k, where T is an operator and T^k its k-times repeated application. This generalizes the geometric series. The series is named after the mathematician Carl Neumann, who used it in 1877 in the context of potential theory. The Neumann series is used in functional analysis. It forms the basis of the Liouville–Neumann series, which is used to solve Fredholm integral equations. It is also important when studying the spectrum of bounded operators. Properties Suppose that T is a bounded linear operator on the normed vector space X. If the Neumann series converges in the operator norm, then Id − T is invertible and its inverse is the series: (Id − T)^(−1) = Σ_{k=0}^∞ T^k, where Id is the identity operator in X. To see why, consider the partial sums S_n = Σ_{k=0}^n T^k. Then we have (Id − T)S_n = Id − T^(n+1), which converges to Id as n → ∞. This result on operators is analogous to the geometric series in the real numbers, in which we find that 1/(1 − x) = Σ_{k=0}^∞ x^k for |x| < 1. One case in which convergence is guaranteed is when X is a Banach space and ‖T‖ < 1 in the operator norm, or when Σ ‖T^k‖ is convergent. However, there are also results which give weaker conditions under which the series converges. Example Let C be a given matrix or bounded operator. We need to show that C is smaller than unity in some norm. Therefore, we calculate a suitable norm of C; if it is less than one, we know from the statement above that (Id − C)^(−1) exists. Approximate matrix inversion A truncated Neumann series can be used for approximate matrix inversion. To approximate the inverse of an invertible matrix A, we can consider the linear operator C = Id − A, where Id is the identity matrix. If the norm condition ‖C‖ < 1 is satisfied, then truncating the series at n, we get the approximation A^(−1) ≈ Σ_{k=0}^n (Id − A)^k. The set of invertible operators is open A corollary is that the set of invertible operators between two Banach spaces B and B' is open in the topology induced by the operator norm. Indeed, let S : B → B' be an invertible operator and let T : B → B' be another operator. If ‖S − T‖ < ‖S^(−1)‖^(−1), then T is also invertible. Since ‖Id − S^(−1)T‖ < 1, the Neumann series Σ_{k=0}^∞ (Id − S^(−1)T)^k is convergent. Therefore, we have T^(−1)S = (S^(−1)T)^(−1) = Σ_{k=0}^∞ (Id − S^(−1)T)^k. Taking the norms, we get ‖T^(−1)S‖ ≤ 1/(1 − ‖Id − S^(−1)T‖). The norm of T^(−1) can be bounded by ‖T^(−1)‖ ≤ ‖T^(−1)S‖·‖S^(−1)‖ ≤ ‖S^(−1)‖/(1 − ‖S − T‖·‖S^(−1)‖). Applications The Neumann series has
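A small numerical sketch of the truncated-series inversion described above, assuming a matrix A close enough to the identity that the convergence condition holds; the particular matrix and the 25-term cutoff are arbitrary illustrative choices.

```python
import numpy as np

# Approximate A^(-1) with a truncated Neumann series: sum_k (I - A)^k,
# valid when the spectral radius (or any operator norm) of I - A is below 1.
A = np.array([[1.0, 0.1],
              [0.2, 1.0]])
I = np.eye(2)
C = I - A                      # here ||C|| < 1, so the series converges

approx = np.zeros_like(A)
term = np.eye(2)
for _ in range(25):            # truncate the series after 25 terms
    approx += term
    term = term @ C            # next power of C

print(np.allclose(approx, np.linalg.inv(A), atol=1e-8))   # True
```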