https://en.wikipedia.org/wiki/Assay
An assay is an investigative (analytic) procedure in laboratory medicine, mining, pharmacology, environmental biology and molecular biology for qualitatively assessing or quantitatively measuring the presence, amount, or functional activity of a target entity. The measured entity is often called the analyte, the measurand, or the target of the assay. The analyte can be a drug, biochemical substance, chemical element or compound, or cell in an organism or organic sample. An assay usually aims to measure an analyte's intensive property and express it in the relevant measurement unit (e.g. molarity, density, functional activity in enzyme international units, degree of effect in comparison to a standard, etc.). If the assay involves exogenous reactants (the reagents), then their quantities are kept fixed (or in excess) so that the quantity and quality of the target are the only limiting factors. The difference in the assay outcome is used to deduce the unknown quality or quantity of the target in question. Some assays (e.g., biochemical assays) may be similar to chemical analysis and titration. However, assays typically involve biological material or phenomena that are intrinsically more complex in composition or behavior, or both. Thus, reading of an assay may be noisy and involve greater difficulties in interpretation than an accurate chemical titration. On the other hand, older generation qualitative assays, especially bioassays, may be much more gross and less quantitative (e.g., counting death or dysfunction of an organism or cells in a population, or some descriptive change in some body part of a group of animals). Assays have become a routine part of modern medical, environmental, pharmaceutical, and forensic technology. Other businesses may also employ them at the industrial, curbside, or field levels. Assays in high commercial demand have been well investigated in research and development sectors of professional industries. They have also undergone generatio
https://en.wikipedia.org/wiki/V%20%28operating%20system%29
The V operating system (sometimes written V-System) is a discontinued microkernel distributed operating system that was developed by faculty and students in the Distributed Systems Group at Stanford University from 1981 to 1988, led by Professors David Cheriton and Keith A. Lantz. V was the successor to the Thoth operating system and Verex kernel that Cheriton had developed in the 1970s. Despite similar names and close development dates, it is unrelated to UNIX System V. Features The key concepts in V are multithreading and synchronous message passing. The original V terminology uses process for what is now commonly called a thread, and team for what is now commonly called a process consisting of multiple threads sharing an address space. Communication between threads in V uses synchronous message passing, with short, fixed-length messages that can include access rights for the receiver to read or write part of the sender's address space before replying. The same message-passing interface is used between threads within one process, between threads of different processes within one machine, and between threads on different machines connected by a local Ethernet. A thread receiving a message is not required to reply to it before receiving other messages; this distinguished the model from Ada rendezvous. One common pattern for using the messaging facility is for clients to send messages to a server requesting some form of service. From the client side, this looks much like a remote procedure call (RPC). V lacked the convenience of an automatic stub generator, but in exchange the client can pass one parameter by reference, which is not possible with other RPC implementations. From the server side the model differs more from RPC, since by default all client requests are multiplexed onto one server thread. However, the server is free to explicitly fork threads to handle client requests in parallel; if this is done, the server-side model closely resembles RPC too.
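V's synchronous request/reply pattern can be sketched in a few lines. The following is a toy model in Python, not V's actual API: the queue names and the uppercase "service" are invented for illustration. A single server thread multiplexes all requests (V's default model), and the client blocks until it receives a reply.

```python
import queue
import threading

# One server thread multiplexes all client requests, as in V's default model.
server_inbox = queue.Queue()

def server():
    while True:
        msg, reply_box = server_inbox.get()   # Receive a request
        if msg is None:                       # shutdown sentinel
            break
        reply_box.put(msg.upper())            # Reply to the sender

def send(msg):
    """Synchronous send: block until the server replies, like V's Send."""
    reply_box = queue.Queue(maxsize=1)
    server_inbox.put((msg, reply_box))
    return reply_box.get()                    # caller blocks here

t = threading.Thread(target=server)
t.start()
result = send("ping")
print(result)                                 # -> PING
server_inbox.put((None, None))                # stop the server
t.join()
```

The blocking `reply_box.get()` is what makes the exchange synchronous: from the client's point of view it reads like a procedure call, exactly the RPC resemblance the text describes.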
https://en.wikipedia.org/wiki/Sound%20intensity
Sound intensity, also known as acoustic intensity, is defined as the power carried by sound waves per unit area in a direction perpendicular to that area. The SI unit of intensity, which includes sound intensity, is the watt per square meter (W/m2). One application is the noise measurement of sound intensity in the air at a listener's location as a sound energy quantity. Sound intensity is not the same physical quantity as sound pressure. Human hearing is sensitive to sound pressure, which is related to sound intensity. In consumer audio electronics, the level differences are called "intensity" differences, but sound intensity is a specifically defined quantity and cannot be sensed by a simple microphone. Sound intensity level is a logarithmic expression of sound intensity relative to a reference intensity. Mathematical definition Sound intensity, denoted I, is defined by I = p v, where p is the sound pressure and v is the particle velocity. Both I and v are vectors, which means that both have a direction as well as a magnitude. The direction of sound intensity is the average direction in which energy is flowing. The average sound intensity during time T is given by ⟨I⟩ = (1/T) ∫₀ᵀ p v dt. For a plane wave, I = 2π² ν² δ² ρ c, where ν is the frequency of the sound, δ is the amplitude of the sound wave particle displacement, ρ is the density of the medium in which the sound is traveling, and c is the speed of sound. Inverse-square law For a spherical sound wave, the intensity in the radial direction as a function of distance r from the centre of the sphere is given by I(r) = P / A(r), where P is the sound power and A(r) = 4πr² is the surface area of a sphere of radius r. Thus sound intensity decreases as 1/r² from the centre of the sphere: I(r) ∝ 1/r². This relationship is an inverse-square law. Sound intensity level Sound intensity level (SIL) or acoustic intensity level is the level (a logarithmic quantity) of the intensity of a sound relative to a reference value. It is denoted LI, expressed in nepers, bels, or decibels, and defined by LI = ½ ln(I/I₀) Np = log₁₀(I/I₀) B = 10 log₁₀(I/I₀) dB, where I is the sound
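The inverse-square behaviour and the decibel form of sound intensity level are easy to check numerically. In this sketch the 0.1 W source power is an arbitrary assumed value, not taken from the article:

```python
import math

P = 0.1                      # assumed sound power of a point source, in watts
I0 = 1e-12                   # reference sound intensity, W/m^2

def intensity(P, r):
    """Spherical spreading: I(r) = P / (4 pi r^2)."""
    return P / (4 * math.pi * r * r)

def sil_db(I, I0=1e-12):
    """Sound intensity level: L_I = 10 log10(I / I0) dB."""
    return 10 * math.log10(I / I0)

I1, I2 = intensity(P, 1.0), intensity(P, 2.0)
assert abs(I1 / I2 - 4.0) < 1e-9     # doubling the distance quarters I
print(round(sil_db(I1), 1))          # -> 99.0 (level at 1 m for this source)
```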
https://en.wikipedia.org/wiki/Pei-Yuan%20Wei
Pei-Yuan Wei (d. 2023) was a Taiwanese-American businessman who created ViolaWWW, the first popular graphical web browser. Career Pei-Yuan Wei was born in Pingtung County, Taiwan. He graduated from Berkeley High School in 1986. He received his bachelor's degree from the University of California, Berkeley, and was a member of the student club, the eXperimental Computing Facility (XCF). In the 1990s, Wei was a founding employee of Global Network Navigator, one of the first Internet-based businesses. Later he worked for various Palm OS-related businesses. From 2008, Wei lived in both Taiwan and the US, and devoted most of his time to taking care of an ill family member. Controversy Pei-Yuan Wei was at the center of a controversy over patents relating to embedded objects in a web browser, which revolved around whether his browser, ViolaWWW, had the capability to launch embedded objects prior to the date a patent was filed by Michael David Doyle of Eolas and the University of California. If it did, it would constitute prior art, which might invalidate the patent issued to Eolas. If it did not, then in addition to major financial penalties against companies such as Microsoft, the way the World Wide Web and the browsers that surf it work might have been forced to change. Eolas' claim was eventually found invalid by a Texas court. References External links Pei's Home Page (archive.org 2010-10-18) Viola Home Page (archive.org 2022-03-31)
https://en.wikipedia.org/wiki/Sound%20power
Sound power or acoustic power is the rate at which sound energy is emitted, reflected, transmitted or received. It is defined as "through a surface, the product of the sound pressure, and the component of the particle velocity, at a point on the surface in the direction normal to the surface, integrated over that surface." The SI unit of sound power is the watt (W). It relates to the power of the sound force on a surface enclosing a sound source, in air. For a sound source, unlike sound pressure, sound power is neither room-dependent nor distance-dependent. Sound pressure is a property of the field at a point in space, while sound power is a property of a sound source, equal to the total power emitted by that source in all directions. Sound power passing through an area is sometimes called sound flux or acoustic flux through that area. Sound power level LWA Regulations often specify a method for measurement that integrates sound pressure over a surface enclosing the source. LWA specifies the power delivered to that surface in decibels relative to one picowatt. Devices (e.g., a vacuum cleaner) often have labeling requirements and maximum amounts they are allowed to produce. The A-weighting scale is used in the calculation as the metric is concerned with the loudness as perceived by the human ear. Measurements in accordance with ISO 3744 are taken at 6 to 12 defined points around the device in a hemi-anechoic space. The test environment can be located indoors or outdoors. The required environment is on hard ground in a large open space or hemi-anechoic chamber (free-field over a reflecting plane). Table of selected sound sources Here is a table of some examples, from an on-line source. For omnidirectional sources in free space, sound power in LwA is equal to sound pressure level in dB above 20 micropascals at a distance of 0.2821 m. Mathematical definition Sound power, denoted P, is defined by P = f · v, where f is the sound force of unit vector u and v is the
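The 0.2821 m figure can be verified directly: it is the radius at which a full sphere has a surface area of 1 m², so the area correction 10·log10(A / 1 m²) between sound power level and sound pressure level vanishes for an omnidirectional source in free space:

```python
import math

r = 0.2821                       # metres
area = 4 * math.pi * r ** 2      # surface area of the measurement sphere
print(round(area, 3))            # -> 1.0 (square metres)

# Lw - Lp = 10 log10(A / 1 m^2), so the correction vanishes at this radius.
correction_db = 10 * math.log10(area)
assert abs(correction_db) < 0.01
```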
https://en.wikipedia.org/wiki/CSNET
The Computer Science Network (CSNET) was a computer network that began operation in 1981 in the United States. Its purpose was to extend networking benefits for computer science departments at academic and research institutions that could not be directly connected to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to development of the global Internet. CSNET was funded by the National Science Foundation for an initial three-year period from 1981 to 1984. History Lawrence Landweber at the University of Wisconsin–Madison prepared the original CSNET proposal, on behalf of a consortium of universities (Georgia Tech, University of Minnesota, University of New Mexico, University of Oklahoma, Purdue University, University of California, Berkeley, University of Utah, University of Virginia, University of Washington, University of Wisconsin, and Yale University). The US National Science Foundation (NSF) requested a review from David J. Farber at the University of Delaware. Farber assigned the task to his graduate student Dave Crocker, who was already active in the development of electronic mail. The project was deemed interesting but in need of significant refinement. The proposal eventually gained the support of Vinton Cerf and DARPA. In 1980, the NSF awarded $5 million to launch the network. It was an unusually large project for the NSF at the time. A stipulation for the award of the contract was that the network needed to become self-sufficient by 1986. The first management team consisted of Landweber (University of Wisconsin), Farber (University of Delaware), Peter J. Denning (Purdue University), Anthony C. Hearn (RAND Corporation), and Bill Kern from the NSF. Once CSNET was fully operational, the systems and ongoing network operations were transferred to a team led by Richard Edmiston at Bolt Beranek and Newman (BBN) of Cambridge,
https://en.wikipedia.org/wiki/Cohen%E2%80%93Macaulay%20ring
In mathematics, a Cohen–Macaulay ring is a commutative ring with some of the algebro-geometric properties of a smooth variety, such as local equidimensionality. Under mild assumptions, a local ring is Cohen–Macaulay exactly when it is a finitely generated free module over a regular local subring. Cohen–Macaulay rings play a central role in commutative algebra: they form a very broad class, and yet they are well understood in many ways. They are named for Francis Sowerby Macaulay, who proved the unmixedness theorem for polynomial rings, and for Irving Cohen, who proved the unmixedness theorem for formal power series rings. All Cohen–Macaulay rings have the unmixedness property. For Noetherian local rings, there is the following chain of inclusions: regular local rings ⊂ complete intersection rings ⊂ Gorenstein rings ⊂ Cohen–Macaulay rings ⊂ universally catenary rings. Definition For a commutative Noetherian local ring R, a finite (i.e. finitely generated) R-module M is a Cohen–Macaulay module if depth(M) = dim(M) (in general we have depth(M) ≤ dim(M); see the Auslander–Buchsbaum formula for the relation between depth and dim of a certain kind of modules). On the other hand, R is a module over itself, so we call R a Cohen–Macaulay ring if it is a Cohen–Macaulay module as an R-module. A maximal Cohen–Macaulay module is a Cohen–Macaulay module M such that dim(M) = dim(R). The above definition was for Noetherian local rings. But we can expand the definition to a more general Noetherian ring: if R is a commutative Noetherian ring, then an R-module M is called a Cohen–Macaulay module if M_m is a Cohen–Macaulay module for all maximal ideals m in the support of M. (This is a kind of circular definition unless we define zero modules as Cohen–Macaulay. So we define zero modules as Cohen–Macaulay modules in this definition.) Now, to define maximal Cohen–Macaulay modules for these rings, we require that M_m be such an R_m-module for each maximal ideal m of R. As in the local case, R is a Cohen–Macaulay ring if it is a Cohen–Macaulay module (as an R-module over itself). Examples Noetherian rings of the following types are Cohen–Macaulay. Any regular local ring. This leads to various examples of Cohen–Macaulay rings,
https://en.wikipedia.org/wiki/Riemann%E2%80%93Hurwitz%20formula
In mathematics, the Riemann–Hurwitz formula, named after Bernhard Riemann and Adolf Hurwitz, describes the relationship of the Euler characteristics of two surfaces when one is a ramified covering of the other. It therefore connects ramification with algebraic topology, in this case. It is a prototype result for many others, and is often applied in the theory of Riemann surfaces (which is its origin) and algebraic curves. Statement For a compact, connected, orientable surface S, the Euler characteristic is χ(S) = 2 − 2g, where g is the genus (the number of handles), since the Betti numbers are 1, 2g, 1. In the case of an (unramified) covering map of surfaces π: S′ → S that is surjective and of degree N, we have the formula χ(S′) = N·χ(S). That is because each simplex of S should be covered by exactly N simplices in S′, at least if we use a fine enough triangulation of S, as we are entitled to do since the Euler characteristic is a topological invariant. What the Riemann–Hurwitz formula does is to add in a correction to allow for ramification (sheets coming together). Now assume that S and S′ are Riemann surfaces, and that the map π is complex analytic. The map π is said to be ramified at a point P in S′ if there exist analytic coordinates near P and π(P) such that π takes the form π(z) = z^n, and n > 1. An equivalent way of thinking about this is that there exists a small neighborhood U of P such that π(P) has exactly one preimage in U, but the image of any other point in U has exactly n preimages in U. The number n is called the ramification index at P and is also denoted by e_P. In calculating the Euler characteristic of S′ we notice the loss of e_P − 1 copies of P above π(P) (that is, in the inverse image of π(P)). Now let us choose triangulations of S and S′ with vertices at the branch and ramification points, respectively, and use these to compute the Euler characteristics. Then S′ will have the same number of d-dimensional faces for d different from zero, but fewer than expected vertices. Therefore, we find a "corrected"
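The corrected formula χ(S′) = N·χ(S) − Σ(e_P − 1) can be sanity-checked numerically, here for the degree-2 map π(z) = z² from the Riemann sphere to itself (ramified at 0 and ∞, each with index 2) and for an unramified double cover of the torus:

```python
def euler_char(genus):
    """Euler characteristic of a compact orientable surface: 2 - 2g."""
    return 2 - 2 * genus

def riemann_hurwitz_rhs(N, base_genus, ramification_indices):
    """Right-hand side of chi(S') = N*chi(S) - sum(e_P - 1)."""
    return N * euler_char(base_genus) - sum(e - 1 for e in ramification_indices)

# pi(z) = z^2: Riemann sphere (genus 0) -> itself, degree 2,
# ramified at z = 0 and z = infinity with index e = 2 at each.
assert riemann_hurwitz_rhs(2, 0, [2, 2]) == euler_char(0)   # 2*2 - 2 = 2

# An unramified double cover of the torus (genus 1) is again a torus.
assert riemann_hurwitz_rhs(2, 1, []) == euler_char(1)       # 2*0 - 0 = 0
```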
https://en.wikipedia.org/wiki/NESSIE
NESSIE (New European Schemes for Signatures, Integrity and Encryption) was a European research project funded from 2000 to 2003 to identify secure cryptographic primitives. The project was comparable to the NIST AES process and the Japanese Government-sponsored CRYPTREC project, but with notable differences from both. In particular, there is both overlap and disagreement between the selections and recommendations from NESSIE and CRYPTREC (as of the August 2003 draft report). The NESSIE participants include some of the foremost active cryptographers in the world, as does the CRYPTREC project. NESSIE was intended to identify and evaluate quality cryptographic designs in several categories, and to that end issued a public call for submissions in March 2000. Forty-two were received, and in February 2003 twelve of the submissions were selected. In addition, five algorithms already publicly known, but not explicitly submitted to the project, were chosen as "selectees". The project has publicly announced that "no weaknesses were found in the selected designs". Selected algorithms The selected algorithms and their submitters or developers are listed below. The five already publicly known, but not formally submitted to the project, are marked with a "*". Most may be used by anyone for any purpose without needing to seek a patent license from anyone; a license agreement is needed for those marked with a "#", but the licensors of those have committed to "reasonable non-discriminatory license terms for all interested", according to a NESSIE project press release. None of the six stream ciphers submitted to NESSIE were selected because every one fell to cryptanalysis. This surprising result led to the eSTREAM project. Block ciphers MISTY1: Mitsubishi Electric AES*: (Advanced Encryption Standard) (NIST, FIPS Pub 197) (aka Rijndael) Camellia: Nippon Telegraph and Telephone and Mitsubishi Electric SHACAL-2: Gemplus Collision-Resistant Hash Functions WHIRLPOOL: Scopus Tecnolog
https://en.wikipedia.org/wiki/IBM%208100
The IBM 8100 Information System, announced Oct. 3, 1978, was at one time IBM’s principal distributed processing engine, providing local processing capability under two incompatible operating systems (DPPX and DPCX), and was a follow-on to the IBM 3790. The 8100, when used with the Distributed Processing Programming Executive (DPPX), was intended to provide turnkey distributed processing capabilities in a centrally controlled and managed network. It never saw much success—one anonymous source, according to PC Magazine, called it a "boat anchor"—and became moribund when host-based networks went out of fashion. This, coupled with IBM's recognition that they had too many hardware and software systems with similar processing power and function, led to the announcement in March 1986 that the 8100 line would not be expanded and that a new System/370-compatible processor line, the ES/9370, would be provided to replace it. In March 1987, IBM announced that it intended to provide in 1989 a version of DPPX/SP that would run on the new ES/9370. A formal announcement followed in March 1988 of DPPX/370, a version of DPPX that executed on the ES/9370 family of processors. DPPX/370 was made available to customers in December 1988. DPCX (Distributed Processing Control eXecutive) mainly supported a word processing system, the Distributed Office Support Facility (DOSF). Architecture The 8100 was a 32-bit processor, but its instruction set reveals its lineage as the culmination of a line of so-called Universal Controller processors internally designated UC0 (8-bit), UC.5 (16-bit) and UC1 (32-bit). Each processor carried along the instruction set and architecture of the smaller processors, allowing programs written for a smaller processor to run on a larger one without change. The 8100 had another interesting distinction in being one of the first commercially available systems to have a network with characteristics of what we now call local area networks, in particular the mechanism of packet
https://en.wikipedia.org/wiki/Hormesis
Hormesis is a characteristic of many biological processes, namely a biphasic or triphasic response to exposure to increasing amounts of a substance or condition. Within the hormetic zone, the biological response to low exposures to toxins and other stressors is generally favorable. The term "hormesis" comes from Greek hórmēsis "rapid motion, eagerness", itself from ancient Greek hormáein "to set in motion, impel, urge on", the same Greek root as the word hormone. The term 'hormetics' has been proposed for the study and science of hormesis. In toxicology, hormesis is a dose-response phenomenon to xenobiotics or other stressors characterized by stimulation at low doses and inhibition at high doses, resulting in a J-shaped or an inverted-U-shaped dose response (e.g. the arms of the "U" are inhibitory or toxic concentrations, whereas the curved region in between stimulates a beneficial response). Generally speaking, hormesis pertains to the study of benefits of exposure to toxins such as radiation or mercury (perhaps analogous to health paradoxes such as the smoker's paradox, although differing by virtue of dose-dependent effects). In physiology and nutrition, hormesis can be visualized as a hormetic curve with regions of deficiency, homeostasis, and toxicity. Physiological concentrations deviating above or below homeostatic concentrations adversely affect an organism; thus, in this context, the hormetic zone is synonymously known as the region of homeostasis. In pharmacology the hormetic zone is similar to the therapeutic window. Some psychological or environmental factors that would seem to produce positive responses have also been termed "eustress". In the context of toxicology, the hormesis model of dose response is vigorously debated. The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood. The notion that hormesis is an important pol
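The inverted-U shape is easy to illustrate with a toy dose-response function; the functional form and parameters below are invented for illustration and are not fitted to any real stressor:

```python
import math

def hormetic_response(dose, baseline=1.0, stim=0.5, tox=0.1):
    """Toy inverted-U dose-response (illustrative parameters only):
    a stimulatory term that decays with dose, minus a toxicity term
    that grows linearly with dose."""
    return baseline + stim * dose * math.exp(-dose) - tox * dose

low = hormetic_response(1.0)      # inside the hormetic zone
none = hormetic_response(0.0)     # baseline, no exposure
high = hormetic_response(10.0)    # toxic range
assert low > none > high          # low dose beneficial, high dose harmful
```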
https://en.wikipedia.org/wiki/Centered%20hexagonal%20number
In mathematics and combinatorics, a centered hexagonal number, or hex number, is a centered figurate number that represents a hexagon with a dot in the center and all other dots surrounding the center dot in a hexagonal lattice. The first four centered hexagonal numbers are 1, 7, 19, and 37, built by starting from a single dot and adding successive rings of 6, 12, and 18 dots. Centered hexagonal numbers should not be confused with cornered hexagonal numbers, which are figurate numbers in which the associated hexagons share a vertex. The sequence of centered hexagonal numbers starts out as follows: 1, 7, 19, 37, 61, 91, 127, 169, 217, 271, 331, 397, 469, 547, 631, 721, 817, 919. Formula The nth centered hexagonal number is given by the formula CH(n) = n³ − (n − 1)³ = 3n(n − 1) + 1. Expressing the formula as CH(n) = 1 + 6·(n(n − 1)/2) shows that the centered hexagonal number for n is 1 more than 6 times the (n − 1)th triangular number. In the opposite direction, the index n corresponding to the centered hexagonal number x = CH(n) can be calculated using the formula n = (3 + √(12x − 3))/6. This can be used as a test for whether a number x is centered hexagonal: it will be if and only if the above expression is an integer. Recurrence and generating function The centered hexagonal numbers satisfy the recurrence relation CH(n) = CH(n − 1) + 6(n − 1). From this we can calculate the generating function G(x) = Σₙ≥₁ CH(n)·xⁿ = x(x² + 4x + 1)/(1 − x)³. Properties In base 10 one can notice that the centered hexagonal numbers' rightmost (least significant) digits follow the pattern 1–7–9–7–1 (repeating with period 5). This follows from the last digits of the triangular numbers, which repeat 0–1–3–1–0 when taken modulo 5. In base 6 the rightmost digit is always 1: 1₆, 11₆, 31₆, 101₆, 141₆, 231₆, 331₆, 441₆... This follows from
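The closed-form formula, the triangular-number identity, and the integrality test for the inverse can be sketched as follows:

```python
import math

def centered_hexagonal(n):
    """nth centered hexagonal number: 3n(n-1) + 1."""
    return 3 * n * (n - 1) + 1

def is_centered_hexagonal(x):
    """Test via the inverse formula n = (3 + sqrt(12x - 3)) / 6:
    x is centered hexagonal iff that expression is an integer."""
    root = math.isqrt(12 * x - 3)
    return root * root == 12 * x - 3 and (3 + root) % 6 == 0

seq = [centered_hexagonal(n) for n in range(1, 9)]
print(seq)                     # -> [1, 7, 19, 37, 61, 91, 127, 169]
assert all(is_centered_hexagonal(v) for v in seq)
assert not is_centered_hexagonal(20)
# 1 + 6 * (n-1)th triangular number gives the same values
assert all(1 + 6 * (n * (n - 1) // 2) == centered_hexagonal(n)
           for n in range(1, 50))
```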
https://en.wikipedia.org/wiki/MD5CRK
In cryptography, MD5CRK was a volunteer computing effort (similar to distributed.net) launched by Jean-Luc Cooke and his company, CertainKey Cryptosystems, to demonstrate that the MD5 message digest algorithm is insecure by finding a collision: two messages that produce the same MD5 hash. The project went live on March 1, 2004. The project ended on August 24, 2004, after Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and Hongbo Yu independently demonstrated a technique for generating collisions in MD5 using analytical methods. CertainKey awarded a 10,000 Canadian dollar prize to Wang, Feng, Lai and Yu for their discovery. A technique called Floyd's cycle-finding algorithm was used to try to find a collision for MD5. The algorithm can be described by analogy with a random walk. Using the principle that any function with a finite number of possible outputs placed in a feedback loop will cycle, one can use a relatively small amount of memory to store outputs with particular structures and use them as "markers" to better detect when a marker has been "passed" before. These markers are called distinguished points; the point where two inputs produce the same output is called a collision point. MD5CRK considered any point whose first 32 bits were zeroes to be a distinguished point. Complexity The expected number of computations needed to find a collision is not 2^b, where b is the number of bits in the digest output; by the birthday paradox it is on the order of 2^(b/2). For this project, the probability of success after k MD5 computations can be approximated by 1 − e^(−k(k−1)/2^129). The expected number of computations required to produce a collision in the 128-bit MD5 message digest function is thus about √(π/2)·2^64 ≈ 2.3 × 10^19. To give some perspective to this, using Virginia Tech's System X with a maximum performance of 12.25 teraflops, it would take approximately 1.9 × 10^6 seconds, or about 3 weeks. Or for commodity processors at 2 gigaflops it would take 6,000 machines approximately the same amount of time. See also List of volunt
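The feedback-loop cycle idea can be demonstrated at toy scale. The sketch below truncates MD5 to 24 bits (the real project attacked the full 128-bit digest, roughly 2^64 work) and uses plain Floyd's cycle finding rather than MD5CRK's distinguished-points bookkeeping:

```python
import hashlib

def f(x: bytes) -> bytes:
    """Feedback function: MD5 truncated to 24 bits, output fed back as input."""
    return hashlib.md5(x).digest()[:3]

def floyd_collision(seed: bytes):
    """Floyd's cycle finding: constant memory, tortoise 1 step, hare 2 steps."""
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:                  # phase 1: meet inside the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    tortoise = seed
    while f(tortoise) != f(hare):            # phase 2: step to the collision
        tortoise, hare = f(tortoise), f(hare)
    return tortoise, hare                    # two inputs with equal output

# A 6-byte seed can never reappear among the 3-byte iterates, so the tail
# is non-empty and the two colliding inputs are guaranteed distinct.
a, b = floyd_collision(b"MD5CRK")
assert a != b and f(a) == f(b)
print(a.hex(), b.hex(), "->", f(a).hex())
```

On a 24-bit output the walk meets after roughly √(π·2^24/2) ≈ 5,000 steps, so this runs in milliseconds; scaling b back up to 128 bits recovers the 2^64-scale estimate in the article.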
https://en.wikipedia.org/wiki/2-satisfiability
In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time. Instances of the 2-satisfiability problem are typically expressed as Boolean formulas of a special type, called 2-conjunctive normal form (2-CNF) formulas, or Krom formulas. Alternatively, they may be expressed as a special type of directed graph, the implication graph, which expresses the variables of an instance and their negations as vertices in a graph, and constraints on pairs of variables as directed edges. Instances in either form may be solved in linear time, either by a method based on backtracking or by using the strongly connected components of the implication graph. Resolution, a method for combining pairs of constraints to make additional valid constraints, also leads to a polynomial time solution. The 2-satisfiability problems provide one of two major subclasses of the conjunctive normal form formulas that can be solved in polynomial time; the other of the two subclasses is Horn-satisfiability. 2-satisfiability may be applied to geometry and visualization problems in which a collection of objects each have two potential locations and the goal is to find a placement for each object that avoids overlaps with other objects. Other applications include clustering data to minimize the sum of the diameters of the clusters, classroom and sports scheduling, and recovering shapes from information about their cross-sections. In computational complexity theory, 2-satisfiability provides an examp
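The implication-graph approach can be sketched compactly: each clause (a ∨ b) contributes edges ¬a → b and ¬b → a, and the instance is satisfiable exactly when no variable shares a strongly connected component with its negation. A minimal solver using Kosaraju's SCC algorithm (the literal encoding here is our own convention):

```python
def solve_2sat(n, clauses):
    """Variables are 1..n; a literal is +v or -v; clause (a, b) means (a OR b)."""
    def node(l):                                  # graph vertex for a literal
        return 2 * (abs(l) - 1) + (1 if l > 0 else 0)
    N = 2 * n
    adj = [[] for _ in range(N)]                  # implication graph
    radj = [[] for _ in range(N)]                 # reversed graph
    for a, b in clauses:                          # (a or b): add ¬a→b and ¬b→a
        adj[node(-a)].append(node(b)); radj[node(b)].append(node(-a))
        adj[node(-b)].append(node(a)); radj[node(a)].append(node(-b))
    seen, order = [False] * N, []
    for s in range(N):                            # pass 1: record finish order
        if seen[s]:
            continue
        stack, seen[s] = [(s, iter(adj[s]))], True
        while stack:
            x, it = stack[-1]
            nxt = next((y for y in it if not seen[y]), None)
            if nxt is None:
                order.append(x); stack.pop()
            else:
                seen[nxt] = True
                stack.append((nxt, iter(adj[nxt])))
    comp, c = [-1] * N, 0
    for s in reversed(order):                     # pass 2: label SCCs on radj
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            x = stack.pop()
            for y in radj[x]:
                if comp[y] == -1:
                    comp[y] = c; stack.append(y)
        c += 1
    result = {}
    for v in range(1, n + 1):
        if comp[node(v)] == comp[node(-v)]:
            return None                           # v and ¬v together: UNSAT
        # components are numbered in topological order; make true the
        # literal whose component comes later (closer to the sinks)
        result[v] = comp[node(v)] > comp[node(-v)]
    return result

sol = solve_2sat(3, [(1, 2), (-1, 2), (-2, 3)])   # (x1∨x2)(¬x1∨x2)(¬x2∨x3)
assert sol is not None and sol[2] and sol[3]
assert solve_2sat(1, [(1, 1), (-1, -1)]) is None  # x1 ∧ ¬x1 is unsatisfiable
```

In the satisfiable example the first two clauses force x2 true (either value of x1 implies it), and the third clause then forces x3 true, which is exactly what the solver reports.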
https://en.wikipedia.org/wiki/Triad%20%28monitors%29
In cathode-ray tube (CRT) terms, a triad is a group of 3 phosphor dots coloured red, green, and blue on the inside of the CRT display of a computer monitor or television set. By directing differing intensities of cathode rays onto the 3 phosphor dots, the triad will display a colour by combining the red, green and blue elements. However, triads are not pixels, and multiple triads will form one logical pixel of the displayed image. In liquid-crystal displays (LCDs), colours are similarly composed of these 3 fundamental colours. See also Pixel Subpixel rendering Shadow mask Aperture grille Display technology 3 (number)
https://en.wikipedia.org/wiki/Vacuum%20fluorescent%20display
A vacuum fluorescent display (VFD) is a display device once commonly used on consumer electronics equipment such as video cassette recorders, car radios, and microwave ovens. A VFD operates on the principle of cathodoluminescence, roughly similar to a cathode ray tube, but operating at much lower voltages. Each tube in a VFD has a phosphor-coated carbon anode that is bombarded by electrons emitted from the cathode filament. In fact, each tube in a VFD is a triode vacuum tube because it also has a mesh control grid. Unlike liquid crystal displays, a VFD emits very bright light with high contrast and can support display elements of various colors. Standard illumination figures for VFDs are around 640 cd/m2 with high-brightness VFDs operating at 4,000 cd/m2, and experimental units as high as 35,000 cd/m2 depending on the drive voltage and its timing. The choice of color (which determines the nature of the phosphor) and display brightness significantly affect the lifetime of the tubes, which can range from as low as 1,500 hours for a vivid red VFD to 30,000 hours for the more common green ones. Cadmium was commonly used in the phosphors of VFDs in the past, but the current RoHS-compliant VFDs have eliminated this metal from their construction, using instead phosphors consisting of a matrix of alkaline earth and very small amounts of group III metals, doped with very small amounts of rare earth metals. VFDs can display seven-segment numerals, multi-segment alpha-numeric characters or can be made in a dot-matrix to display different alphanumeric characters and symbols. In practice, there is little limit to the shape of the image that can be displayed: it depends solely on the shape of phosphor on the anode(s). The first VFD was the single indication DM160 by Philips in 1959. The first multi-segment VFD was a 1967 Japanese single-digit, seven-segment device. The displays became common on calculators and other consumer electronics devices. In the late 1980s hundreds of
https://en.wikipedia.org/wiki/Atlas%20Supervisor
The Atlas Supervisor was the program which managed the allocation of processing resources of Manchester University's Atlas Computer so that the machine was able to act on many tasks and user programs concurrently. Its various functions included running the Atlas computer's virtual memory (Atlas Supervisor paper, section 3, Store Organisation), and it is "considered by many to be the first recognisable modern operating system". Brinch Hansen described it as "the most significant breakthrough in the history of operating systems." References Notes Bibliography External links The Atlas Supervisor paper (T Kilburn, R B Payne, D J Howarth, 1962) 1962 software Discontinued operating systems
https://en.wikipedia.org/wiki/Program%20synthesis
In computer science, program synthesis is the task of constructing a program that provably satisfies a given high-level formal specification. In contrast to program verification, the program is to be constructed rather than given; however, both fields make use of formal proof techniques, and both comprise approaches with different degrees of automation. In contrast to automatic programming techniques, specifications in program synthesis are usually non-algorithmic statements in an appropriate logical calculus. Origin During the Summer Institute of Symbolic Logic at Cornell University in 1957, Alonzo Church defined the problem of synthesizing a circuit from mathematical requirements. Even though the work only refers to circuits and not programs, it is considered to be one of the earliest descriptions of program synthesis, and some researchers refer to program synthesis as "Church's Problem". In the 1960s, a similar idea for an "automatic programmer" was explored by researchers in artificial intelligence. Since then, various research communities have considered the problem of program synthesis. Notable works include the 1969 automata-theoretic approach by Büchi and Landweber, and the works by Manna and Waldinger (c. 1980). The development of modern high-level programming languages can also be understood as a form of program synthesis. 21st century developments The early 21st century has seen a surge of practical interest in the idea of program synthesis in the formal verification community and related fields. Armando Solar-Lezama showed that it is possible to encode program synthesis problems in Boolean logic and use algorithms for the Boolean satisfiability problem to automatically find programs. In 2013, a unified framework for program synthesis problems was proposed by researchers at UPenn, UC Berkeley, and MIT. Since 2014 there has been a yearly program synthesis competition comparing the different algorithms for program synthesis in a competitive event, the Sy
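SAT/SMT-based synthesis is beyond a short sketch, but the underlying search idea can be illustrated by enumerative synthesis over a tiny invented expression grammar, checking each candidate program against a specification given as input-output behaviour:

```python
import itertools

# Enumerative synthesis over a toy grammar op(lhs, rhs), lhs/rhs in {x, y}.
# (The grammar and spec are illustrative, not any particular system's.)
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "max": max, "min": min}

def synthesize(spec, inputs):
    """Return the first grammar expression agreeing with spec on all inputs."""
    terms = ["x", "y"]
    for lhs, op, rhs in itertools.product(terms, OPS, terms):
        prog = lambda x, y, l=lhs, o=op, r=rhs: OPS[o](
            x if l == "x" else y, x if r == "x" else y)
        if all(prog(x, y) == spec(x, y) for x, y in inputs):
            return f"{op}({lhs}, {rhs})", prog
    return None

expr, prog = synthesize(lambda x, y: max(x, y), [(1, 2), (5, 3), (4, 4)])
print(expr)                       # -> max(x, y)
```

Real synthesizers replace this brute-force loop with a SAT/SMT encoding of the candidate space, which is the step Solar-Lezama's work automated.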
https://en.wikipedia.org/wiki/Identification%20key
In biology, an identification key, taxonomic key, or biological key is a printed or computer-aided device that aids the identification of biological entities, such as plants, animals, fossils, microorganisms, and pollen grains. Identification keys are also used in many other scientific and technical fields to identify various kinds of entities, such as diseases, soil types, minerals, or archaeological and anthropological artifacts. Traditionally identification keys have most commonly taken the form of single-access keys. These work by offering a fixed sequence of identification steps, each with multiple alternatives, the choice of which determines the next step. If each step has only two alternatives, the key is said to be dichotomous, else it is polytomous. Modern multi-access or interactive keys allow the user to freely choose the identification steps and their order. At each step, the user must answer a question about one or more features (characters) of the entity to be identified. For example, a step in a botanical key may ask about the color of flowers, or the disposition of the leaves along the stems. A key for insect identification may ask about the number of bristles on the rear leg. Principles of good key design Identification errors may have serious consequences in both pure and applied disciplines, including ecology, medical diagnosis, pest control, forensics, etc. Therefore, identification keys must be constructed with great care in order to minimize the incidence of such errors. Whenever possible, the character used at each identification step should be diagnostic; that is, each alternative should be common to all members of a group of entities, and unique to that group. It should also be differential, meaning that the alternatives should separate the corresponding subgroups from each other. However, characters which are neither differential nor diagnostic may be included to increase comprehension (especially characters that are common to t
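A single-access dichotomous key, as described above, is naturally modeled as a binary decision tree: each step asks one question, and the answer selects the next step. The taxa and questions below are invented purely for illustration.

```python
# Each internal node is (question, yes_branch, no_branch); leaves are
# final identifications. A polytomous key would simply allow more than
# two branches per node.
KEY = (
    "Does the plant have opposite leaves?",
    ("Are the flowers white?", "Taxon A", "Taxon B"),
    ("Are there bristles on the stem?", "Taxon C", "Taxon D"),
)

def identify(key, answers):
    """Walk the key, consuming one boolean answer per step, and
    return the identification reached at a leaf."""
    node = key
    it = iter(answers)
    while isinstance(node, tuple):
        question, yes_branch, no_branch = node
        node = yes_branch if next(it) else no_branch
    return node

print(identify(KEY, [True, False]))  # Taxon B
```

A multi-access (interactive) key differs in that the user, not the data structure, chooses which character to examine next; the fixed tree above is what makes a key "single-access".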
https://en.wikipedia.org/wiki/115%20%28number%29
115 (one hundred [and] fifteen) is the natural number following 114 and preceding 116. In mathematics 115 has a square sum of divisors: σ(115) = 1 + 5 + 23 + 115 = 144 = 12^2. There are 115 different rooted trees with exactly eight nodes, 115 inequivalent ways of placing six rooks on a 6 × 6 chess board in such a way that no two of the rooks attack each other, and 115 solutions to the stamp folding problem for a strip of seven stamps. 115 is also a heptagonal pyramidal number. The 115th Woodall number, 115 · 2^115 − 1, is a prime number. 115 is the sum of the first five heptagonal numbers. See also 115 (disambiguation) References Integers
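Several of the claims above can be checked directly with a few lines of code (the standard formulas for heptagonal and heptagonal pyramidal numbers are used here):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# sigma(115) = 1 + 5 + 23 + 115 = 144 = 12^2, a square sum of divisors
assert sum(divisors(115)) == 144 == 12 ** 2

# heptagonal numbers h(n) = n(5n - 3)/2; 115 is the sum of the first five
hepta = [n * (5 * n - 3) // 2 for n in range(1, 6)]
assert hepta == [1, 7, 18, 34, 55]
assert sum(hepta) == 115

# equivalently, 115 is the fifth heptagonal pyramidal number,
# n(n + 1)(5n - 2)/6 with n = 5
assert 5 * 6 * (5 * 5 - 2) // 6 == 115

print("all checks pass")
```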
https://en.wikipedia.org/wiki/116%20%28number%29
116 (one hundred [and] sixteen) is the natural number following 115 and preceding 117. In mathematics 116 is a noncototient, meaning that there is no solution to the equation x − φ(x) = 116, where φ stands for Euler's totient function. 116! + 1 is a factorial prime. There are 116 ternary Lyndon words of length six, and 116 irreducible polynomials of degree six over a three-element field, which form the basis of a free Lie algebra of dimension 116. There are 116 different ways of partitioning the numbers from 1 through 5 into subsets in such a way that, for every k, the union of the first k subsets is a consecutive sequence of integers. There are 116 different 6×6 Costas arrays. See also 116 (disambiguation) References Integers
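The count of ternary Lyndon words of length six is small enough to verify by brute force. A Lyndon word is a string that is strictly smaller (lexicographically) than all of its proper rotations, so we can simply test all 3^6 = 729 ternary strings:

```python
from itertools import product

def is_lyndon(w):
    """True if w is strictly smaller than every proper rotation of itself
    (this also excludes periodic words, whose rotations repeat)."""
    return all(w < w[i:] + w[:i] for i in range(1, len(w)))

count = sum(1 for t in product("012", repeat=6) if is_lyndon("".join(t)))
print(count)  # 116
```

The same number can be obtained without enumeration from the necklace-counting formula (1/n) Σ_{d|n} μ(d) k^{n/d} with k = 3, n = 6.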
https://en.wikipedia.org/wiki/117%20%28number%29
117 (one hundred [and] seventeen) is the natural number following 116 and preceding 118. In mathematics 117 is the smallest possible length of the longest edge of an integer Heronian tetrahedron (a tetrahedron whose edge lengths, face areas and volume are all integers). Its other edge lengths are 51, 52, 53, 80 and 84. 117 is a pentagonal number. In other fields 117 can be a substitute for the number 17, which is considered unlucky in Italy. When Renault exported the R17 to Italy, it was renamed R117. Chinese dragons are usually depicted as having 117 scales, subdivided into 81 associated with yang and 36 associated with yin. In the Danish language, the number 117 is often used as a hyperbolic term to represent an arbitrary but large number. See also 117 (disambiguation) References Integers
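The pentagonal-number claim is easy to check with the standard formula p(n) = n(3n − 1)/2; 117 is the ninth pentagonal number:

```python
def pentagonal(n):
    return n * (3 * n - 1) // 2

# 117 = 9 * 26 / 2, the ninth pentagonal number
assert pentagonal(9) == 117

# the dragon-scale split mentioned above: 81 yang + 36 yin scales
assert 81 + 36 == 117

print("ok")
```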
https://en.wikipedia.org/wiki/118%20%28number%29
118 (one hundred [and] eighteen) is the natural number following 117 and preceding 119. In mathematics There is no answer to the equation φ(x) = 118, making 118 a nontotient. Four expressions for 118 as the sum of three positive integers have the same product: 14 + 50 + 54 = 15 + 40 + 63 = 18 + 30 + 70 = 21 + 25 + 72 = 118 and 14 × 50 × 54 = 15 × 40 × 63 = 18 × 30 × 70 = 21 × 25 × 72 = 37800. 118 is the smallest number that can be expressed as four sums with the same product in this way. Because of its expression as 3^5 − 5^3, it is a Leyland number of the second kind. 118!! - 1 is a prime number, where !! denotes the double factorial (the product of even integers up to 118). In other fields There are 118 known elements on the Periodic Table, the 118th element being oganesson. See also 118 (disambiguation) References Integers
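The equal-product property can be confirmed by brute force: group all unordered triples of positive integers summing to 118 by their product, and look at the product 37800 named above.

```python
from itertools import combinations_with_replacement
from collections import defaultdict

# Group unordered triples a <= b <= c with a + b + c = 118 by product.
by_product = defaultdict(list)
for a, b, c in combinations_with_replacement(range(1, 117), 3):
    if a + b + c == 118:
        by_product[a * b * c].append((a, b, c))

# The four expressions listed in the text all share product 37800.
for triple in [(14, 50, 54), (15, 40, 63), (18, 30, 70), (21, 25, 72)]:
    assert triple in by_product[37800]
print(len(by_product[37800]))
```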
https://en.wikipedia.org/wiki/Lists%20of%20fictional%20species
There are a number of lists of fictional species: Extraterrestrial List of fictional extraterrestrials (by media type) Lists of fictional alien species: A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z Humanoid Lists of humanoids Literature Comics Television Film Video games Paleoanthropological hoaxes Cardiff Giant Nebraska Man Piltdown Man Freak show Humanzee List of fictional extraterrestrials Legendary List of legendary creatures List of legendary creatures by type List of giants in mythology and folklore Vampire folklore by region Mythical Legendary creatures of the Argentine Northwest region Mythical creatures in Burmese folklore List of Greek mythological creatures List of legendary creatures from Japan List of Philippine mythological creatures Supernatural beings in Slavic folklore Plants and fungi List of fictional plants Reptilian List of dragons List of dragons in mythology and folklore List of dragons in literature List of dragons in popular culture List of dragons in film and television List of dragons in games List of fictional dinosaurs Theological List of fictional angels List of fictional demons
https://en.wikipedia.org/wiki/Lindbladian
In quantum mechanics, the Gorini–Kossakowski–Sudarshan–Lindblad equation (GKSL equation, named after Vittorio Gorini, Andrzej Kossakowski, George Sudarshan and Göran Lindblad), master equation in Lindblad form, quantum Liouvillian, or Lindbladian is one of the general forms of Markovian master equations describing open quantum systems. It generalizes the Schrödinger equation to open quantum systems; that is, systems in contacts with their surroundings. The resulting dynamics is no longer unitary, but still satisfies the property of being trace-preserving and completely positive for any initial condition. The Schrödinger equation or, actually, the von Neumann equation, is a special case of the GKSL equation, which has led to some speculation that quantum mechanics may be productively extended and expanded through further application and analysis of the Lindblad equation. The Schrödinger equation deals with state vectors, which can only describe pure quantum states and are thus less general than density matrices, which can describe mixed states as well. Motivation In the canonical formulation of quantum mechanics, a system's time evolution is governed by unitary dynamics. This implies that there is no decay and phase coherence is maintained throughout the process, and is a consequence of the fact that all participating degrees of freedom are considered. However, any real physical system is not absolutely isolated, and will interact with its environment. This interaction with degrees of freedom external to the system results in dissipation of energy into the surroundings, causing decay and randomization of phase. More so, understanding the interaction of a quantum system with its environment is necessary for understanding many commonly observed phenomena like the spontaneous emission of light from excited atoms, or the performance of many quantum technological devices, like the laser. Certain mathematical techniques have been introduced to treat the interaction
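The equation itself was stripped from the text above; in its standard diagonal form (with ρ the density matrix, H the system Hamiltonian, L_k the jump operators, and γ_k ≥ 0 the corresponding rates) the GKSL equation is commonly written:

```latex
\frac{d\rho}{dt}
  = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k} \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k,\; \rho \right\} \right)
```

Setting all γ_k = 0 removes the dissipative term and recovers the von Neumann equation dρ/dt = −(i/ħ)[H, ρ], which is the special case mentioned in the text.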
https://en.wikipedia.org/wiki/Symbols%20of%20death
Symbols of death are the motifs, images and concepts associated with death throughout different cultures, religions and societies. Images Various images are used traditionally to symbolize death; these range from blunt depictions of cadavers and their parts to more allusive suggestions that time is fleeting and all men are mortal. The human skull is an obvious and frequent symbol of death, found in many cultures and religious traditions. Human skeletons and sometimes non-human animal skeletons and skulls can also be used as blunt images of death; the traditional figure of the Grim Reaper – a black-hooded skeleton with a scythe – is one use of such symbolism. Within the Grim Reaper itself, the skeleton represents the decayed body whereas the robe symbolizes those worn by religious people conducting funeral services. The skull and crossbones motif (☠) has been used among Europeans as a symbol of both piracy and poison. The skull is also important as it remains the only "recognizable" aspect of a person once they have died. Decayed cadavers can also be used to depict death; in medieval Europe, they were often featured in artistic depictions of the danse macabre, or in cadaver tombs which depicted the living and decomposed body of the person entombed. Coffins also serve as blunt reminders of mortality. Europeans also used coffins and cemeteries to symbolize the wealth and status of the person who has died, serving as a reminder to the living as well. Less blunt symbols of death frequently allude to the passage of time and the fragility of life, and can be described as memento mori; that is, an artistic or symbolic reminder of the inevitability of death. Clocks, hourglasses, sundials, and other timepieces call to mind that time is passing. Similarly, a candle both marks the passage of time and bears witness that it will eventually burn itself out, while also serving as a symbol of hope of salvation. These sorts of symbols were often incorpora
https://en.wikipedia.org/wiki/RSA%20Factoring%20Challenge
The RSA Factoring Challenge was a challenge put forward by RSA Laboratories on March 18, 1991, to encourage research into computational number theory and the practical difficulty of factoring large integers and cracking RSA keys used in cryptography. They published a list of semiprimes (numbers with exactly two prime factors) known as the RSA numbers, with a cash prize for the successful factorization of some of them. The smallest of them, a 100-decimal-digit number called RSA-100, was factored by April 1, 1991. Many of the bigger numbers have still not been factored and are expected to remain unfactored for quite some time; however, advances in quantum computers make this prediction uncertain due to Shor's algorithm. In 2001, RSA Laboratories expanded the factoring challenge and offered prizes ranging from $10,000 to $200,000 for factoring numbers from 576 bits up to 2048 bits. The RSA Factoring Challenges ended in 2007. RSA Laboratories stated: "Now that the industry has a considerably more advanced understanding of the cryptanalytic strength of common symmetric-key and public-key algorithms, these challenges are no longer active." When the challenge ended in 2007, only RSA-576 and RSA-640 had been factored from the 2001 challenge numbers. The factoring challenge was intended to track the cutting edge in integer factorization. A primary application is for choosing the key length of the RSA public-key encryption scheme. Progress in this challenge should give an insight into which key sizes are still safe and for how long. As RSA Laboratories is a provider of RSA-based products, the challenge was used by them as an incentive for the academic community to attack the core of their solutions — in order to prove its strength. The RSA numbers were generated on a computer with no network connection of any kind. The computer's hard drive was subsequently destroyed so that no record would exist, anywhere, of the solution to the factoring challenge. The first RSA numbers g
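The difficulty the challenge probed is that of splitting a semiprime into its two prime factors. For small numbers this is easy; the sketch below uses Pollard's rho method (not the techniques used on the actual RSA numbers, which fell to far heavier machinery such as the general number field sieve) on a toy semiprime chosen for illustration:

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n via Pollard's rho."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                        # retry on a failed cycle
            return d

# A small semiprime standing in for an RSA challenge number.
n = 999983 * 1000003
p = pollard_rho(n)
print(sorted([p, n // p]))  # [999983, 1000003]
```

Pollard's rho runs in roughly n^(1/4) steps, which is why it cracks 12-digit toys instantly but is hopeless against the hundreds-of-digits RSA numbers.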
https://en.wikipedia.org/wiki/Zenzizenzizenzic
Zenzizenzizenzic is an obsolete form of mathematical notation representing the eighth power of a number (that is, the zenzizenzizenzic of x is x^8), dating from a time when powers were written out in words rather than as superscript numbers. This term was suggested by Robert Recorde, a 16th-century Welsh physician, mathematician and writer of popular mathematics textbooks, in his 1557 work The Whetstone of Witte (although his spelling was zenzizenzizenzike); he wrote that it "doeth represent the square of squares squaredly". History At the time Recorde proposed this notation, there was no easy way of denoting the powers of numbers other than squares and cubes. The root word for Recorde's notation is zenzic, which is a German spelling of the medieval Italian word censo, meaning 'squared'. Since the square of a square of a number is its fourth power, Recorde used the word zenzizenzic (spelled by him as zenzizenzike) to express it. Some of the terms had prior use in Latin , and . Similarly, as the sixth power of a number is equal to the square of its cube, Recorde used the word zenzicubike to express it; a more modern spelling, zenzicube, is found in Samuel Jeake's Arithmetick Surveighed and Reviewed. Finally, the word zenzizenzizenzic denotes the square of the square of a number's square, which is its eighth power: in modern notation, ((x^2)^2)^2 = x^8. Samuel Jeake gives zenzizenzizenzizenzike (the square of the square of the square of the square, or 16th power) in a table in A Compleat Body of Arithmetick (1701): The word, as well as the system, is obsolete except as a curiosity; the Oxford English Dictionary (OED) has only one citation for it. As well as being a mathematical oddity, it survives as a linguistic oddity: zenzizenzizenzic has more Zs than any other word in the OED. Notation for other powers Recorde proposed three mathematical terms by which any power (that is, index or exponent) greater than 1 could be expressed: zenzic, i.e. squared; cubic; and sursolid, i.e. ra
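Recorde's naming scheme is compositional: each "zenzi-" prefix means "square of". Composing the squaring function with itself three times gives exactly the eighth power, which a few lines make concrete:

```python
def zenzic(x):
    """Recorde's 'zenzic': the square of a number."""
    return x * x

def zenzizenzizenzic(x):
    """Square of the square of the square: the eighth power."""
    return zenzic(zenzic(zenzic(x)))

assert zenzizenzizenzic(3) == 3 ** 8 == 6561
print(zenzizenzizenzic(2))  # 256
```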
https://en.wikipedia.org/wiki/Tensor%20calculus
In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime). Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, it was used by Albert Einstein to develop his general theory of relativity. Unlike the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold. Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning. Working with Élie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarized the role of tensor calculus: In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus. Syntax Tensor notation makes use of upper and lower indexes on objects that are used to label a variable object as covariant (lower index), contravariant (upper index), or mixed covariant and contravariant (having both upper and lower indexes). In fact in conventional math syntax we make use of covariant indexes when dealing with Cartesia
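As a concrete instance of the index conventions just described, the components of a contravariant vector (upper index) and a covariant vector (lower index) transform oppositely under a change of coordinates x → x̄, with summation implied over the repeated index j:

```latex
\bar{v}^{\,i} = \frac{\partial \bar{x}^{\,i}}{\partial x^{\,j}}\, v^{\,j},
\qquad
\bar{w}_{i} = \frac{\partial x^{\,j}}{\partial \bar{x}^{\,i}}\, w_{j}
```

Because the two transformation laws are inverse to each other, the contraction v^i w_i is the same number in every coordinate system; this invariance of fully contracted expressions is what makes the notation coordinate-independent.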
https://en.wikipedia.org/wiki/D%27Alembert%27s%20principle
D'Alembert's principle, also known as the Lagrange–d'Alembert principle, is a statement of the fundamental classical laws of motion. It is named after its discoverer, the French physicist and mathematician Jean le Rond d'Alembert, and Italian-French mathematician Joseph Louis Lagrange. D'Alembert's principle generalizes the principle of virtual work from static to dynamical systems by introducing forces of inertia which, when added to the applied forces in a system, result in dynamic equilibrium. The principle does not apply for irreversible displacements, such as sliding friction, and more general specification of the irreversibility is required. D'Alembert's principle is more general than Hamilton's principle as it is not restricted to holonomic constraints that depend only on coordinates and time but not on velocities. Statement of the principle The principle states that the sum of the differences between the forces acting on a system of massive particles and the time derivatives of the momenta of the system itself projected onto any virtual displacement consistent with the constraints of the system is zero. Thus, in mathematical notation, d'Alembert's principle is written as follows: ∑i (Fi − mi v̇i) · δri = 0, where: i is an integer used to indicate (via subscript) a variable corresponding to a particular particle in the system, Fi is the total applied force (excluding constraint forces) on the i-th particle, mi is the mass of the i-th particle, vi is the velocity of the i-th particle, δri is the virtual displacement of the i-th particle, consistent with the constraints. Newton's dot notation is used to represent the derivative with respect to time. The above equation is often called d'Alembert's principle, but it was first written in this variational form by Joseph Louis Lagrange. D'Alembert's contribution was to demonstrate that in the totality of a dynamic system the forces of constraint vanish. That is to say that the generalized forces need not include constraint forces. It is equival
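A useful sanity check on the statement: for a single particle with no constraints, the virtual displacement is completely arbitrary, so the bracketed term must itself vanish and the principle reduces to Newton's second law:

```latex
(\mathbf{F} - m\dot{\mathbf{v}})\cdot\delta\mathbf{r} = 0
\quad \text{for all } \delta\mathbf{r}
\;\Longrightarrow\;
\mathbf{F} = m\dot{\mathbf{v}} = m\mathbf{a}
```

The power of the principle appears only with constraints: then δr is restricted to directions compatible with the constraints, the constraint forces (perpendicular to those directions) drop out of the sum, and only the applied forces remain, exactly as the text describes.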
https://en.wikipedia.org/wiki/MiniScribe
MiniScribe Corporation was a manufacturer of disk storage products, founded in Longmont, Colorado in 1980. MiniScribe designed and sold stepper motor-based hard disk drives with a large amount of onboard logic for the time. They eventually moved into higher-profile voice coil motor designs, and won major contracts with IBM. Massive financial fraud starting in 1987 led to bankruptcy in 1990, when their assets were bought by Maxtor and renamed Maxtor Colorado Corporation. Foundation, early slump, and recapitalization The company was started by Terry Johnson, who had a 20-year career in the hard drive business at such companies as IBM, Memorex and Storage Technology Corporation. MiniScribe became a major player when it won a series of contracts to supply IBM's PC division, and their subsequent rapid growth led to an initial public offering in late 1983, opening for trading in January 1984. However, slow sales of the IBM PC/XT led IBM to dramatically scale back their orders late that year, forcing MiniScribe to lay off 26% of its staff and causing the value of the stock to plummet. Johnson left the company; at the time he stated that he had been planning to do this for some time and that his departure had "absolutely nothing" to do with IBM. Johnson later said "It's a very low inertia industry, you can blow your way into it and get blown out of it very quickly". Roger Gower, who had been recently promoted to President, took over the CEO role as well. Shortly thereafter the company was recapitalized with a $20 million investment from Hambrecht & Quist (H&Q), a venture capital firm. One of H&Q's officers was Quentin Thomas ("QT") Wiles, a turnaround specialist nicknamed "Dr. Fix-It". Wiles took over the CEO position from Gower, running the company remotely from his office in Sherman Oaks, Los Angeles, with a management team made up primarily of accountants. The company soon returned to profitability, with sales increasing from $114 million at the height of IBM's orde
https://en.wikipedia.org/wiki/Sky%20island
Sky islands are isolated mountains surrounded by radically different lowland environments. The term originally referred to those found on the Mexican Plateau, and has extended to similarly isolated high-elevation forests. The isolation has significant implications for these natural habitats. The American Southwest region began warming up between and 10,000 years BP and atmospheric temperatures increased substantially, resulting in the formation of vast deserts that isolated the sky islands. Endemism, altitudinal migration, and relict populations are some of the natural phenomena to be found on sky islands. The complex dynamics of species richness on sky islands draws attention from the discipline of biogeography, and likewise the biodiversity is of concern to conservation biology. One of the key elements of a sky island is separation by physical distance from the other mountain ranges, resulting in a habitat island, such as a forest surrounded by desert. Some sky islands serve as refugia for boreal species stranded by warming climates since the last glacial period. In other cases, localized populations of plants and animals tend towards speciation, similar to oceanic islands such as the Galápagos Islands of Ecuador. Etymology Herpetologist Edward H. Taylor presented the concept of "Islands" on the Mexican Plateau in 1940 at the 8th American Scientific Congress in Washington, D. C. His abstract on the topic was published in 1942. The sky island concept was later applied in 1943 when Natt N. Dodge, in an article in Arizona Highways magazine, referred to the Chiricahua Mountains in southeastern Arizona as a "mountain island in a desert sea". In about the same era, the term was used to refer to high alpine, unglaciated, ancient topographic landform surfaces on the crest of the Sierra Nevada, California. The term was popularized by nature writer Weldon Heald, a resident of southeastern Arizona. In his 1967 book, Sky Island, he demonstrated the concept by describin
https://en.wikipedia.org/wiki/On-Line%20Encyclopedia%20of%20Integer%20Sequences
The On-Line Encyclopedia of Integer Sequences (OEIS) is an online database of integer sequences. It was created and maintained by Neil Sloane while researching at AT&T Labs. He transferred the intellectual property and hosting of the OEIS to the OEIS Foundation in 2009. Sloane is the chairman of the OEIS Foundation. OEIS records information on integer sequences of interest to both professional and amateur mathematicians, and is widely cited. It contains over 360,000 sequences, making it the largest database of its kind. Each entry contains the leading terms of the sequence, keywords, mathematical motivations, literature links, and more, including the option to generate a graph or play a musical representation of the sequence. The database is searchable by keyword, by subsequence, or by any of 16 fields. History Neil Sloane started collecting integer sequences as a graduate student in 1964 to support his work in combinatorics. The database was at first stored on punched cards. He published selections from the database in book form twice: A Handbook of Integer Sequences (1973), containing 2,372 sequences in lexicographic order and assigned numbers from 1 to 2372. The Encyclopedia of Integer Sequences with Simon Plouffe (1995), containing 5,488 sequences and assigned M-numbers from M0000 to M5487. The Encyclopedia includes the references to the corresponding sequences (which may differ in their first few terms) in A Handbook of Integer Sequences as N-numbers from N0001 to N2372 (instead of 1 to 2372). The Encyclopedia includes the A-numbers that are used in the OEIS, whereas the Handbook did not. These books were well received and, especially after the second publication, mathematicians supplied Sloane with a steady flow of new sequences. The collection became unmanageable in book form, and when the database had reached 16,000 entries Sloane decided to go online—first as an email service (August 1994), and soon after as a website (1996). As a spin-off fro
https://en.wikipedia.org/wiki/Agonist
An agonist is a chemical that activates a receptor to produce a biological response. Receptors are cellular proteins whose activation causes the cell to modify what it is currently doing. In contrast, an antagonist blocks the action of the agonist, while an inverse agonist causes an action opposite to that of the agonist. Etymology From the Greek αγωνιστής (agōnistēs), contestant; champion; rival < αγων (agōn), contest, combat; exertion, struggle < αγω (agō), I lead, lead towards, conduct; drive Types of agonists Receptors can be activated by either endogenous agonists (such as hormones and neurotransmitters) or exogenous agonists (such as drugs), resulting in a biological response. A physiological agonist is a substance that creates the same bodily responses but does not bind to the same receptor. An endogenous agonist for a particular receptor is a compound naturally produced by the body that binds to and activates that receptor. For example, the endogenous agonist for serotonin receptors is serotonin, and the endogenous agonist for dopamine receptors is dopamine. Full agonists bind to and activate a receptor with the maximum response that an agonist can elicit at the receptor. One example of a drug that can act as a full agonist is isoproterenol, which mimics the action of adrenaline at β adrenoreceptors. Another example is morphine, which mimics the actions of endorphins at μ-opioid receptors throughout the central nervous system. However, a drug can act as a full agonist in some tissues and as a partial agonist in other tissues, depending upon the relative numbers of receptors and differences in receptor coupling. A co-agonist works with other co-agonists to produce the desired effect together. NMDA receptor activation requires the binding of glutamate together with a co-agonist, either glycine or D-serine. Calcium can also act as a co-agonist at the IP3 receptor. A selective agonist is selective for a specific type of receptor. For example, buspirone is a selective agonist
https://en.wikipedia.org/wiki/ASIMO
ASIMO (Advanced Step in Innovative Mobility) is a humanoid robot created by Honda in 2000. It is displayed in the Miraikan museum in Tokyo, Japan. On 8 July 2018, Honda posted the last update of Asimo through their official page stating that it would be ceasing all development and production of Asimo robots in order to focus on more practical applications using the technology developed through Asimo's lifespan. It made its last active appearance in March 2022, over 20 years after its first, as Honda announced that they are retiring the robot to concentrate on remote-controlled, avatar-style, robotic technology. There are four published models of the Asimo. A few years after the release in 2002 there were 20 units of the first Asimo model produced. As of February 2009, there were over 100 ASIMO units in existence. Development Honda began developing humanoid robots in the 1980s, including several prototypes that preceded ASIMO. It was the company's goal to create a walking robot. E0 was the first bipedal (two-legged) model produced as part of the Honda E series, which was an early experimental line of self-regulating humanoid walking robots with wireless movement created between 1986 and 1993. This was followed by the Honda P series of robots produced from 1993 through 1997. The research done on the E- and P-series led to the creation of ASIMO. Development began at Honda's Wako Fundamental Technical Research Center in Japan in 1999 and ASIMO was unveiled in October 2000. ASIMO is an acronym which stands for Advanced Step in Innovative Mobility. In Japanese, ashi also means 'leg' and mo means 'also', so the name can be read as 'legs, too'. In 2018, Honda ceased the commercial development of ASIMO, although it will continue to be developed as a research platform and make public appearances. Form ASIMO stands tall and weighs . Research conducted by Honda found that the ideal height for a mobility assistant robot was between 120 cm and the hei
https://en.wikipedia.org/wiki/Henderson%E2%80%93Hasselbalch%20equation
In chemistry and biochemistry, the Henderson–Hasselbalch equation relates the pH of a chemical solution of a weak acid to the numerical value of the acid dissociation constant, Ka, of the acid and the ratio of the concentrations of the acid and its conjugate base at equilibrium: pH = pKa + log10([A−]/[HA]). For example, the acid may be acetic acid. The Henderson–Hasselbalch equation can be used to estimate the pH of a buffer solution by approximating the actual concentration ratio as the ratio of the analytical concentrations of the acid and of a salt, MA. The equation can also be applied to bases by specifying the protonated form of the base as the acid. For example, with an amine, Derivation, assumptions and limitations A simple buffer solution consists of a solution of an acid and a salt of the conjugate base of the acid. For example, the acid may be acetic acid and the salt may be sodium acetate. The Henderson–Hasselbalch equation relates the pH of a solution containing a mixture of the two components to the acid dissociation constant, Ka of the acid, and the concentrations of the species in solution. To derive the equation a number of simplifying assumptions have to be made. Assumption 1: The acid, HA, is monobasic and dissociates according to the equation HA ⇌ H+ + A−. CA is the analytical concentration of the acid and CH is the concentration of the hydrogen ion that has been added to the solution. The self-dissociation of water is ignored. A quantity in square brackets, [X], represents the concentration of the chemical substance X. It is understood that the symbol H+ stands for the hydrated hydronium ion. Ka is an acid dissociation constant. The Henderson–Hasselbalch equation can be applied to a polybasic acid only if its consecutive pK values differ by at least 3. Phosphoric acid is such an acid. Assumption 2. The self-ionization of water can be ignored. This assumption is not, strictly speaking, valid with pH values close to 7, half the value of pKw, the constant for self-ioniz
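The buffer-pH estimate described above is a one-line computation: pH = pKa + log10([A−]/[HA]), using analytical concentrations for the ratio. The concentrations below are illustrative values for an acetic acid/acetate buffer (pKa ≈ 4.76).

```python
import math

def buffer_ph(pka, acid_conc, base_conc):
    """Henderson-Hasselbalch estimate: pH = pKa + log10([A-]/[HA]).
    Valid under the assumptions in the text (no water self-ionization,
    analytical concentrations approximating equilibrium ones)."""
    return pka + math.log10(base_conc / acid_conc)

# Equal acid and conjugate-base concentrations give pH = pKa.
ph = buffer_ph(4.76, acid_conc=0.10, base_conc=0.10)
print(round(ph, 2))  # 4.76
```

Note the limiting behavior: a tenfold excess of conjugate base raises the pH by exactly one unit, which is why buffers are most effective within about one pH unit of the pKa.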
https://en.wikipedia.org/wiki/Mac%20OS%20X%2010.1
Mac OS X 10.1 (code named Puma) is the second major release of macOS, Apple's desktop and server operating system. It superseded Mac OS X 10.0 and preceded Mac OS X Jaguar. Mac OS X 10.1 was released on September 25, 2001, as a free update for Mac OS X 10.0 users. The operating system was handed out for no charge by Apple employees after Steve Jobs' keynote speech at the Seybold publishing conference in San Francisco. It was subsequently distributed to Mac users on October 25, 2001, at Apple Stores and other retail stores that carried Apple products. Mac OS X 10.1 was codenamed "Puma" because the internal team thought it was "one fast cat." System requirements Supported computers: Power Mac G3 Power Mac G4 Power Mac G4 Cube iMac G3 iMac G4, 2002 version only eMac, 2002 version only PowerBook G3, except for the original PowerBook G3 PowerBook G4 iBook RAM: 128 megabytes (MB) (unofficially 64 MB minimum) Hard Drive Space: 1.5 gigabytes (GB) Features Apple introduced many features that were missing from the previous version, as well as improving overall system performance. This system release brought some major new features to the Mac OS X platform: Performance enhancements — Mac OS X 10.1 introduced large performance increases throughout the system. Easier CD and DVD burning — better support in Finder as well as in iTunes DVD playback support — DVDs can be played in Apple DVD Player More printer support (200 printers supported out of the box) — One of the main complaints of version 10.0 users was the lack of printer drivers, and Apple attempted to remedy the situation by including more drivers, although many critics complained that there were still not enough. Faster 3D (OpenGL performs 20% faster) — The OpenGL drivers and handling were vastly improved in this version of Mac OS X, which created a large performance gap for 3D elements in the interface, and 3D applications. Improved AppleScript — The scripting interface now allows scripting access to many more syst
https://en.wikipedia.org/wiki/Mac%20OS%20X%2010.0
Mac OS X 10.0 (code named Cheetah) is the first major release of Mac OS X, Apple's desktop and server operating system. It was released on March 24, 2001, for a price of $129 after a public beta. Mac OS X was Apple's successor to the classic Mac OS. It was derived from NeXTSTEP and FreeBSD, and featured a new user interface called Aqua, as well as improved stability and security due to its new Unix foundations. It introduced the Quartz graphics rendering engine for hardware-accelerated animations. Many technologies were ported from the classic Mac OS, including Sherlock and the QuickTime framework. The core components of Mac OS X were open sourced as Darwin. Boxed releases of Mac OS X 10.0 also included a copy of Mac OS 9.1, which can be installed alongside Mac OS X 10.0, through the means of dual booting (which meant that reboots are required for switching between the two OSes). This was important for compatibility reasons; while many Mac OS 9 applications could be run under Mac OS X in the Classic environment, some, such as applications that directly accessed hardware, could only run under Mac OS 9. Six months after its release, Mac OS X 10.0 was succeeded by Mac OS X 10.1, code named Puma. Development The development of Mac OS X 10.0 began in 1998, after Apple acquired NeXT Computer, which was founded by Steve Jobs after he left Apple in the mid-1980s. The initial development of Mac OS X was led by Avie Tevanian, who had previously worked at NeXT and had played a key role in the development of NeXTSTEP. The development team faced significant challenges in merging the classic Mac OS with the new Unix-based architecture, as well as in creating a modern user interface that would be familiar to Mac users. Mac OS X 10.0 was released to the public on March 24, 2001, after several months of beta testing. The release was met with mixed reviews, with some users praising the new features and stability, while others criticized the lack of compatibility with older Mac
https://en.wikipedia.org/wiki/Invagination
Invagination is the process of a surface folding in on itself to form a cavity, pouch or tube. In developmental biology, invagination is a mechanism that takes place during gastrulation. This cell movement occurs mostly at the vegetal pole. Invagination consists of the folding of an area of the exterior sheet of cells towards the inside of the blastula. The complexity of the process differs between organisms, depending on the number of cells involved. Invagination can be regarded as one of the steps in the establishment of the body plan. The term, originally used in embryology, has been adopted in other disciplines as well. There is more than one type of movement for invagination. Two common types are axial and orthogonal; they differ in how the resulting tube is produced by the cytoskeleton and extracellular matrix. An axial invagination forms at a single point along the axis of a surface, whereas an orthogonal invagination forms a linear trough. Biology Invagination is one of the morphogenetic processes by which an embryo takes form, and is the initial step of gastrulation, the massive reorganization of the embryo from a simple spherical ball of cells, the blastula, into a multi-layered organism with differentiated germ layers: endoderm, mesoderm, and ectoderm. More localized invaginations also occur later in embryonic development. The inner membrane of a mitochondrion invaginates to form cristae, thus providing a much greater surface area to accommodate the protein complexes and other participants that produce adenosine triphosphate (ATP). Invagination occurs during endocytosis and exocytosis when a vesicle forms within the cell and the membrane closes around it. Invagination of a part of the intestine into another part is called intussusception. Amphioxus The invagination in Amphioxus is the first cell movement of gastrulation. This process was first described by Conklin. During gastrulation, the blastula is transformed by the invagination. The endoderm will fold towards the in
https://en.wikipedia.org/wiki/Clathrin
Clathrin is a protein that plays a major role in the formation of coated vesicles. Clathrin was first isolated and named by Barbara Pearse in 1976. It forms a triskelion shape composed of three clathrin heavy chains and three light chains. When the triskelia interact they form a polyhedral lattice that surrounds the vesicle, hence the protein's name, which is derived from the Latin clathrum meaning lattice. Coat-proteins, like clathrin, are used to build small vesicles in order to transport molecules within cells. The endocytosis and exocytosis of vesicles allows cells to communicate, to transfer nutrients, to import signaling receptors, to mediate an immune response after sampling the extracellular world, and to clean up the cell debris left by tissue inflammation. The endocytic pathway can be hijacked by viruses and other pathogens in order to gain entry to the cell during infection. Structure The clathrin triskelion is composed of three clathrin heavy chains interacting at their C-termini, each ~190 kDa heavy chain has a ~25 kDa light chain tightly bound to it. The three heavy chains provide the structural backbone of the clathrin lattice, and the three light chains are thought to regulate the formation and disassembly of a clathrin lattice. There are two forms of clathrin light chains, designated a and b. The main clathrin heavy chain, located on chromosome 17 in humans, is found in all cells. A second clathrin heavy chain gene, on chromosome 22, is expressed in muscle. Clathrin heavy chain is often described as a leg, with subdomains, representing the foot (the N-terminal domain), followed by the ankle, distal leg, knee, proximal leg, and trimerization domains. The N-terminal domain consists of a seven-bladed β-propeller structure. The other domains form a super-helix of short alpha helices. This was originally determined from the structure of the proximal leg domain that identified and is composed of a smaller structural module referred to as clathrin heav
https://en.wikipedia.org/wiki/Mucus
Mucus is a slippery aqueous secretion produced by, and covering, mucous membranes. It is typically produced from cells found in mucous glands, although it may also originate from mixed glands, which contain both serous and mucous cells. It is a viscous colloid containing inorganic salts, antimicrobial enzymes (such as lysozymes), immunoglobulins (especially IgA), and glycoproteins such as lactoferrin and mucins, which are produced by goblet cells in the mucous membranes and submucosal glands. Mucus serves to protect epithelial cells in the linings of the respiratory, digestive, and urogenital systems, and structures in the visual and auditory systems, from pathogenic fungi, bacteria and viruses. Most of the mucus in the body is produced in the gastrointestinal tract. Amphibians, fish, snails, slugs, and some other invertebrates also produce external mucus from their epidermis as protection against pathogens and to help in movement; in fish, mucus also lines the gills. Plants produce a similar substance called mucilage, as do some microorganisms. Respiratory system In the human respiratory system, mucus is part of the airway surface liquid (ASL), also known as epithelial lining fluid (ELF), that lines most of the respiratory tract. The airway surface liquid consists of a sol layer termed the periciliary liquid layer and an overlying gel layer termed the mucus layer. The periciliary liquid layer is so named because it surrounds the cilia and lies on top of the surface epithelium. The periciliary liquid layer surrounding the cilia consists of a gel meshwork of cell-tethered mucins and polysaccharides. The mucus blanket aids in the protection of the lungs by trapping foreign particles before they enter them, in particular through the nose during normal breathing. Mucus is made up of a fluid component of around 95% water, the mucin secretions from the goblet cells and the submucosal glands (2–3% glycoproteins), proteoglycans (0.1–0.5%),
https://en.wikipedia.org/wiki/Nocodazole
Nocodazole is an antineoplastic agent which exerts its effect in cells by interfering with the polymerization of microtubules. Microtubules are one type of fibre constituting the cytoskeleton, and the dynamic microtubule network has several important roles in the cell, including vesicular transport, formation of the mitotic spindle, and cytokinesis. Several drugs, including vincristine and colcemid, are similar to nocodazole in that they interfere with microtubule polymerization. Nocodazole has also been shown to decrease the oncogenic potential of cancer cells via other, microtubule-independent mechanisms. Nocodazole stimulates the expression of LATS2, which potently inhibits the Wnt signaling pathway by abrogating the interaction between the Wnt-dependent transcriptional co-factors beta-catenin and BCL9. It is related to mebendazole by replacement of the left-most benzene ring by thiophene. Use in cell biology research As nocodazole affects the cytoskeleton, it is often used in cell biology experiments as a control: for example, some dominant negative Rho small GTPases cause a similar effect as nocodazole, and constitutively activated mutants often reverse or negate the effect. Nocodazole is frequently used in cell biology laboratories to synchronize the cell division cycle. Cells treated with nocodazole arrest with a G2- or M-phase DNA content when analyzed by flow cytometry. Microscopy of nocodazole-treated cells shows that they do enter mitosis but cannot form metaphase spindles because microtubules (of which the spindles are made) cannot polymerise. The absence of microtubule attachment to kinetochores activates the spindle assembly checkpoint, causing the cell to arrest in prometaphase. For cell synchronization experiments, nocodazole is usually used at a concentration of 40–100 ng/mL of culture medium for a duration of 12–18 hours. Prolonged arrest of cells in mitosis due to nocodazole treatment typically results in cell death by apoptosis. Another standard
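The working concentrations quoted above imply very small volumes of a typical concentrated stock. A quick dilution calculation (C₁V₁ = C₂V₂) in Python; the 5 mg/mL stock concentration is a hypothetical illustrative value, not taken from the text:

```python
# Dilution arithmetic: how much stock to add to reach a target concentration.
stock_ug_per_ml = 5000.0   # hypothetical 5 mg/mL stock (e.g. in DMSO)
target_ng_per_ml = 100.0   # upper end of the 40-100 ng/mL range above
medium_ml = 10.0           # volume of culture medium to treat

target_ug_per_ml = target_ng_per_ml / 1000.0          # convert ng -> ug
total_ug_needed = target_ug_per_ml * medium_ml        # C2 * V2
stock_ul = total_ug_needed / stock_ug_per_ml * 1000.0 # V1, in microlitres

print(stock_ul)  # 0.2 uL of stock into 10 mL of medium
```

Volumes this small are why such stocks are usually pre-diluted before dosing.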
https://en.wikipedia.org/wiki/Flow%20cytometry
Flow cytometry (FC) is a technique used to detect and measure physical and chemical characteristics of a population of cells or particles. In this process, a sample containing cells or particles is suspended in a fluid and injected into the flow cytometer instrument. The sample is focused to ideally flow one cell at a time through a laser beam, where the light scattered is characteristic to the cells and their components. Cells are often labeled with fluorescent markers so light is absorbed and then emitted in a band of wavelengths. Tens of thousands of cells can be quickly examined and the data gathered are processed by a computer. Flow cytometry is routinely used in basic research, clinical practice, and clinical trials. Uses for flow cytometry include: Cell counting Cell sorting Determining cell characteristics and function Detecting microorganisms Biomarker detection Protein engineering detection Diagnosis of health disorders such as blood cancers Measuring genome size A flow cytometry analyzer is an instrument that provides quantifiable data from a sample. Other instruments using flow cytometry include cell sorters which physically separate and thereby purify cells of interest based on their optical properties. History The first impedance-based flow cytometry device, using the Coulter principle, was disclosed in U.S. Patent 2,656,508, issued in 1953, to Wallace H. Coulter. Mack Fulwyler was the inventor of the forerunner to today's flow cytometers - particularly the cell sorter. Fulwyler developed this in 1965 with his publication in Science. The first fluorescence-based flow cytometry device (ICP 11) was developed in 1968 by Wolfgang Göhde from the University of Münster, filed for patent on 18 December 1968 and first commercialized in 1968/69 by German developer and manufacturer Partec through Phywe AG in Göttingen. At that time, absorption methods were still widely favored by other scientists over fluorescence methods. Soon after, flow cytometr
https://en.wikipedia.org/wiki/Territory%20%28animal%29
In ethology, territory is the sociographical area that an animal consistently defends against conspecific competition (or, occasionally, against animals of other species) using agonistic behaviors or (less commonly) real physical aggression. Animals that actively defend territories in this way are referred to as being territorial or displaying territorialism. Territoriality is only shown by a minority of species. More commonly, an individual or a group of animals occupies an area that it habitually uses but does not necessarily defend; this is called its home range. The home ranges of different groups of animals often overlap, and in these overlap areas the groups tend to avoid each other rather than seeking to confront and expel each other. Within the home range there may be a core area that no other individual group uses, but, again, this is as a result of avoidance. Function The ultimate function of animals inhabiting and defending a territory is to increase the individual fitness or inclusive fitness of the animals expressing the behaviour. Fitness in this biological sense relates to the ability of an animal to survive and raise young. The proximate functions of territory defense vary. For some animals, the reason for such protective behaviour is to acquire and protect food sources, nesting sites, mating areas, or to attract a mate. Types and size Among birds, territories have been classified as six types. Type A: An 'all-purpose territory' in which all activities occur, e.g. courtship, mating, nesting and foraging Type B: A mating and nesting territory, not including most of the area used for foraging. Type C: A nesting territory which includes the nest plus a small area around it. Common in colonial waterbirds. Type D: A pairing and mating territory. The type of territory defended by males in lekking species. Type E: Roosting territory. Type F: Winter territory which typically includes foraging areas and roost sites. May be equivalent (in terms of locat
https://en.wikipedia.org/wiki/Computer%20poker%20player
A computer poker player is a computer program designed to play the game of poker (generally the Texas hold 'em version) against human opponents or other computer opponents. It is commonly referred to as a pokerbot or simply a bot. As of 2019, computers can beat any human player in poker. On the Internet These bots or computer programs are often used in online poker as either legitimate opponents for human players or as a form of cheating. As of 2020, all use of Real-Time Assistance (RTA) or automated bots is considered cheating by all online poker sites, although the level of enforcement from site operators varies considerably. Player bots Use of player bots or computer assistance while playing online poker is prohibited by most, if not all, online sites. Typical actions taken for breaches are a permanent ban and confiscation of winnings. One kind of bot can interface with the poker client (in other words, play by itself as an auto player) without the help of its human operator. Real-Time Assistance (RTA) is another method of using computer programs. RTA is when a human player uses a program called a "solver", such as PioSOLVER or PokerSnowie, running on a different computer, to make their decisions. The issue of unfair advantage is twofold. For one, bots can play for many hours at a time without human weaknesses such as fatigue and can endure the natural variances of the game without being influenced by human emotion (or "tilt"). Secondly, since 2019, the computer program Pluribus has been successful enough at reading bluffs, calculating odds, and adjusting strategy that it consistently beats professional poker players at 6-player no-limit Hold'em. House enforcement While the terms and conditions of poker sites generally forbid the use of bots, the level of enforcement depends on the site operator. Some will seek out and ban bot users through the utilization of a variety of software tools. The poker client can be programmed to try to detect bot
https://en.wikipedia.org/wiki/Fractional%20part
The fractional part or decimal part of a non‐negative real number x is the excess beyond that number's integer part. The latter is defined as the largest integer not greater than x, called the floor of x or ⌊x⌋. Then, the fractional part can be formulated as a difference: frac(x) = x − ⌊x⌋. For a positive number written in a conventional positional numeral system (such as binary or decimal), its fractional part hence corresponds to the digits appearing after the radix point. The result is a real number in the half-open interval [0, 1). For negative numbers However, in the case of negative numbers, there are various conflicting ways to extend the fractional part function to them: It is either defined in the same way as for positive numbers, i.e., by frac(x) = x − ⌊x⌋, or as the part of the number to the right of the radix point, frac(x) = |x| − ⌊|x|⌋, or by the odd function frac(x) = x − ⌊x⌋ for x ≥ 0 and frac(x) = x − ⌈x⌉ for x < 0, with ⌈x⌉ as the smallest integer not less than x, also called the ceiling of x. By consequence, we may get, for example, three different values for the fractional part of just one x: let it be −1.3; its fractional part will be 0.7 according to the first definition, 0.3 according to the second definition, and −0.3 according to the third definition, whose result can also be obtained in a straightforward way by −1.3 − ⌈−1.3⌉ = −1.3 − (−1) = −0.3. The ⌊x⌋-based and the "odd function" definitions permit a unique decomposition of any real number into the sum of its integer and fractional parts, where "integer part" refers to ⌊x⌋ or, for the odd definition, to ⌊x⌋ for x ≥ 0 and ⌈x⌉ for x < 0, respectively. These two definitions of the fractional-part function also provide idempotence. The fractional part defined via difference from ⌊x⌋ is usually denoted by curly braces: {x} = x − ⌊x⌋. Relation to continued fractions Every real number can be essentially uniquely represented as a continued fraction, namely as the sum of its integer part and the reciprocal of its fractional part; the reciprocal is in turn written as the sum of its integer part and the reciprocal of its fractional part, and so on. See also Circle group Equidistributed sequence One-parameter group Pisot–Vijayaraghavan number Significand R
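The three competing conventions are easy to compare side by side. A small Python sketch (the function names are ours, for illustration only):

```python
import math

def frac_floor(x):
    # Definition 1: x - floor(x); the result is always in [0, 1).
    return x - math.floor(x)

def frac_abs(x):
    # Definition 2: the digits to the right of the radix point, |x| - floor(|x|).
    return abs(x) - math.floor(abs(x))

def frac_odd(x):
    # Definition 3 (odd function): floor for x >= 0, ceiling for x < 0.
    return x - math.floor(x) if x >= 0 else x - math.ceil(x)

for f in (frac_floor, frac_abs, frac_odd):
    print(f.__name__, round(f(-1.3), 10))
# frac_floor -> 0.7, frac_abs -> 0.3, frac_odd -> -0.3
```

Only the first and third definitions make x decompose exactly into its integer part plus its fractional part for every real x, matching the text above.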
https://en.wikipedia.org/wiki/Magnetocrystalline%20anisotropy
In physics, a ferromagnetic material is said to have magnetocrystalline anisotropy if it takes more energy to magnetize it in certain directions than in others. These directions are usually related to the principal axes of its crystal lattice. It is a special case of magnetic anisotropy. In other words, the excess energy required to magnetize a specimen in a particular direction over that required to magnetize it along the easy direction is called crystalline anisotropy energy. Causes The spin-orbit interaction is the primary source of magnetocrystalline anisotropy. It is basically the orbital motion of the electrons which couples with crystal electric field giving rise to the first order contribution to magnetocrystalline anisotropy. The second order arises due to the mutual interaction of the magnetic dipoles. This effect is weak compared to the exchange interaction and is difficult to compute from first principles, although some successful computations have been made. Practical relevance Magnetocrystalline anisotropy has a great influence on industrial uses of ferromagnetic materials. Materials with high magnetic anisotropy usually have high coercivity, that is, they are hard to demagnetize. These are called "hard" ferromagnetic materials and are used to make permanent magnets. For example, the high anisotropy of rare-earth metals is mainly responsible for the strength of rare-earth magnets. During manufacture of magnets, a powerful magnetic field aligns the microcrystalline grains of the metal such that their "easy" axes of magnetization all point in the same direction, freezing a strong magnetic field into the material. On the other hand, materials with low magnetic anisotropy usually have low coercivity, their magnetization is easy to change. These are called "soft" ferromagnets and are used to make magnetic cores for transformers and inductors. The small energy required to turn the direction of magnetization minimizes core losses, energy dissipat
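For the common uniaxial case (e.g. a hexagonal crystal), the crystalline anisotropy energy density described above is conventionally written as a power series in the angle θ between the magnetization and the easy axis; this is a standard textbook form, added here for reference rather than taken from the text:

```latex
E_a = K_1 \sin^2\theta + K_2 \sin^4\theta + \cdots
```

With K₁ > 0 the energy is minimized at θ = 0, so the crystal axis is the easy axis; with K₁ < 0 the magnetization prefers the plane perpendicular to it.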
https://en.wikipedia.org/wiki/COPII
The Coat Protein Complex II, or COPII, is a group of proteins that facilitate the formation of vesicles to transport proteins from the endoplasmic reticulum to the Golgi apparatus or endoplasmic-reticulum–Golgi intermediate compartment. This process is termed anterograde transport, in contrast to the retrograde transport associated with the COPI complex. COPII is assembled in two parts: first an inner layer of Sar1, Sec23, and Sec24 forms; then the inner coat is surrounded by an outer lattice of Sec13 and Sec31. Function The COPII coat is responsible for the formation of vesicles from the endoplasmic reticulum (ER). These vesicles transport cargo proteins to the Golgi apparatus (in yeast) or the endoplasmic-reticulum-Golgi intermediate compartment (ERGIC, in mammals). Coat assembly is initiated when the cytosolic Ras GTPase Sar1 is activated by its guanine nucleotide exchange factor Sec12. Activated Sar1-GTP inserts itself into the ER membrane, binding preferentially to areas of membrane curvature. As Sar1-GTP inserts into the membrane, it recruits Sec23 and Sec24 to make up the inner cage. Once the inner coat is assembled, the outer coat proteins Sec13 and Sec31 are recruited to the budding vesicle. Hydrolysis of the Sar1 GTP to GDP promotes disassembly of the coat. Some proteins are found to be responsible for selectively packaging cargos into COPII vesicles. More recent research suggests the Sec23/Sec24-Sar1 complex participates in cargo selection. For example, Erv29p in Saccharomyces cerevisiae is found to be necessary for packaging glycosylated pro-α-factor. Sec24 proteins recognize various cargo proteins, packaging them into the budding vesicles. Structure The COPII coat consists of an inner layer – a flexible meshwork of Sar1, Sec23, and Sec24 – and an outer layer made of Sec13 and Sec31. Sar1 resembles other Ras-family GTPases, with a core of six beta strands flanked by three alpha helices, and two flexible "switch domains". Unlike other Ras GTPases, S
https://en.wikipedia.org/wiki/COPI
COPI is a coatomer, a protein complex that coats vesicles transporting proteins from the cis end of the Golgi complex back to the rough endoplasmic reticulum (ER), where they were originally synthesized, and between Golgi compartments. This type of transport is retrograde transport, in contrast to the anterograde transport associated with the COPII protein. The name "COPI" refers to the specific coat protein complex that initiates the budding process on the cis-Golgi membrane. The coat consists of large protein subcomplexes that are made of seven different protein subunits, namely α, β, β', γ, δ, ε and ζ. Coat proteins Coat protein, or COPI, is an ADP ribosylation factor (ARF)-dependent protein involved in membrane traffic. COPI was first identified in retrograde traffic from the cis-Golgi to the rough endoplasmic reticulum (ER) and is the most extensively studied of ARF-dependent adaptors. COPI consists of seven subunits which compose the heteroheptameric protein complex. The primary function of adaptors is the selection of cargo proteins for their incorporation into nascent carriers. Cargo containing the sorting motifs KKXX and KXKXX interact with COPI to form carriers which are transported from the cis-Golgi to the ER. Current views suggest that ARFs are also involved in the selection of cargo for incorporation into carriers. Budding process ADP ribosylation factor (ARF) is a GTPase involved in membrane traffic. There are 6 mammalian ARFs which are regulated by over 30 guanine nucleotide exchange factors (GEFs) and GTPase activating proteins (GAPs). ARF is post-translationally modified at the N-terminus by the addition of the fatty acid myristate. ARF cycles between GTP and GDP-bound conformations. In the GTP-bound form, ARF conformation changes such that the myristate and hydrophobic N-terminal become more exposed and associate with the membrane. The interconversion between GTP and GDP bound states is mediated by ARF GEFs and ARF GAPs. At the membrane, A
https://en.wikipedia.org/wiki/Equals%20sign
The equals sign (British English) or equal sign (American English), also known as the equality sign, is the mathematical symbol =, which is used to indicate equality in some well-defined sense. In an equation, it is placed between two expressions that have the same value, or for which one studies the conditions under which they have the same value. In Unicode and ASCII, it has the code point U+003D. It was invented in 1557 by Robert Recorde. History The etymology of the word "equal" is from the Latin word "æqualis", meaning "uniform", "identical", or "equal", from aequus ("level", "even", or "just"). The symbol, now universally accepted in mathematics for equality, was first recorded by Welsh mathematician Robert Recorde in The Whetstone of Witte (1557). The original form of the symbol was much wider than the present form. In his book Recorde explains his design of the "Gemowe lines" (meaning twin lines, from the Latin gemellus). The symbol was not immediately popular: other notations competed with it, and the abbreviation æ (or œ), from the Latin word aequalis meaning equal, was widely used into the 1700s (History of Mathematics, University of St Andrews). Usage in mathematics and computer programming In mathematics, the equal sign can be used as a simple statement of fact in a specific case, to create definitions, to state conditional equations, or to express a universal equivalence. The first important computer programming language to use the equal sign was the original version of Fortran, FORTRAN I, designed in 1954 and implemented in 1957. In Fortran, = serves as an assignment operator: X = 2 sets the value of X to 2. This somewhat resembles the use of = in a mathematical definition, but with different semantics: the expression following = is evaluated first, and may refer to a previous value of X. For example, the assignment X = X + 2 increases the value of X by 2. A rival programming-language usage was pioneered by the original version of ALGOL, which was designed in 1958 and impleme
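The contrast between assignment and mathematical equality can be shown in any language that separates the two notations; a minimal Python sketch (Python writes the comparison as ==):

```python
# Assignment, as in Fortran's X = 2: the right-hand side is evaluated first,
# then the result is stored into the variable on the left.
x = 2
x = x + 2      # legal as an assignment; false if read as an equation
print(x)       # 4

# Equality as a testable statement is written == in Python:
print(x == 4)  # True
```

ALGOL's rival convention, mentioned next in the text, instead kept = for equality and introduced a distinct assignment symbol.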
https://en.wikipedia.org/wiki/NICAM
Near Instantaneous Companded Audio Multiplex (NICAM) is an early form of lossy compression for digital audio. It was originally developed in the early 1970s for point-to-point links within broadcasting networks. In the 1980s, broadcasters began to use NICAM compression for transmissions of stereo TV sound to the public. History Near-instantaneous companding The idea was first described in 1964. In this, the 'ranging' was to be applied to the analogue signal before the analogue-to-digital converter (ADC) and after the digital-to-analogue converter (DAC). The application of this to broadcasting, in which the companding was to be done entirely digitally after the ADC and before the DAC, was described in a 1972 BBC Research Report. Point-to-point links NICAM was originally intended to provide broadcasters with six high-quality audio channels within a total bandwidth of 2048 kbit/s. This figure was chosen to match the E1 primary multiplex rate, and systems using this rate could make use of the planned PDH national and international telecommunications networks. Several similar systems had been developed in various countries, and in about 1977/78 the BBC Research Department conducted listening tests to evaluate them. The candidates were: A RAI system which used A-law companding to compress 14-bit linear PCM samples into 10 bits (14:10) A NICAM-type system proposed by Télédiffusion de France (14:9) NICAM-1 (13:10) NICAM-2 (14:11) NICAM-3 (14:10) It was found that NICAM-2 provided the best sound quality, but reduced programme-modulated noise to an unnecessarily low level at the expense of bit rate. NICAM-3, which had been proposed during the test to address this, was selected as the winner. Audio is encoded using 14 bit pulse-code modulation at a sampling rate of 32 kHz. Broadcasts to the public NICAM's second role – transmission to the public – was developed in the 80s by the BBC. This variant was known as NICAM-728, after the 728 kbit/s bitstream it is sent
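The 14:10 companding idea can be illustrated with a toy model: for each block of samples, choose a single shift so the loudest sample fits in a signed 10-bit word, then drop that many low-order bits from every sample in the block. This is only a simplified sketch of the principle; real NICAM works on fixed 32-sample blocks and adds scale-factor signalling, parity protection and interleaving, all omitted here.

```python
def compand_block(block, in_bits=14, out_bits=10):
    """Reduce signed in_bits samples to out_bits mantissas + one block shift."""
    peak = max((abs(s) for s in block), default=0)
    # Bits to drop so the loudest sample still fits in a signed 10-bit word:
    shift = max(0, peak.bit_length() - (out_bits - 1))
    shift = min(shift, in_bits - out_bits)   # at most 4 bits are ever dropped
    return shift, [s >> shift for s in block]

def expand_block(shift, coded):
    # Receiver side: undo the shift (the dropped low-order bits are lost).
    return [c << shift for c in coded]

original = [8000, -4000, 123, -7]            # a loud block -> coarse step size
shift, coded = compand_block(original)
print(shift, expand_block(shift, coded))
```

Quiet blocks get shift 0 and pass through losslessly; loud blocks are quantised more coarsely, where the error is masked by the signal itself — the essence of near-instantaneous companding.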
https://en.wikipedia.org/wiki/Poisson%20summation%20formula
In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation. Forms of the equation Consider an aperiodic function s(x) with Fourier transform S(f) ≜ ∫ s(x) e^{−i2πfx} dx, alternatively designated by ŝ(f) and 𝓕{s}(f). The basic Poisson summation formula is: ∑_{n=−∞}^{∞} s(n) = ∑_{k=−∞}^{∞} S(k). Also consider periodic functions, where parameters P > 0 and T > 0 are in the same units as x: s_P(x) ≜ ∑_{n=−∞}^{∞} s(x + nP) and S_{1/T}(f) ≜ ∑_{k=−∞}^{∞} S(f + k/T). Then the basic formula is a special case (P = 1, x = 0) of this generalization: s_P(x) = ∑_{k=−∞}^{∞} (1/P) S(k/P) e^{i2π(k/P)x}, which is a Fourier series expansion with coefficients that are samples of the function S. Similarly: S_{1/T}(f) = ∑_{n=−∞}^{∞} T s(nT) e^{−i2πfnT}, also known as the important discrete-time Fourier transform. The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as 0 → ℤ → ℝ → ℝ/ℤ → 0. Applicability The generalization holds provided s is a continuous integrable function which satisfies |s(x)| + |S(x)| ≤ C(1 + |x|)^{−1−δ} for some C > 0, δ > 0 and every x. Note that such an s is uniformly continuous; this, together with the decay assumption on s, shows that the series defining s_P converges uniformly to a continuous function. The formula then holds in the strong sense that both sides converge uniformly and absolutely to the same limit. It holds in a pointwise sense under the strictly weaker assumption that s has bounded variation and 2s(x) = lim_{ε→0} (s(x + ε) + s(x − ε)). The Fourier series on the right-hand side is then understood as a (conditionally convergent) limit of symmetric partial sums. As shown above, the formula holds under the much less restrictive assumption that s is in L¹(ℝ), but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) F
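The basic formula can be checked numerically with a function whose transform is known in closed form. A small Python sketch using a Gaussian, s(x) = exp(−πax²), whose Fourier transform is S(f) = a^{−1/2} exp(−πf²/a); the value a = 0.5 is an arbitrary choice:

```python
import math

a = 0.5  # any a > 0 works; the Gaussian is chosen for its closed-form transform

def s(x):
    # s(x) = exp(-pi * a * x^2)
    return math.exp(-math.pi * a * x * x)

def S(f):
    # Fourier transform of s: S(f) = a**-0.5 * exp(-pi * f^2 / a)
    return math.exp(-math.pi * f * f / a) / math.sqrt(a)

# Both sums decay so fast that truncating at |n| = 50 is far beyond sufficient.
lhs = sum(s(n) for n in range(-50, 51))   # samples of the function
rhs = sum(S(k) for k in range(-50, 51))   # samples of its transform
print(lhs, rhs)                            # the two sums agree
```

The rapid decay of the Gaussian on both sides is exactly the kind of decay condition the applicability discussion above requires.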
https://en.wikipedia.org/wiki/Thermal%20diode
The term "thermal diode" can refer to: a (possibly non-electrical) device which allows heat to flow preferentially in one direction; an electrical (semiconductor) diode in reference to a thermal effect or function; or it may describe both situations, where an electrical diode is used as a heat pump or thermoelectric cooler. One-way heat-flow A thermal diode in this sense is a device whose thermal resistance is different for heat flow in one direction than for heat flow in the other direction. I.e., when the thermal diode's first terminal is hotter than the second, heat will flow easily from the first to the second, but when the second terminal is hotter than the first, little heat will flow from the second to the first. Such an effect was first observed in a copper–cuprous-oxide interface by Chauncey Starr in the 1930s. Beginning in 2002, theoretical models were proposed to explain this effect. In 2006 the first microscopic solid-state thermal diodes were built. In April 2015 Italian researchers at CNR announced development of a working thermal diode, publishing results in Nature Nanotechnology. Thermal siphons can act as a one-way heat flow. Heat pipes operating in gravity may also have this effect. Electrical diode thermal effect or function A sensor device embedded on microprocessors used to monitor the temperature of the processor's die is also known as a "thermal diode". This application of thermal diode is based on the property of electrical diodes to change voltage across it linearly according to temperature. As the temperature increases, diodes' forward voltage decreases. Microprocessors having high clock rate encounter high thermal loads. To monitor the temperature limits thermal diodes are used. They are usually placed in that part of the processor core where highest temperature is encountered. Voltage developed across it varies with the temperature of the diode. All modern AMD and Intel CPUs, as well as AMD and Nvidia GPUs have on-chip thermal d
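The die-temperature sensing principle described above can be sketched with a linear model. The numbers below (0.70 V forward voltage at 25 °C, a temperature coefficient of −2 mV/°C) are typical illustrative magnitudes for a silicon junction, not values from any specific datasheet:

```python
# Linear model: V_F = V_REF + TEMPCO * (T - T_REF)
V_REF = 0.700    # volts at the reference temperature (assumed)
T_REF = 25.0     # reference temperature in deg C (assumed)
TEMPCO = -0.002  # volts per deg C; forward voltage falls as the die heats up

def die_temperature(v_forward):
    # Invert the linear model to recover temperature from measured voltage.
    return T_REF + (v_forward - V_REF) / TEMPCO

print(die_temperature(0.600))  # a 100 mV drop corresponds to roughly 75 deg C
```

Real monitoring circuits calibrate out device-to-device variation, often by measuring the diode at two different bias currents rather than trusting absolute voltage.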
https://en.wikipedia.org/wiki/Broth
Broth, also known as bouillon (), is a savory liquid made of water in which meat, fish, or vegetables have been simmered for a short period of time. It can be eaten alone, but it is most commonly used to prepare other dishes, such as soups, gravies, and sauces. Commercially prepared liquid broths are available, typically chicken, beef, fish, and vegetable varieties. Dehydrated broth in the form of bouillon cubes was commercialized beginning in the early 20th century. Stock versus broth Many cooks and food writers use the terms broth and stock interchangeably. In 1974, James Beard (an American cook) wrote that stock, broth, and bouillon "are all the same thing". While many draw a distinction between stock and broth, the details of the distinction often differ. One possibility is that stocks are made primarily from animal bones, as opposed to meat, and therefore contain more gelatin, giving them a thicker texture. Another distinction that is sometimes made is that stock is cooked longer than broth and therefore has a more intense flavor. A third possible distinction is that stock is left unseasoned for use in other recipes, while broth is salted and otherwise seasoned and can be eaten alone. Scotch broth is a soup which includes solid pieces of meat and vegetables. Its name reflects an older usage of the term "broth" that did not distinguish between the complete soup and its liquid component. See also Canja de galinha Rosół Bouillon, a Haitian soup Court-bouillon, from the French court or "short broth"
https://en.wikipedia.org/wiki/Inverse%20kinematics
In computer animation and robotics, inverse kinematics is the mathematical process of calculating the variable joint parameters needed to place the end of a kinematic chain, such as a robot manipulator or animation character's skeleton, in a given position and orientation relative to the start of the chain. Given joint parameters, the position and orientation of the chain's end, e.g. the hand of the character or robot, can typically be calculated directly using multiple applications of trigonometric formulas, a process known as forward kinematics. However, the reverse operation is, in general, much more challenging. Inverse kinematics is also used to recover the movements of an object in the world from some other data, such as a film of those movements, or a film of the world as seen by a camera which is itself making those movements. This occurs, for example, where a human actor's filmed movements are to be duplicated by an animated character. Robotics In robotics, inverse kinematics makes use of the kinematics equations to determine the joint parameters that provide a desired configuration (position and rotation) for each of the robot's end-effectors. This is important because robot tasks are performed with the end effectors, while control effort applies to the joints. Determining the movement of a robot so that its end-effectors move from an initial configuration to a desired configuration is known as motion planning. Inverse kinematics transforms the motion plan into joint actuator trajectories for the robot. Similar formulas determine the positions of the skeleton of an animated character that is to move in a particular way in a film, or of a vehicle such as a car or boat containing the camera which is shooting a scene of a film. Once a vehicle's motions are known, they can be used to determine the constantly-changing viewpoint for computer-generated imagery of objects in the landscape such as buildings, so that these objects change in perspective while them
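For the simplest kinematic chain, a planar arm with two rotational joints, the inverse kinematics problem described above has a closed-form solution via the law of cosines. The sketch below is a standard textbook derivation (here returning one of the two possible elbow configurations), not code from any particular robotics library.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm with link
    lengths l1, l2. Returns joint angles (theta1, theta2) that place the
    end effector at (x, y), or None if the target is out of reach."""
    d2 = x * x + y * y
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target lies outside the reachable annulus
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used to check the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Feeding the IK result back through the forward kinematics recovers the target position, illustrating that forward kinematics is direct while the inverse problem may have zero, one, or two solutions even in this simple case.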
https://en.wikipedia.org/wiki/Complexity%20class
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabilistic Turing machines, interactive proof systems, Boolean circuits, and quantum computers). The study of the relationships between complexity classes is a major area of research in theoretical computer science. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way: NL⊆P⊆NP⊆PSPACE⊆EXPTIME⊆EXPSPACE (where ⊆ denotes the subset relation). However, many relationships are not yet known; for example, one of the most famous open problems in computer science concerns whether P equals NP. The relationships between classes often answer questions about the fundamental nature of computation. The P versus NP problem, for instance, is directly related to questions of whether nondeterminism adds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved. Background Complexity classes are sets of related computational problems. They are defined in terms of the computational difficulty of solving the problems contained within them with respect t
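The check-versus-solve asymmetry behind the P versus NP question can be made concrete with subset sum, a standard NP-complete problem: verifying a proposed solution (a "certificate") takes time polynomial in the input size, while the obvious search is exponential. This sketch is illustrative; the function names are invented for the example.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Check a proposed certificate (a list of indices) in polynomial time."""
    return all(i in range(len(numbers)) for i in certificate) and \
           sum(numbers[i] for i in set(certificate)) == target

def solve(numbers, target):
    """Brute-force search over all subsets: exponential in len(numbers)."""
    for r in range(len(numbers) + 1):
        for idx in combinations(range(len(numbers)), r):
            if sum(numbers[i] for i in idx) == target:
                return list(idx)
    return None  # no subset sums to the target
```

Whether every problem whose certificates can be verified this quickly also admits a polynomial-time solver is exactly the P versus NP question; no known algorithm avoids the exponential search here in the worst case.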
https://en.wikipedia.org/wiki/Passwd
passwd is a command on Unix, Plan 9, Inferno, and most Unix-like operating systems used to change a user's password. The password entered by the user is run through a key derivation function to create a hashed version of the new password, which is saved. Only the hashed version is stored; the entered password is not saved for security reasons. When the user logs on, the password entered by the user during the log on process is run through the same key derivation function and the resulting hashed version is compared with the saved version. If the hashes are identical, the entered password is considered to be correct, and the user is authenticated. In theory, it is possible for two different passwords to produce the same hash. However, cryptographic hash functions are designed in such a way that finding any password that produces the same hash is very difficult and practically infeasible, so if the produced hash matches the stored one, the user can be authenticated. The passwd command may be used to change passwords for local accounts, and on most systems, can also be used to change passwords managed in a distributed authentication mechanism such as NIS, Kerberos, or LDAP. Password file The /etc/passwd file is a text-based database of information about users that may log into the system or other operating system user identities that own running processes. In many operating systems this file is just one of many possible back-ends for the more general passwd name service. The file's name originates from one of its initial functions as it contained the data used to verify passwords of user accounts. However, on modern Unix systems the security-sensitive password information is instead often stored in a different file using shadow passwords, or other database implementations. The /etc/passwd file typically has file system permissions that allow it to be readable by all users of the system (world-readable), although it may only be modified by the superuser or by us
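The store-a-hash, re-derive-and-compare scheme described above can be sketched with a generic key derivation function. This illustrates the principle only; real passwd implementations use the system's configured crypt algorithm (and the parameter choices below, such as the iteration count, are assumptions for the example).

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Run the password through a key derivation function (PBKDF2 here)
    and return (salt, digest); only these are stored, never the password."""
    if salt is None:
        salt = os.urandom(16)  # random salt defeats precomputed-hash tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 100_000)
    return salt, digest

def check_password(password, salt, stored_digest):
    """At log-on, re-derive the hash from the entered password and compare
    it with the stored version using a constant-time comparison."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

If the two hashes match, the user is authenticated; as the article notes, finding a different password producing the same hash is designed to be infeasible.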
https://en.wikipedia.org/wiki/Joule%20%28programming%20language%29
Joule is a capability-secure massively-concurrent dataflow programming language, designed for building distributed applications. It is so concurrent that the order of statements within a block is irrelevant to the operation of the block. Statements are executed whenever possible, based on their inputs. Everything in Joule happens by sending messages. There is no control flow. Instead, the programmer describes the flow of data, making it a dataflow programming language. Joule development started in 1994 at Agorics in Palo Alto, California. It is considered the precursor to the E programming language. Language syntax Numerals consist of ASCII digits 0–9; identifiers are Unicode sequences of digits, letters, and operator characters that begin with a letter. It is also possible to form identifiers by using Unicode sequences (including whitespace) enclosed by either straight (' ') or standard (‘ ’) single quotes, where the backslash is the escape character. Keywords have to start with a letter, except the • keyword to send information. Operators consist of Unicode sequences of digits, letters, and operator characters, beginning with an operator character. Labels are identifiers followed by a colon (':'). At the root, Joule is an imperative language and because of that a statement-based language. It has a rich expression syntax, which transforms easily to its relational syntax underneath. Complex expressions become separate statements, where the site of the original expression is replaced by a reference to the acceptor of the results channel. Therefore, nested expressions still compute completely concurrently with their embedding statement.
If amount <= balance
 • account withdraw: amount
else
 • account report-bounce:
end
An identifier may name a channel to communicate with the server. If this is the case, it is said to be bound to that channel. References External links Joule: Distributed Application Foundations C2: Promise Pipelini
https://en.wikipedia.org/wiki/Thermal%20reservoir
A thermal reservoir, also thermal energy reservoir or thermal bath, is a thermodynamic system with a heat capacity so large that the temperature of the reservoir changes relatively little even when a significant amount of heat is added or extracted. As a conceptual simplification, it effectively functions as an infinite pool of thermal energy at a given, constant temperature. Since it can act as an inertial source and sink of heat, it is often also referred to as a heat reservoir or heat bath. Lakes, oceans and rivers often serve as thermal reservoirs in geophysical processes, such as the weather. In atmospheric science, large air masses in the atmosphere often function as thermal reservoirs. Since the temperature T of a thermal reservoir does not change when an amount of heat Q is transferred, the change of entropy in the reservoir is ΔS = Q/T. The microcanonical partition sum Z(E) of a heat bath of temperature T has the property Z(E + ΔE) = Z(E)·e^(ΔE/(k_B T)), where k_B is the Boltzmann constant. It thus changes by the same factor when a given amount of energy ΔE is added. The exponential factor in this expression can be identified with the reciprocal of the Boltzmann factor. For an engineering application, see geothermal heat pump. See also Thermal battery Thermal energy storage
https://en.wikipedia.org/wiki/Remainder
In mathematics, the remainder is the amount "left over" after performing some computation. In arithmetic, the remainder is the integer "left over" after dividing one integer by another to produce an integer quotient (integer division). In algebra of polynomials, the remainder is the polynomial "left over" after dividing one polynomial by another. The modulo operation is the operation that produces such a remainder when given a dividend and divisor. Alternatively, a remainder is also what is left after subtracting one number from another, although this is more precisely called the difference. This usage can be found in some elementary textbooks; colloquially it is replaced by the expression "the rest" as in "Give me two dollars back and keep the rest." However, the term "remainder" is still used in this sense when a function is approximated by a series expansion, where the error expression ("the rest") is referred to as the remainder term. Integer division Given an integer a and a non-zero integer d, it can be shown that there exist unique integers q and r, such that a = q⋅d + r and 0 ≤ r < |d|. The number q is called the quotient, while r is called the remainder. (For a proof of this result, see Euclidean division. For algorithms describing how to calculate the remainder, see division algorithm.) The remainder, as defined above, is called the least positive remainder or simply the remainder. The integer a is either a multiple of d, or lies in the interval between consecutive multiples of d, namely, q⋅d and (q + 1)d (for positive q). On some occasions, it is convenient to carry out the division so that a is as close to an integral multiple of d as possible, that is, we can write a = k⋅d + s, with |s| ≤ |d/2| for some integer k. In this case, s is called the least absolute remainder. As with the quotient and remainder, k and s are uniquely determined, except in the case where d = 2n and s = ± n. For this exception, we have: a = k⋅d + n = (k + 1)d − n. A unique remainder can be obta
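The two notions of remainder above can be computed directly. The helper names below are invented for illustration; note that Python's % operator already yields the least positive remainder for a positive divisor, and the tie case d = 2n is resolved here by choosing +n.

```python
def least_positive_remainder(a, d):
    """The unique r with a = q*d + r and 0 <= r < |d|."""
    return a % abs(d)

def least_absolute_remainder(a, d):
    """An s with a = k*d + s and |s| <= |d|/2.
    At the ambiguous tie |s| = |d|/2 this picks the positive value."""
    r = a % abs(d)
    if r > abs(d) / 2:
        r -= abs(d)
    return r
```

For example, dividing 43 by 5 gives least positive remainder 3 (43 = 8⋅5 + 3) but least absolute remainder −2 (43 = 9⋅5 − 2).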
https://en.wikipedia.org/wiki/PubMed
PubMed is a free search engine accessing primarily the MEDLINE database of references and abstracts on life sciences and biomedical topics. The United States National Library of Medicine (NLM) at the National Institutes of Health maintains the database as part of the Entrez system of information retrieval. From 1971 to 1997, online access to the MEDLINE database had been primarily through institutional facilities, such as university libraries. PubMed, first released in January 1996, ushered in the era of private, free, home- and office-based MEDLINE searching. The PubMed system was offered free to the public starting in June 1997. Content In addition to MEDLINE, PubMed provides access to: older references from the print version of Index Medicus, back to 1951 and earlier; references to some journals before they were indexed in Index Medicus and MEDLINE, for instance Science, BMJ, and Annals of Surgery; very recent entries to records for an article before it is indexed with Medical Subject Headings (MeSH) and added to MEDLINE; a collection of books available full-text and other subsets of NLM records; PMC citations; NCBI Bookshelf. Many PubMed records contain links to full text articles, some of which are freely available, often in PubMed Central and local mirrors, such as Europe PubMed Central. Information about the journals indexed in MEDLINE, and available through PubMed, is found in the NLM Catalog. , PubMed has more than 35 million citations and abstracts dating back to 1966, selectively to the year 1865, and very selectively to 1809. , 24.6 million of PubMed's records are listed with their abstracts, and 26.8 million records have links to full-text versions (of which 10.9 million articles are available, full-text for free). Over the last 10 years (ending 31 December 2019), an average of nearly one million new records were added each year. In 2016, NLM changed the indexing system so that publishers are able to directly correct typos and errors in PubMed ind
https://en.wikipedia.org/wiki/VISCII
VISCII is an unofficially-defined modified ASCII character encoding for using the Vietnamese language with computers. It should not be confused with the similarly-named officially registered VSCII encoding. VISCII keeps the 95 printable characters of ASCII unmodified, but it replaces 6 of the 33 control characters with printable characters. It adds 128 precomposed characters. Unicode and the Windows-1258 code page are now used for virtually all Vietnamese computer data, but legacy VSCII and VISCII files may need conversion. History and naming VISCII was designed by the Vietnamese Standardization Working Group (Viet-Std Group) led by Christopher Cuong T. Nguyen, Cuong M. Bui, and Hoc D. Ngo based in Silicon Valley, California in 1992 while they were working with the Unicode consortium to include pre-composed Vietnamese characters in the Unicode standard. VISCII, along with VIQR, was first published in a bilingual report in September 1992, in which it was dubbed the "Vietnamese Standard Code for Information Interchange". The report noted a proliferation in computer usage in Vietnam and the increasing volume of computer-based communications among Vietnamese abroad, that existing applications used vendor-specific encodings which were unable to interoperate with one another, and that standardisation between vendors was therefore necessary. The successful inclusion of composed and precomposed Vietnamese in Unicode 1.0 was the result of the lessons learned from the development of 8-bit VISCII and 7-bit VIQR. The next year, in 1993, Vietnam adopted TCVN 5712, its first national standard in the information technology domain. This defined a character encoding named VSCII, which had been developed by the TCVN Technical Committee on Information Technology (TCVN/TC1), and with its name standing for "Vietnamese Standard Code for Information Interchange". VSCII is incompatible with, and otherwise unrelated to, the earlier-published VISCII. Unlike VISCII, VSCII is a "Vietnamese
https://en.wikipedia.org/wiki/Spoofing%20attack
In the context of information security, and especially network security, a spoofing attack is a situation in which a person or program successfully identifies as another by falsifying data, to gain an illegitimate advantage. Internet Spoofing and TCP/IP Many of the protocols in the TCP/IP suite do not provide mechanisms for authenticating the source or destination of a message, leaving them vulnerable to spoofing attacks when extra precautions are not taken by applications to verify the identity of the sending or receiving host. IP spoofing and ARP spoofing in particular may be used to leverage man-in-the-middle attacks against hosts on a computer network. Spoofing attacks which take advantage of TCP/IP suite protocols may be mitigated with the use of firewalls capable of deep packet inspection or by taking measures to verify the identity of the sender or recipient of a message. Domain name spoofing The term 'Domain name spoofing' (or simply though less accurately, 'Domain spoofing') is used generically to describe one or more of a class of phishing attacks that depend on falsifying or misrepresenting an internet domain name. These are designed to persuade unsuspecting users into visiting a web site other than that intended, or opening an email that is not in reality from the address shown (or apparently shown). Although website and email spoofing attacks are more widely known, any service that relies on domain name resolution may be compromised. Referrer spoofing Some websites, especially pornographic paysites, allow access to their materials only from certain approved (login-) pages. This is enforced by checking the referrer header of the HTTP request. This referrer header, however, can be changed (known as "referrer spoofing" or "Ref-tar spoofing"), allowing users to gain unauthorized access to the materials. Poisoning of file-sharing networks "Spoofing" can also refer to copyright holders placing distorted or unlistenable versions of works on file-sh
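The referrer check described above can be sketched server-side, and the sketch also shows why the check is weak: the Referer header is entirely under the client's control, so a forged request is indistinguishable from a genuine one. All names here (ALLOWED_REFERRERS, is_request_allowed) are hypothetical, not from any real web framework.

```python
# Illustrative referrer-based access control and its limitation.
ALLOWED_REFERRERS = {"https://example.com/login"}

def is_request_allowed(headers):
    """Server-side check: only honor requests whose Referer header names
    an approved login page."""
    return headers.get("Referer") in ALLOWED_REFERRERS

# A legitimate browser request and a spoofed one look identical to the
# server, because the client chooses the header value:
genuine = {"Referer": "https://example.com/login"}
spoofed = {"Referer": "https://example.com/login"}  # forged by the client
direct = {}  # direct visit, no Referer header sent
```

This is why referrer checks are considered an obfuscation measure rather than real access control; authentication tokens tied to a session are not client-forgeable in the same way.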
https://en.wikipedia.org/wiki/Exotoxin
An exotoxin is a toxin secreted by bacteria. An exotoxin can cause damage to the host by destroying cells or disrupting normal cellular metabolism. Exotoxins are highly potent and can cause major damage to the host. Exotoxins may be secreted, or, similar to endotoxins, may be released during lysis of the cell. Gram-negative pathogens may secrete outer membrane vesicles containing lipopolysaccharide endotoxin and some virulence proteins in the bounding membrane along with some other toxins as intra-vesicular contents, thus adding a previously unforeseen dimension to the well-known eukaryote process of membrane vesicle trafficking, which is quite active at the host–pathogen interface. They may exert their effect locally or produce systemic effects. Well-known exotoxins include: botulinum toxin produced by Clostridium botulinum; Corynebacterium diphtheriae toxin, produced during life-threatening symptoms of diphtheria; tetanospasmin produced by Clostridium tetani. The toxic properties of most exotoxins can be inactivated by heat or chemical treatment to produce a toxoid. These retain their antigenic specificity and can be used to produce antitoxins and, in the case of diphtheria and tetanus toxoids, are used as vaccines. Exotoxins are susceptible to antibodies produced by the immune system, but some exotoxins are so toxic that they may be fatal to the host before the immune system has a chance to mount defenses against them. In such cases, antitoxin, anti-serum containing antibodies, can sometimes be injected to provide passive immunity. Types Many exotoxins have been categorized. This classification, while fairly exhaustive, is not the only system used. Other systems for classifying or identifying toxins include: by organism generating the toxin; by organism susceptible to the toxin; by secretion system used to release the toxin (for example, toxic effectors of type VI secretion system); by tissue target type susceptible to the toxin (neurotoxins affect the nervous
https://en.wikipedia.org/wiki/EMI%20%28protocol%29
External Machine Interface (EMI), an extension to Universal Computer Protocol (UCP), is a protocol primarily used to connect to short message service centres (SMSCs) for mobile telephones. The protocol was developed by CMG Wireless Data Solutions, now part of Mavenir. Syntax A typical EMI/UCP exchange looks like this:
^B01/00045/O/30/66677789///1//////68656C6C6F/CE^C
^B01/00041/R/30/A//66677789:180594141236/F3^C
The start of the packet is signaled by ^B (STX, hex 02) and the end with ^C (ETX, hex 03). Fields within the packet are separated by / characters. The first four fields form the mandatory header: the third is the operation type (O for operation, R for result), and the fourth is the operation (here 30, "short message transfer"). The subsequent fields are dependent on the operation. In the first line above, '66677789' is the recipient's address (telephone number) and '68656C6C6F' is the content of the message, in this case the ASCII string "hello". The second line is the response with a matching transaction reference number, where 'A' indicates that the message was successfully acknowledged by the SMSC, and a timestamp is suffixed to the phone number to show time of delivery. The final field is the checksum, calculated simply by summing all bytes in the packet (including slashes) and taking the 8 least significant bits from the result. The full specification is available on the LogicaCMG website developers' forum, but registration is required. Technical limitations The two-digit transaction reference number means that an entity sending text messages can only have 100 outstanding messages (per session); this can limit performance, but only over a slow network and with incorrectly configured applications on one's SMSC (for example one session, with number of windows greater than 100). In practice it does not have any impact on delivery throughput. The EMI UCP documentation does not specify a default alphabet for alphanumeric messages after
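The checksum rule described above (sum all bytes, keep the 8 least significant bits) is simple to implement. The exact span of bytes to sum for a real packet is defined by the UCP specification; this hypothetical helper just checksums whatever string it is handed and renders the result as two uppercase hex digits.

```python
def ucp_checksum(body):
    """EMI/UCP-style checksum sketch: sum the bytes of the given string,
    keep the 8 least significant bits, format as two hex digits."""
    return format(sum(body.encode("ascii")) & 0xFF, "02X")
```

For instance, a body consisting only of the characters "AB" sums to 131 (0x83), so the checksum field would be "83".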
https://en.wikipedia.org/wiki/Non-vascular%20plant
Non-vascular plants are plants without a vascular system consisting of xylem and phloem. Instead, they may possess simpler tissues that have specialized functions for the internal transport of water. Non-vascular plants include two distantly related groups: Bryophytes, an informal group that taxonomists treat as three separate land-plant divisions, namely: Bryophyta (mosses), Marchantiophyta (liverworts), and Anthocerotophyta (hornworts). In all bryophytes, the primary plants are the haploid gametophytes, with the only diploid portion being the attached sporophyte, consisting of a stalk and sporangium. Because these plants lack lignified water-conducting tissues, they cannot become as tall as most vascular plants. Algae, especially green algae. The algae consist of several unrelated groups. Only the groups included in the Viridiplantae are still considered relatives of land plants. These groups are sometimes called "lower plants", referring to their status as the earliest plant groups to evolve, but the usage is imprecise since both groups are polyphyletic and may be used to include vascular cryptogams, such as the ferns and fern allies that reproduce using spores. Non-vascular plants are often among the first species to move into new and inhospitable territories, along with prokaryotes and protists, and thus function as pioneer species. Non-vascular plants do not have a wide variety of specialized tissue types. Mosses and leafy liverworts have structures called phyllids that resemble leaves, but only consist of single sheets of cells with no internal air spaces, no cuticle or stomata, and no xylem or phloem. Consequently, phyllids are unable to control the rate of water loss from their tissues and are said to be poikilohydric. Some liverworts, such as Marchantia, have a cuticle, and the sporophytes of mosses have both cuticles and stomata, which were important in the evolution of land plants. All land plants have a life cycle with an alternation of generatio
https://en.wikipedia.org/wiki/Digital%20image
A digital image is an image composed of picture elements, also known as pixels, each holding a finite, discrete numeric value representing its intensity or gray level, determined as a function of its spatial coordinates, denoted x and y for the x-axis and y-axis, respectively. Depending on whether the image resolution is fixed, it may be of vector or raster type. Raster Raster images have a finite set of digital values, called picture elements or pixels. The digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point. Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form. Raster images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more. They can also be synthesized from arbitrary non-image data, such as mathematical functions or three-dimensional geometric models; the latter being a major sub-area of computer graphics. The field of digital image processing is the study of algorithms for their transformation. Raster file formats Most users come into contact with raster images through digital cameras, which use any of several image file formats. Some digital cameras give access to almost all the data captured by the camera, using a raw image format. The Universal Photographic Digital Imaging Guidelines (UPDIG) suggests these formats be used when possible since raw files produce the best quality images. These file formats allow the photographer and the processing agent the greatest level of control and accuracy for output. Their use is inhibited by the prevalence of proprietary information (trade secrets)
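The raster representation described above, a two-dimensional array of small integers indexed by spatial coordinates, can be sketched with plain Python lists. The helper names are illustrative; real code would typically use NumPy or an imaging library.

```python
# A tiny 8-bit grayscale raster: HEIGHT rows by WIDTH columns of
# quantized intensity values in the range 0..255.
WIDTH, HEIGHT = 4, 3
image = [[0 for _x in range(WIDTH)] for _y in range(HEIGHT)]

def set_pixel(img, x, y, value):
    """Store a quantized intensity, clamped to the 8-bit range."""
    img[y][x] = max(0, min(255, int(value)))

def get_pixel(img, x, y):
    """Read the intensity at spatial coordinates (x, y)."""
    return img[y][x]

set_pixel(image, 2, 1, 300)  # out-of-range value is clamped to 255
```

Note the row-major layout: the y coordinate selects the row and x the column, which is the usual convention for raster maps stored as nested arrays.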
https://en.wikipedia.org/wiki/CIMD
Computer Interface to Message Distribution (CIMD) is a proprietary short message service centre protocol developed by Nokia for their SMSC (now: Nokia Networks). Syntax An example CIMD exchange looks like the following:
<STX>03:007<TAB>021:12345678<TAB>033:hello<TAB><ETX>
<STX>53:007<TAB>021:12345678<TAB>060:971107131212<TAB><ETX>
Each packet starts with STX (hex 02) and ends with ETX (hex 03). The content of the packet consists of fields separated by TAB (hex 09). Each field, in turn, consists of a parameter type, a colon (:), and the parameter value. Note that the last field must also be terminated with a TAB before the ETX. Two-digit parameter types are operation codes and each message must have exactly one. The number after the operation code is the sequence number used to match an operation to its response. The response code (acknowledgement) of the message is equal to the operation code plus 50. In the example above, the operation code 03 means submit message. Field 021 defines the destination address (telephone number), and field 033 carries the user data (content) of the message. Response code 53 with a field 060 time stamp indicates that the message was accepted; if the message failed, the SMSC would reply with field 900 (error code) instead. Supporting software for building CIMD clients is available from Nokia's website; such client tools can be used to submit SMS messages through the message centre. See also Universal Computer Protocol/External Machine Interface (UCP/EMI) Short message peer-to-peer protocol (SMPP) External links Nokia: CIMD specification for SC v7.0 Nokia: CIMD specification for SC v8.0 Software Kannel, Open-Source WAP and SMS Gateway with CIMD 1.3 and CIMD 2.0 support. Ixonos MISP CIMD simulator, Open-Source CIMD v2 compliant server for testing CIMD client applications
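The packet layout described above (STX, an "operation:sequence" field, TAB-separated "parameter:value" fields, a trailing TAB, then ETX) is easy to assemble. This hypothetical helper is not from any Nokia library; it merely reproduces the structure of the submit-message example in the text.

```python
STX, ETX, TAB = "\x02", "\x03", "\x09"

def cimd_packet(opcode, seq, fields):
    """Build a CIMD packet: a two-digit operation code and three-digit
    sequence number, followed by TAB-separated parameter:value fields,
    with the last field also TAB-terminated before ETX."""
    parts = ["%02d:%03d" % (opcode, seq)]
    parts += ["%03d:%s" % (param, value) for param, value in fields]
    return STX + TAB.join(parts) + TAB + ETX

# Operation 03 (submit message) to destination 12345678 with text "hello":
pkt = cimd_packet(3, 7, [(21, "12345678"), (33, "hello")])
```

With these arguments the helper reproduces the first line of the example exchange byte for byte, control characters included.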
https://en.wikipedia.org/wiki/Minkowski%20addition
In geometry, the Minkowski sum of two sets of position vectors A and B in Euclidean space is formed by adding each vector in A to each vector in B: A + B = {a + b : a ∈ A, b ∈ B}. The Minkowski difference (also Minkowski subtraction, Minkowski decomposition, or geometric difference) is the corresponding inverse, where A − B produces a set that could be summed with B to recover A. This is defined as the complement of the Minkowski sum of the complement of A with the reflection of B about the origin. This definition allows a symmetrical relationship between the Minkowski sum and difference. Note that alternately taking the sum and difference with B is not necessarily equivalent. The sum can fill gaps which the difference may not re-open, and the difference can erase small islands which the sum cannot recreate from nothing. In 2D image processing the Minkowski sum and difference are known as dilation and erosion. An alternative definition of the Minkowski difference is sometimes used for computing intersection of convex shapes. This is not equivalent to the previous definition, and is not an inverse of the sum operation. Instead it replaces the vector addition of the Minkowski sum with a vector subtraction. If the two convex shapes intersect, the resulting set will contain the origin. The concept is named for Hermann Minkowski. Example For example, if we have two sets A and B, each consisting of three position vectors (informally, three points), representing the vertices of two triangles in the plane ℝ², with coordinates A = {(1, 0), (0, 1), (0, −1)} and B = {(0, 0), (1, 1), (1, −1)}, then their Minkowski sum is A + B = {(1, 0), (2, 1), (2, −1), (0, 1), (1, 2), (0, −1), (1, −2)}, which comprises the vertices of a hexagon. For Minkowski addition, the zero set {0}, containing only the zero vector, 0, is an identity element: for every subset S of a vector space, S + {0} = S. The empty set is important in Minkowski addition, because the empty set annihilates every other subset: for every subset S of a vector space, its sum with the empty set is empty: S + ∅ = ∅. For another example, consider the Minkowski sums of open or closed balls in the field which
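For finite point sets the definition above can be computed directly by pairwise addition. The helper name below is invented for illustration; for polygons, practical libraries use more efficient convolution-based algorithms.

```python
def minkowski_sum(A, B):
    """Minkowski sum of two finite sets of 2-D position vectors:
    every vector in A added to every vector in B."""
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

# Example triangle (coordinates chosen for illustration):
A = {(1, 0), (0, 1), (0, -1)}
```

The identity and annihilation properties from the text fall out immediately: summing with the zero set {(0, 0)} returns A unchanged, while summing with the empty set yields the empty set.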
https://en.wikipedia.org/wiki/Hausdorff%20measure
In mathematics, Hausdorff measure is a generalization of the traditional notions of area and volume to non-integer dimensions, specifically fractals and their Hausdorff dimensions. It is a type of outer measure, named for Felix Hausdorff, that assigns a number in [0,∞] to each set in ℝⁿ or, more generally, in any metric space. The zero-dimensional Hausdorff measure is the number of points in the set (if the set is finite) or ∞ if the set is infinite. Likewise, the one-dimensional Hausdorff measure of a simple curve in ℝⁿ is equal to the length of the curve, and the two-dimensional Hausdorff measure of a Lebesgue-measurable subset of ℝ² is proportional to the area of the set. Thus, the concept of the Hausdorff measure generalizes the Lebesgue measure and its notions of counting, length, and area. It also generalizes volume. In fact, there are d-dimensional Hausdorff measures for any d ≥ 0, which is not necessarily an integer. These measures are fundamental in geometric measure theory. They appear naturally in harmonic analysis or potential theory. Definition Let (X, ρ) be a metric space. For any subset U ⊆ X, let diam U denote its diameter, that is diam U = sup{ρ(x, y) : x, y ∈ U}, with diam ∅ = 0. Let S be any subset of X, and δ > 0 a real number. Define H^d_δ(S) = inf {Σᵢ (diam Uᵢ)^d : S ⊆ ⋃ᵢ Uᵢ, diam Uᵢ < δ}, where the infimum is over all countable covers of S by sets Uᵢ ⊆ X satisfying diam Uᵢ < δ. Note that H^d_δ(S) is monotone nonincreasing in δ since the larger δ is, the more collections of sets are permitted, making the infimum not larger. Thus, the limit as δ → 0 exists but may be infinite. Let H^d(S) = lim_{δ→0} H^d_δ(S). It can be seen that H^d is an outer measure (more precisely, it is a metric outer measure). By Carathéodory's extension theorem, its restriction to the σ-field of Carathéodory-measurable sets is a measure. It is called the d-dimensional Hausdorff measure of X. Due to the metric outer measure property, all Borel subsets of X are measurable. In the above definition the sets in the covering are arbitrary. However, we can require the covering sets to be open or closed, or in normed spaces even convex; that will yield the same numbers, hence the same
https://en.wikipedia.org/wiki/LISTSERV
The term Listserv (styled by the registered trademark licensee, L-Soft International, Inc., as LISTSERV) has been used to refer to electronic mailing list software applications in general, but is more properly applied to a few early instances of such software, which allows a sender to send one email to a list, which then transparently sends it on to the addresses of the subscribers to the list. The original Listserv software, the Bitnic Listserv (also known as BITNIC LISTSERV) (1984–1986), allowed mailing lists to be implemented on IBM VM mainframes and was developed by Ira Fuchs, Daniel Oberst, and Ricky Hernandez in 1984. This mailing list service was known as Listserv@Bitnic (also known as LISTSERV@BITNIC) and quickly became a key service on the BITNET network. It provided functionality similar to a UNIX Sendmail alias and, as with Sendmail, subscriptions were managed manually. In 1986, Éric Thomas developed an independent application, originally named "Revised Listserv" (also known as "Revised LISTSERV"), which was the first automated mailing list management application. Prior to Revised Listserv, email lists were managed manually. To join or leave a list, people would write to a list administrator and ask to be added or removed, a process that became more time-consuming as discussion lists grew in popularity. By 1987, the users of the Bitnic Listserv had migrated to Thomas' version. Listserv was freeware from 1986 through 1993 and is now a commercial product developed by L-Soft, a company founded by Thomas in 1994. A free version limited to ten lists of up to 500 subscribers each can be downloaded from the company's web site. Several other list-management tools were subsequently developed, such as Lyris ListManager in 1997 (now Aurea Email Marketing), Sympa in 1997, GNU Mailman in 1998, and Gaggle in 2015. Automated mailing list management In 1986, Éric Thomas developed the concept of an automated mailing list manager. Whilst a student at École Centrale
https://en.wikipedia.org/wiki/Phantom%20Entertainment
Phantom Entertainment, Inc. (known as Infinium Labs, Inc. until 2006) was a company founded in 2002 by Tim Roberts which made computer keyboards. However, Phantom was best known for the Phantom, a video game console advertised for Internet gaming on demand in 2004; it was never marketed, leading to suggestions that it was vaporware. The company's website was last updated in late 2011. History Infinium Labs was founded by Tim Roberts in 2002 as a private company. In January 2003 it issued a press release saying that it would soon release a "revolutionary new gaming platform" with an on-demand video-game service, delivering games through an online subscription. The press release had no specific information, but included a computer-generated prototype design. Due to the use of buzzwords and the lack of details, the product was derided nearly from the beginning by news sites such as IGN and Slashdot and in the Penny Arcade webcomic. The hardware and gaming site HardOCP researched and wrote an extensive article on the company and its operation, and was sued in turn. The Phantom placed first in Wired Newss "Vaporware 2004". In 2004, Infinium Labs went public. Roberts left the company in summer 2005 (with millions of shares of stock) before any products had been delivered. He later rejoined as chairman of the board, but in a July 2007 press release he again resigned from the company. Subsequent CEOs included Kevin Bachus (who took the post in August 2005), Greg Koler (in January 2006) and John Landino, who was appointed CEO and interim chief financial officer in July 2008. In September 2006 the company (which had changed its name from Infinium Labs) promised to introduce its Phantom Lapboard product in November 2006, with a gaming service to follow in March 2007. In June 2008, the company released the Lapboard. In August 2007, Phantom Entertainment signed an agreement with ProGames Network to provide Lapboards and "game-service content" in hotels worldwide. The Phantom
https://en.wikipedia.org/wiki/Concyclic%20points
In geometry, a set of points are said to be concyclic (or cocyclic) if they lie on a common circle. A polygon whose vertices are concyclic is called a cyclic polygon, and the circle is called its circumscribing circle or circumcircle. All concyclic points are equidistant from the center of the circle. Three points in the plane that do not all fall on a straight line are concyclic, so every triangle is a cyclic polygon, with a well-defined circumcircle. However, four or more points in the plane are not necessarily concyclic. After triangles, the special case of cyclic quadrilaterals has been most extensively studied.

Perpendicular bisectors

In general the centre O of a circle on which points P and Q lie must be such that OP and OQ are equal distances. Therefore O must lie on the perpendicular bisector of the line segment PQ. For n distinct points there are n(n − 1)/2 bisectors, and the concyclic condition is that they all meet in a single point, the centre O.

Triangles

The vertices of every triangle fall on a circle called the circumcircle. (Because of this, some authors define "concyclic" only in the context of four or more points on a circle.) Several other sets of points defined from a triangle are also concyclic, with different circles; see Nine-point circle and Lester's theorem. The radius of the circle on which lie a set of points is, by definition, the radius of the circumcircle of any triangle with vertices at any three of those points. If the pairwise distances among three of the points are a, b, and c, then the circle's radius is $r = \dfrac{abc}{\sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}.$ The equation of the circumcircle of a triangle, and expressions for the radius and the coordinates of the circle's center, in terms of the Cartesian coordinates of the vertices are given here and here.

Other concyclic points

In any triangle all of the following nine points are concyclic on what is called the nine-point circle: the midpoints of the three edges, the feet of the three altitudes, and the points halfway betwe
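The perpendicular-bisector characterisation of the centre, and the side-length formula for the radius, are easy to check numerically. A minimal sketch (the function names are mine, not from the article):

```python
import math

def circumcircle(p1, p2, p3):
    """Return (center, radius) of the circle through three non-collinear points."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # d is twice the signed area of the triangle; zero means collinear points.
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; no finite circumcircle")
    # Intersection of the perpendicular bisectors (standard closed form).
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = math.hypot(ax - ux, ay - uy)
    return (ux, uy), r

def circumradius(a, b, c):
    """Radius of the circle through three points with pairwise distances a, b, c."""
    return a * b * c / math.sqrt((a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c))
```

For the right triangle with vertices (0, 0), (1, 0), (0, 1), both routes give the same answer: centre (0.5, 0.5) and radius √2⁄2.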
https://en.wikipedia.org/wiki/Point%20in%20polygon
In computational geometry, the point-in-polygon (PIP) problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon. It is a special case of point location problems and finds applications in areas that deal with processing geometrical data, such as computer graphics, computer vision, geographic information systems (GIS), motion planning, and computer-aided design (CAD). An early description of the problem in computer graphics shows two common approaches (ray casting and angle summation) in use as early as 1974. An attempt of computer graphics veterans to trace the history of the problem and some tricks for its solution can be found in an issue of the Ray Tracing News. Ray casting algorithm One simple way of finding whether the point is inside or outside a simple polygon is to test how many times a ray, starting from the point and going in any fixed direction, intersects the edges of the polygon. If the point is on the outside of the polygon the ray will intersect its edge an even number of times. If the point is on the inside of the polygon then it will intersect the edge an odd number of times. The status of a point on the edge of the polygon depends on the details of the ray intersection algorithm. This algorithm is sometimes also known as the crossing number algorithm or the even–odd rule algorithm, and was known as early as 1962. The algorithm is based on a simple observation that if a point moves along a ray from infinity to the probe point and if it crosses the boundary of a polygon, possibly several times, then it alternately goes from the outside to inside, then from the inside to the outside, etc. As a result, after every two "border crossings" the moving point goes outside. This observation may be mathematically proved using the Jordan curve theorem. Limited precision If implemented on a computer with finite precision arithmetics, the results may be incorrect if the point lies very close to that boundary, bec
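The even–odd (crossing-number) rule described above fits in a few lines. A minimal sketch, not a robust production implementation — as the "Limited precision" paragraph notes, points lying very close to an edge need special care:

```python
def point_in_polygon(x, y, polygon):
    """Even-odd rule test for a simple polygon.

    polygon is a list of (x, y) vertices; edges run between consecutive
    vertices, with an implicit closing edge from the last vertex to the first.
    Points exactly on the boundary may report either side, mirroring the
    caveat in the text about edge cases.
    """
    inside = False
    n = len(polygon)
    j = n - 1  # index of the previous vertex
    for i in range(n):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does the horizontal ray from (x, y) toward +x cross edge (j, i)?
        if (yi > y) != (yj > y):
            # x-coordinate where the edge crosses the ray's height
            x_cross = xi + (y - yi) * (xj - xi) / (yj - yi)
            if x < x_cross:
                inside = not inside  # each crossing flips inside/outside
        j = i
    return inside
```

The half-open comparison `(yi > y) != (yj > y)` counts an edge at most once even when the ray passes exactly through a vertex, which is the usual trick for making the crossing count consistent.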
https://en.wikipedia.org/wiki/Probabilistically%20checkable%20proof
In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way. Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP. Definition Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x with length n, which might be false, the prover produces a proof π which states x solves L (x ∈ L, the proof is a string ∈ Σ*). And the verifier is a randomized oracle Turing Machine V (the verifier) that checks the proof π for the statement that x solves L(or x ∈ L) and decides whether to accept the
https://en.wikipedia.org/wiki/Public%20address%20system
A public address system (or PA system) is an electronic system comprising microphones, amplifiers, loudspeakers, and related equipment. It increases the apparent volume (loudness) of a human voice, musical instrument, or other acoustic sound source or recorded sound or music. PA systems are used in any public venue that requires that an announcer, performer, etc. be sufficiently audible at a distance or over a large area. Typical applications include sports stadiums, public transportation vehicles and facilities, and live or recorded music venues and events. A PA system may include multiple microphones or other sound sources, a mixing console to combine and modify multiple sources, and multiple amplifiers and loudspeakers for louder volume or wider distribution. Simple PA systems are often used in small venues such as school auditoriums, churches, and small bars. PA systems with many speakers are widely used to make announcements in public, institutional and commercial buildings and locations—such as schools, stadiums, and passenger vessels and aircraft. Intercom systems, installed in many buildings, have both speakers throughout a building, and microphones in many rooms so occupants can respond to announcements. PA and Intercom systems are commonly used as part of an emergency communication system. The term sound reinforcement system generally means a PA system used specifically for live music or other performances. In Britain any PA system is sometimes colloquially referred to as a Tannoy, after the company of that name, now owned by TC Electronic Group, which supplied many of the PA systems used previously in Britain. Early systems Megaphone From the Ancient Greek era to the nineteenth century, before the invention of electric loudspeakers and amplifiers, megaphone cones were used by people speaking to a large audience, to make their voice project more to a large space or group. Megaphones are typically portable, usually hand-held, cone-shaped acoustic horns
https://en.wikipedia.org/wiki/Quantum%20Corporation
Quantum Corporation is a data storage, management, and protection company that provides technology to store, manage, archive, and protect video and unstructured data throughout the data lifecycle. Their products are used by enterprises, media and entertainment companies, government agencies, big data companies, and life science organizations. Quantum is headquartered in San Jose, California and has offices around the world, supporting customers globally in addition to working with a network of distributors, VARs, DMRs, OEMs and other suppliers. History Quantum was founded in 1980 as Quantum Software Systems Inc. By 1984, it led the market for mid-capacity 5.25-inch drives. That year, a subsidiary was launched called Plus Development to focus on the development of hardcards. Plus Development became a successful designer of 3.5-inch drives with Matsushita Kotobuki Electronics (now Panasonic) as the contract manufacturer. By 1989, Quantum led the compact drive market. The company had 11 new models of 3.5-inch and 2.5-inch drives. It signed distribution agreements with Rein Electronik in Germany and Inelco Peripheriques in France. It also merged its subsidiary Plus Development Corporation into its Commercial Products Division. Quantum was the largest drive producer worldwide in 1994. In 2000, Maxtor agreed to acquire Quantum’s hard disk drive group. In 2004, Quantum became a member of the LTO Consortium after acquiring Certance. In 2012, Quantum announced Q-Cloud, which combines on-premise storage with cloud storage. In 2015, the company released a multi-tier storage product, StorNext 5.3, which supports Q-Cloud and powers the company’s Xcellis workflow storage technology. In 2018, Jamie Lerner became CEO and Quantum shifted focus from hard drives/tape to providing data storage, management, and protection for video and other unstructured data. In 2019, the company added a subscription for cloud-based device management and product, calling it Distributed Cloud S
https://en.wikipedia.org/wiki/Touch%20typing
Touch typing (also called blind typing, or touch keyboarding) is a style of typing. Although the phrase refers to typing without using the sense of sight to find the keys—specifically, a touch typist will know their location on the keyboard through muscle memory—the term is often used to refer to a specific form of touch typing that involves placing the eight fingers in a horizontal row along the middle of the keyboard (the home row) and having them reach for specific other keys. (Under this usage, typists who do not look at the keyboard but do not use home row either are referred to as hybrid typists.) Both two-handed touch typing and one-handed touch typing are possible. Frank Edward McGurrin, a court stenographer from Salt Lake City, Utah who taught typing classes, reportedly invented home row touch typing in 1888. On a standard QWERTY keyboard for English speakers the home row keys are: "ASDF" for the left hand and "JKL;" for the right hand. Most modern computer keyboards have a raised dot or bar on the home keys for the index fingers to help touch typists maintain and rediscover the correct positioning of the fingers on the keyboard keys. History Original layouts for the first few mechanical typewriters were in alphabetical order (ABCDE etc.) Changes were made, mostly responding to suggestions from telegraphists that were among the first users. Common letters were moved towards the center and into the upper row. Z and S are close to each other because the American Morse codes of Z and a common digram SE (both ) are near the same, so the telegraphist often needs to wait for more signals before understanding the content. The view that the layout was intentionally redesigned to slow down the operator, to prevent jamming the mechanism, is widespread but not correct. The calculations for keyboard layout were based on the language being typed and this meant different keyboard layouts would be needed for each language. In English-speaking countries, for example,
https://en.wikipedia.org/wiki/Perfect%20field
In algebra, a field k is perfect if any one of the following equivalent conditions holds:

Every irreducible polynomial over k has distinct roots.
Every irreducible polynomial over k is separable.
Every finite extension of k is separable.
Every algebraic extension of k is separable.
Either k has characteristic 0, or, when k has characteristic p > 0, every element of k is a pth power.
Either k has characteristic 0, or, when k has characteristic p > 0, the Frobenius endomorphism $x \mapsto x^p$ is an automorphism of k.
The separable closure of k is algebraically closed.
Every reduced commutative k-algebra A is a separable algebra; i.e., $A \otimes_k F$ is reduced for every field extension F/k. (see below)

Otherwise, k is called imperfect. In particular, all fields of characteristic zero and all finite fields are perfect. Perfect fields are significant because Galois theory over these fields becomes simpler, since the general Galois assumption of field extensions being separable is automatically satisfied over these fields (see third condition above). Another important property of perfect fields is that they admit Witt vectors. More generally, a ring of characteristic p (p a prime) is called perfect if the Frobenius endomorphism is an automorphism. (When restricted to integral domains, this is equivalent to the above condition "every element of k is a pth power".)

Examples

Examples of perfect fields are: every field of characteristic zero, so $\mathbb{Q}$ and every finite extension, and $\mathbb{C}$; every finite field $\mathbb{F}_q$; every algebraically closed field; the union of a set of perfect fields totally ordered by extension; fields algebraic over a perfect field. Most fields that are encountered in practice are perfect. The imperfect case arises mainly in algebraic geometry in characteristic p > 0. Every imperfect field is necessarily transcendental over its prime subfield (the minimal subfield), because the latter is perfect. An example of an imperfect field is the field of rational functions $\mathbb{F}_p(x)$, since the Frobenius sends $x \mapsto x^p$ and therefore it is not surjective.
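A short degree-counting argument (standard, and not spelled out in the text) shows why the rational function field $\mathbb{F}_p(x)$ — the usual example of an imperfect field — has a non-surjective Frobenius: the element $x$ has no pth root.

```latex
\text{Suppose } x = \left(\tfrac{f}{g}\right)^{p} \text{ with } f, g \in \mathbb{F}_p[x],\ g \neq 0.
\text{ Then } x\, g^{p} = f^{p}, \text{ so comparing degrees: } 1 + p \deg g = p \deg f.
```

The left-hand side is congruent to 1 modulo p while the right-hand side is congruent to 0 modulo p, a contradiction. Hence $x$ is not a pth power and the Frobenius $y \mapsto y^p$ misses it.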
https://en.wikipedia.org/wiki/Lysis%20buffer
A lysis buffer is a buffer solution used for the purpose of breaking open cells for use in molecular biology experiments that analyze the labile macromolecules of the cells (e.g. western blot for protein, or for DNA extraction). Most lysis buffers contain buffering salts (e.g. Tris-HCl) and ionic salts (e.g. NaCl) to regulate the pH and osmolarity of the lysate. Sometimes detergents (such as Triton X-100 or SDS) are added to break up membrane structures. For lysis buffers targeted at protein extraction, protease inhibitors are often included, and in difficult cases may be almost required. Lysis buffers can be used on both animal and plant tissue cells. Choosing a buffer The primary purpose of lysis buffer is isolating the molecules of interest and keeping them in a stable environment. For proteins, for some experiments, the target proteins should be completely denatured, while in some other experiments the target protein should remain folded and functional. Different proteins also have different properties and are found in different cellular environments. Thus, it is essential to choose the best buffer based on the purpose and design of the experiments. The important factors to be considered are: pH, ionic strength, usage of detergent, protease inhibitors to prevent proteolytic processes. For example, detergent addition is necessary when lysing Gram-negative bacteria, but not for Gram-positive bacteria. It is common that a protease inhibitor is added to lysis buffer, along with other enzyme inhibitors of choice, such as a phosphatase inhibitor when studying proteins with phosphorylation. Components Buffer Buffer creates an environment for isolated proteins. Each buffer choice has a specific pH range, so the buffer should be chosen based on whether the experiment's target protein is stable under a certain pH. Also, for buffers with similar pH ranges, it is important to consider whether the buffer is compatible with the experiment's target protein. The table bel
https://en.wikipedia.org/wiki/Regular%20sequence
In commutative algebra, a regular sequence is a sequence of elements of a commutative ring which are as independent as possible, in a precise sense. This is the algebraic analogue of the geometric notion of a complete intersection.

Definitions

For a commutative ring R and an R-module M, an element r in R is called a non-zero-divisor on M if r m = 0 implies m = 0 for m in M. An M-regular sequence is a sequence r1, ..., rd in R such that ri is not a zero-divisor on M/(r1, ..., ri-1)M for i = 1, ..., d. Some authors also require that M/(r1, ..., rd)M is not zero. Intuitively, to say that r1, ..., rd is an M-regular sequence means that these elements "cut M down" as much as possible, when we pass successively from M to M/(r1)M, to M/(r1, r2)M, and so on. An R-regular sequence is called simply a regular sequence. That is, r1, ..., rd is a regular sequence if r1 is a non-zero-divisor in R, r2 is a non-zero-divisor in the ring R/(r1), and so on. In geometric language, if X is an affine scheme and r1, ..., rd is a regular sequence in the ring of regular functions on X, then we say that the closed subscheme {r1=0, ..., rd=0} ⊂ X is a complete intersection subscheme of X. Being a regular sequence may depend on the order of the elements. For example, x, y(1-x), z(1-x) is a regular sequence in the polynomial ring C[x, y, z], while y(1-x), z(1-x), x is not a regular sequence. But if R is a Noetherian local ring and the elements ri are in the maximal ideal, or if R is a graded ring and the ri are homogeneous of positive degree, then any permutation of a regular sequence is a regular sequence. Let R be a Noetherian ring, I an ideal in R, and M a finitely generated R-module. The depth of I on M, written depthR(I, M) or just depth(I, M), is the supremum of the lengths of all M-regular sequences of elements of I. When R is a Noetherian local ring and M is a finitely generated R-module, the depth of M, written depthR(M) or just depth(M), means depthR(m, M); that is, it is
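The order-dependence example can be checked directly (a standard computation, not carried out in the text): in $R = \mathbb{C}[x, y, z]$, the second element of the permuted sequence is already a zero-divisor.

```latex
\text{In } R/(y(1-x)): \qquad z(1-x)\cdot y \;=\; z \cdot y(1-x) \;\equiv\; 0,
\qquad \text{yet } y \not\equiv 0 .
```

So $z(1-x)$ kills the nonzero class of $y$, and $y(1-x),\ z(1-x),\ x$ fails to be regular at its second step. In the original order $x,\ y(1-x),\ z(1-x)$ there is no problem: modulo $x$ the factor $1-x$ becomes a unit, so the sequence reduces in effect to $x, y, z$.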
https://en.wikipedia.org/wiki/Register%20machine
In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All the models are Turing equivalent. Overview The register machine gets its name from its use of one or more "registers". In contrast to the tape and head used by a Turing machine, the model uses multiple, uniquely addressed registers, each of which holds a single positive integer. There are at least four sub-classes found in literature, here listed from most primitive to the most like a computer: Counter machine – the most primitive and reduced theoretical model of a computer hardware. Lacks indirect addressing. Instructions are in the finite state machine in the manner of the Harvard architecture. Pointer machine – a blend of counter machine and RAM models. Less common and more abstract than either model. Instructions are in the finite state machine in the manner of the Harvard architecture. Random-access machine (RAM) – a counter machine with indirect addressing and, usually, an augmented instruction set. Instructions are in the finite state machine in the manner of the Harvard architecture. Random-access stored-program machine model (RASP) – a RAM with instructions in its registers analogous to the Universal Turing machine; thus it is an example of the von Neumann architecture. But unlike a computer, the model is idealized with effectively infinite registers (and if used, effectively infinite special registers such as an accumulator). Compared to a computer, the instruction set is much reduced in number and complexity. Any properly defined register machine model is Turing equivalent. Computational speed is very dependent on the model specifics. In practical computer science, a similar concept known as a virtual machine is sometimes used to minimise dependencies on underlying machine architectures. Such machines are also used for teaching. The term "register machine" is sometimes used to refer
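To make the counter-machine model concrete — the most primitive sub-class listed above — here is a tiny simulator. The instruction names INC/DECJZ/HALT are my own choice; many equivalent minimal instruction sets appear in the literature. The example program adds register 1 into register 0, using register 2 (kept at zero) to get an unconditional jump:

```python
def run(program, registers):
    """Run a counter-machine program to completion and return the registers.

    program is a list of tuples:
      ("INC", r)       increment register r, fall through
      ("DECJZ", r, t)  if register r is zero jump to t, else decrement r
      ("HALT",)        stop
    registers is a list of non-negative integers.
    """
    pc = 0  # the program counter lives in the finite control, not in a register
    while program[pc][0] != "HALT":
        op = program[pc]
        if op[0] == "INC":
            registers[op[1]] += 1
            pc += 1
        else:  # DECJZ
            _, r, target = op
            if registers[r] == 0:
                pc = target
            else:
                registers[r] -= 1
                pc += 1
    return registers

# r0 := r0 + r1 (destroys r1); r2 stays 0, so DECJZ on it always jumps.
ADD = [
    ("DECJZ", 1, 3),  # 0: if r1 == 0 we are done
    ("INC", 0),       # 1: r0 += 1
    ("DECJZ", 2, 0),  # 2: r2 is always 0, so this jumps back to 0
    ("HALT",),        # 3:
]
```

Note how the program is a fixed table consulted by `pc` while only the registers change — exactly the Harvard-architecture separation of instructions and data mentioned for the counter, pointer, and RAM models.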
https://en.wikipedia.org/wiki/Maidenhead%20Locator%20System
The Maidenhead Locator System (a.k.a. QTH Locator and IARU Locator) is a geocode system used by amateur radio operators to succinctly describe their geographic coordinates, which replaced the deprecated QRA locator, which was limited to European contacts. Its purpose is to be concise, accurate, and robust in the face of interference and other adverse transmission conditions. The Maidenhead Locator System can describe locations anywhere in the world. Maidenhead locators are also commonly referred to as QTH locators, grid locators or grid squares, although the "squares" are distorted on any non-equirectangular cartographic projection. Use of the terms QTH locator and QRA locator was initially discouraged, as it caused confusion with the older QRA locator system. The only abbreviation recommended to indicate a Maidenhead reference in Morse code and radio teleprinter transmission was LOC, as in LOC KN28LH. John Morris G4ANB originally devised the system and it was adopted at a meeting of the IARU VHF Working Group in Maidenhead, England in 1980. History Amateur radio contests on VHF and UHF are often scored based on the distance of contacts, typically 1 point per kilometre, so there is a need for amateurs to exchange their locations over the air. To facilitate this, following the growth of the sport in the 1950s, the German QRA locator system was adopted in 1959. The QRA locator system was limited to describing European coordinates, and by the mid-1970s there was growing need for a global locator system. By the time of their April 1980 meeting, in Maidenhead, England, the VHF Working Group had received twenty different proposals to replace the QRA locator grid. That devised by John Morris (G4ANB) was deemed to be the best. At the 1999 IARU Conference in Lillehammer it was decided that the latitude and longitude to be used as a reference for the determining of locators should be based on the World Geodetic System 1984 (WGS-84). Description of the system A Maidenhe
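A six-character locator such as the LOC KN28LH mentioned above can be computed in a few lines. This sketch uses the system's standard subdivision sizes — fields of 20°×10° (letters A–R), squares of 2°×1° (digits 0–9), and subsquares of 5′×2.5′ (letters a–x) — which come from the general definition of the system rather than from the text above, and the function name is mine:

```python
def maidenhead(lat, lon):
    """Six-character Maidenhead locator for a WGS-84 latitude/longitude in degrees."""
    # Shift the origin so both coordinates are non-negative:
    # longitude in [0, 360), latitude in [0, 180), starting at 180 W, 90 S.
    lon += 180.0
    lat += 90.0
    # Field: 20 degrees of longitude by 10 degrees of latitude, letters A-R.
    field = chr(ord("A") + int(lon // 20)) + chr(ord("A") + int(lat // 10))
    # Square: 2 degrees by 1 degree, digits 0-9 (longitude digit first).
    square = str(int((lon % 20) // 2)) + str(int(lat % 10))
    # Subsquare: 5 minutes by 2.5 minutes, letters a-x.
    sub = (chr(ord("a") + int((lon % 2) * 12))
           + chr(ord("a") + int((lat % 1) * 24)))
    return field + square + sub
```

As a sanity check, coordinates near Maidenhead, England (about 51.48° N, 0.72° W) fall in grid square IO91, consistent with the town's well-known locator.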
https://en.wikipedia.org/wiki/Sign%20function
In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that returns the sign of a real number. In mathematical notation the sign function is often represented as $\operatorname{sgn}(x)$ or $\operatorname{sgn} x$.

Definition

The signum function of a real number $x$ is a piecewise function which is defined as follows: $\operatorname{sgn}(x) := \begin{cases} -1 & \text{if } x < 0, \\ 0 & \text{if } x = 0, \\ 1 & \text{if } x > 0. \end{cases}$

Properties

Any real number can be expressed as the product of its absolute value and its sign function: $x = |x| \operatorname{sgn}(x)$. It follows that whenever $x$ is not equal to 0 we have $\operatorname{sgn}(x) = \frac{x}{|x|} = \frac{|x|}{x}$. Similarly, for any real number $x$, $|x| = x \operatorname{sgn}(x)$. We can also ascertain that the signum function is the derivative of the absolute value function, up to (but not including) the indeterminacy at zero. More formally, in integration theory it is a weak derivative, and in convex function theory the subdifferential of the absolute value at 0 is the interval $[-1, 1]$, "filling in" the sign function (the subdifferential of the absolute value is not single-valued at 0). Note, the resultant power of $x$ is 0, similar to the ordinary derivative of $x$. The numbers cancel and all we are left with is the sign of $x$. The signum function is differentiable with derivative 0 everywhere except at 0. It is not differentiable at 0 in the ordinary sense, but under the generalised notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function, which can be demonstrated using the identity $\operatorname{sgn}(x) = 2H(x) - 1$, where $H$ is the Heaviside step function using the standard formalism $H(0) = \tfrac{1}{2}$. Using this identity, it is easy to derive the distributional derivative: $\frac{d \operatorname{sgn}(x)}{dx} = 2 \frac{d H(x)}{dx} = 2\delta(x)$. The Fourier transform of the signum function is $\int_{-\infty}^{\infty} \operatorname{sgn}(x)\, e^{-ikx}\, dx = \mathrm{p.v.}\, \frac{2}{ik}$, where $\mathrm{p.v.}$ means taking the Cauchy principal value. The signum can also be written using the Iverson bracket notation: $\operatorname{sgn}(x) = [x > 0] - [x < 0]$. The signum can also be written using the floor and the absolute value functions: $\operatorname{sgn}(x) = \left\lfloor \frac{x}{|x|+1} \right\rfloor - \left\lfloor \frac{-x}{|x|+1} \right\rfloor$. The signum function has a very simple definition if $0^0$ is accepted to be equal to 1. Then signum can be written for all real numbers as $\operatorname{sgn}(x) = 0^{\,|x|-x} - 0^{\,|x|+x}$. The signum function coincides with the limits $\operatorname{sgn}(x) = \lim_{n \to \infty} \frac{2}{\pi} \arctan(nx)$ and $\operatorname{sgn}(x) = \lim_{n \to \infty} \tanh(nx)$, as well as $\operatorname{sgn}(x) = \lim_{\varepsilon \to 0^{+}} \frac{x}{\sqrt{x^2 + \varepsilon^2}}$. Here, $\tanh$ is the hyperbolic tangent.
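The piecewise definition translates directly to code. A minimal sketch (Python has no built-in sgn; the function names are mine):

```python
def sign(x):
    """Signum of a real number: -1, 0, or 1."""
    # (x > 0) and (x < 0) are booleans, which behave as the integers
    # 1 and 0, so their difference is exactly the piecewise definition.
    return (x > 0) - (x < 0)

def check_identity(x):
    """Check the property x = |x| * sgn(x) for a given real number."""
    return x == abs(x) * sign(x)
```

The boolean-subtraction trick is a direct analogue of writing the signum with Iverson brackets, [x > 0] − [x < 0].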
https://en.wikipedia.org/wiki/DNase%20footprinting%20assay
A DNase footprinting assay is a DNA footprinting technique from molecular biology/biochemistry that detects DNA–protein interaction using the fact that a protein bound to DNA will often protect that DNA from enzymatic cleavage. This makes it possible to locate a protein binding site on a particular DNA molecule. The method uses an enzyme, deoxyribonuclease (DNase, for short), to cut the radioactively end-labeled DNA, followed by gel electrophoresis to detect the resulting cleavage pattern. For example, the DNA fragment of interest may be PCR amplified using a 32P 5' labeled primer, with the result being many DNA molecules with a radioactive label on one end of one strand of each double stranded molecule. Cleavage by DNase will produce fragments. The fragments which are smaller with respect to the 32P-labelled end will migrate further on the gel than the longer fragments. The gel is then used to expose a special photographic film. The cleavage pattern of the DNA in the absence of a DNA binding protein, typically referred to as free DNA, is compared to the cleavage pattern of DNA in the presence of a DNA binding protein. If the protein binds DNA, the binding site is protected from enzymatic cleavage. This protection will result in a clear area on the gel which is referred to as the "footprint". By varying the concentration of the DNA-binding protein, the binding affinity of the protein can be estimated according to the minimum concentration of protein at which a footprint is observed. This technique was developed by David J. Galas and Albert Schmitz at Geneva in 1977.

See also DNA footprinting, DNase I, Toeprinting assay
https://en.wikipedia.org/wiki/HAKMEM
HAKMEM, alternatively known as AI Memo 239, is a February 1972 "memo" (technical report) of the MIT AI Lab containing a wide variety of hacks, including useful and clever algorithms for mathematical computation, some number theory and schematic diagrams for hardware – in Guy L. Steele's words, "a bizarre and eclectic potpourri of technical trivia". Contributors included about two dozen members and associates of the AI Lab. The title of the report is short for "hacks memo", abbreviated to six upper case characters that would fit in a single PDP-10 machine word (using a six-bit character set). History HAKMEM is notable as an early compendium of algorithmic technique, particularly for its practical bent, and as an illustration of the wide-ranging interests of AI Lab people of the time, which included almost anything other than AI research. HAKMEM contains original work in some fields, notably continued fractions. Introduction Compiled with the hope that a record of the random things people do around here can save some duplication of effort -- except for fun. Here is some little known data which may be of interest to computer hackers. The items and examples are so sketchy that to decipher them may require more sincerity and curiosity than a non-hacker can muster. Doubtless, little of this is new, but nowadays it's hard to tell. So we must be content to give you an insight, or save you some cycles, and to welcome further contributions of items, new or used. See also Hacker's Delight AI Memo External links HAKMEM facsimile (PDF) (searchable version)
https://en.wikipedia.org/wiki/Phosphodiester%20bond
In chemistry, a phosphodiester bond occurs when exactly two of the hydroxyl groups (-OH) in phosphoric acid react with hydroxyl groups on other molecules to form two ester bonds. The "bond" involves this linkage C-O-P-O-C. Discussion of phosphodiesters is dominated by their prevalence in DNA and RNA, but phosphodiesters occur in other biomolecules, e.g. acyl carrier proteins. Phosphodiester bonds make up the backbones of DNA and RNA. The phosphate is attached to the 5' carbon. The 3' carbon of one sugar is bonded to the 5' phosphate of the adjacent sugar. Specifically, the phosphodiester bond links the 3' carbon atom of one sugar molecule and the 5' carbon atom of another (hence the name, 3',5' phosphodiester linkage). These saccharide groups are derived from deoxyribose in DNA and ribose in RNA. Phosphodiesters are negatively charged at pH 7. Repulsion between these negative charges influences the conformation of the polynucleic acids. The negative charge attracts histones, metal cations such as magnesium, and polyamines. In order for the phosphodiester bond to be formed and the nucleotides to be joined, the triphosphate or diphosphate forms of the nucleotide building blocks are broken apart to give off the energy required to drive the enzyme-catalyzed reaction. Hydrolysis of phosphodiester bonds is catalyzed by phosphodiesterases, which are involved in repairing DNA sequences. The phosphodiester linkage between two ribonucleotides can be broken by alkaline hydrolysis, whereas the linkage between two deoxyribonucleotides is more stable under these conditions. The relative ease of RNA hydrolysis is an effect of the presence of the 2' hydroxyl group. Enzyme activity A phosphodiesterase is an enzyme that catalyzes the hydrolysis of phosphodiester bonds, for instance a bond in a molecule of cyclic AMP or cyclic GMP. An enzyme that plays an important role in the repair of oxidative DNA damage is the 3'-phosphodiesterase. During the replication of DNA, there is a hole bet
https://en.wikipedia.org/wiki/Image%20segmentation
In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s). When applied to a stack of images, typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of geometry reconstruction algorithms like marching cubes. Applications Some of the practical applications of image segmentation are: Content-based image retrieval Machine vision Medical imaging, including volume rendered images from computed tomography and magnetic resonance imaging. Locate tumors and other pathologies Measure tissue volumes Diagnosis, study of anatomical structure Surgery planning Virtual surgery simulation Intra-surgery navigation Radiotherapy Object detection Pedestrian detection Face detection Brake light detection Locate objects in satellite images (roads, forests, crops, etc.) Recognition Tasks Face recognition Fingerprint recognition Iris recognition Prohibited Item at Airport security checkpoints Traffic control systems Video surveillance Video object
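As a minimal illustration of "assigning a label to every pixel", the sketch below thresholds a grayscale image and then labels 4-connected foreground regions by flood fill. Real segmentation methods are far more sophisticated; the threshold and 4-connectivity here are arbitrary illustrative choices:

```python
def segment(image, threshold):
    """Label 4-connected regions of pixels at or above threshold.

    Returns a grid of the same shape: 0 for background, 1..n for segments.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] >= threshold and labels[sy][sx] == 0:
                next_label += 1
                stack = [(sy, sx)]  # flood-fill the newly found region
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and image[y][x] >= threshold and labels[y][x] == 0):
                        labels[y][x] = next_label
                        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels
```

Pixels sharing a label satisfy the shared-characteristic criterion (here, intensity above the threshold plus adjacency), and the labels collectively cover the image.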
https://en.wikipedia.org/wiki/Particle%20velocity
Particle velocity (denoted v or u) is the velocity of a particle (real or imagined) in a medium as it transmits a wave. The SI unit of particle velocity is the metre per second (m/s). In many cases this is a longitudinal wave of pressure as with sound, but it can also be a transverse wave as with the vibration of a taut string. When applied to a sound wave through a medium of a fluid like air, particle velocity would be the physical speed of a parcel of fluid as it moves back and forth in the direction the sound wave is travelling as it passes. Particle velocity should not be confused with the speed of the wave as it passes through the medium, i.e. in the case of a sound wave, particle velocity is not the same as the speed of sound. The wave moves relatively fast, while the particles oscillate around their original position with a relatively small particle velocity. Particle velocity should also not be confused with the velocity of individual molecules, which depends mostly on the temperature and molecular mass. In applications involving sound, the particle velocity is usually measured using a logarithmic decibel scale called particle velocity level. Mostly pressure sensors (microphones) are used to measure sound pressure, which is then propagated to the velocity field using Green's function. Mathematical definition Particle velocity, denoted v, is defined by v = ∂δ/∂t, where δ is the particle displacement. Progressive sine waves The particle displacement of a progressive sine wave is given by δ(r, t) = δ_m cos(k·r − ωt + φ_δ,0), where δ_m is the amplitude of the particle displacement; φ_δ,0 is the phase shift of the particle displacement; k is the angular wavevector; ω is the angular frequency. It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by v(r, t) = v_m cos(k·r − ωt + φ_v,0), p(r, t) = p_m cos(k·r − ωt + φ_p,0), where v_m = ωδ_m is the amplitude of the particle velocity; φ_v,0 = φ_δ,0 + π/2 is the phase shift of the particle velocity; p_m is the amplitude of the acoustic pressure; φ_p,0 is the phase shift of the acoustic pressure. Taking the La
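The smallness of particle velocity relative to wave speed is easy to check numerically. For a progressive sine wave the velocity amplitude is v_m = ω·δ_m; the 1 kHz frequency and 50 nm displacement amplitude below are illustrative values, not figures from the article:

```python
import math

def particle_velocity_amplitude(frequency_hz, displacement_amplitude_m):
    # v_m = omega * delta_m for a progressive sine wave
    return 2 * math.pi * frequency_hz * displacement_amplitude_m

# A 1 kHz tone with 50 nm displacement amplitude:
v_m = particle_velocity_amplitude(1000.0, 50e-9)
# v_m is on the order of 3e-4 m/s, vastly smaller than the ~343 m/s speed of sound in air.
```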
https://en.wikipedia.org/wiki/Acoustic%20impedance
Acoustic impedance and specific acoustic impedance are measures of the opposition that a system presents to the acoustic flow resulting from an acoustic pressure applied to the system. The SI unit of acoustic impedance is the pascal-second per cubic metre (Pa·s/m³), or in the MKS system the rayl per square metre (rayl/m²), while that of specific acoustic impedance is the pascal-second per metre (Pa·s/m), or in the MKS system the rayl. There is a close analogy with electrical impedance, which measures the opposition that a system presents to the electric current resulting from a voltage applied to the system. Mathematical definitions Acoustic impedance For a linear time-invariant system, the relationship between the acoustic pressure applied to the system and the resulting acoustic volume flow rate through a surface perpendicular to the direction of that pressure at its point of application is given by: p(t) = [R ∗ Q](t), or equivalently by Q(t) = [G ∗ p](t), where p is the acoustic pressure; Q is the acoustic volume flow rate; ∗ is the convolution operator; R is the acoustic resistance in the time domain; G = R⁻¹ is the acoustic conductance in the time domain (R⁻¹ is the convolution inverse of R). Acoustic impedance, denoted Z, is the Laplace transform, or the Fourier transform, or the analytic representation of time domain acoustic resistance: Z(s) = L[R](s) = L[p](s)/L[Q](s), Z(ω) = F[R](ω) = F[p](ω)/F[Q](ω), Z(t) = R_a(t) = [p ∗ Q⁻¹]_a(t), where L is the Laplace transform operator; F is the Fourier transform operator; subscript "a" is the analytic representation operator; Q⁻¹ is the convolution inverse of Q. Acoustic resistance, denoted R, and acoustic reactance, denoted X, are the real part and imaginary part of acoustic impedance respectively: Z(s) = R(s) + iX(s), Z(ω) = R(ω) + iX(ω), Z(t) = R(t) + iX(t), where i is the imaginary unit; in Z(s), R(s) is not the Laplace transform of the time domain acoustic resistance R(t), Z(s) is; in Z(ω), R(ω) is not the Fourier transform of the time domain acoustic resistance R(t), Z(ω) is; in Z(t), R(t) is the time domain acoustic resistance and X(t) is the Hilbert transform of the time domain acoustic resist
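In the sinusoidal steady state the frequency-domain relation reduces to Z(ω) = p(ω)/Q(ω). A small sketch under the simplifying assumption of a lossless plane wave in a uniform tube, where the acoustic impedance is Z = ρc/A (ρ the medium density, c the speed of sound, A the tube cross-section) and ρc alone is the specific acoustic impedance of the medium; the air values used are approximate room-temperature figures:

```python
def plane_wave_acoustic_impedance(density, sound_speed, area_m2):
    """Z = rho*c/A (Pa*s/m^3) for a lossless plane wave in a uniform tube."""
    return density * sound_speed / area_m2

# Specific acoustic impedance of air (~20 degrees C): rho*c, in Pa*s/m (rayl)
z_air = 1.204 * 343.0  # about 413 rayl
```

Halving the tube area doubles Z, matching the electrical analogy: the same "current" (volume flow) forced through a narrower path requires more "voltage" (pressure).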
https://en.wikipedia.org/wiki/Index%20of%20fractal-related%20articles
This is a list of fractal topics, by Wikipedia page. See also the list of dynamical systems and differential equations topics. 1/f noise Apollonian gasket Attractor Box-counting dimension Cantor distribution Cantor dust Cantor function Cantor set Cantor space Chaos theory Coastline Constructal theory Dimension Dimension theory Dragon curve Fatou set Fractal Fractal antenna Fractal art Fractal compression Fractal flame Fractal landscape Fractal transform Fractint Graftal Iterated function system Horseshoe map How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension Julia set Koch snowflake L-system Lebesgue covering dimension Lévy C curve Lévy flight List of fractals by Hausdorff dimension Lorenz attractor Lyapunov fractal Mandelbrot set Menger sponge Minkowski–Bouligand dimension Multifractal analysis Olbers' paradox Perlin noise Power law Rectifiable curve Scale-free network Self-similarity Sierpinski carpet Sierpiński curve Sierpinski triangle Space-filling curve T-square (fractal) Topological dimension Fractals
https://en.wikipedia.org/wiki/Equivalent%20dose
Equivalent dose is a dose quantity H representing the stochastic health effects of low levels of ionizing radiation on the human body which represents the probability of radiation-induced cancer and genetic damage. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the SI system of units, the unit of measure is the sievert (Sv). Application To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent dose, the details of which depend on the radiation type. For applications in radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data on how to calculate equivalent dose from absorbed dose. Equivalent dose is designated by the ICRP as a "limiting quantity"; to specify exposure limits to ensure that "the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". This is a calculated value, as equivalent dose cannot be practically measured, and the purpose of the calculation is to generate a value of equivalent dose for comparison with observed health effects. Calculation Equivalent dose HT is calculated using the mean absorbed dose deposited in body tissue or organ T, multiplied by the radiation weighting factor WR which is dependent on the type and energy of the radiation R. The radiation weighting factor represents the relative biological effectiveness of the radiation and modifies the absorbed dose to take account of the different biological effects of various types and energies of radiation. The ICRP has assigned radiation weighting factors to specified radiation types dependent on their relative biological effectiveness, whic
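The calculation described above is a weighted sum over radiation types, H_T = Σ_R w_R · D_T,R. A sketch using a few ICRP radiation weighting factors (neutrons are omitted because their w_R is a continuous function of energy; the example doses are invented):

```python
# Radiation weighting factors w_R for selected radiation types (ICRP values).
W_R = {"photon": 1, "electron": 1, "proton": 2, "alpha": 20}

def equivalent_dose_sv(absorbed_doses_gy):
    """H_T = sum over radiation types R of w_R * D_T,R (Gy in, Sv out)."""
    return sum(W_R[radiation] * dose for radiation, dose in absorbed_doses_gy.items())

# 1 mGy of photons plus 0.5 mGy of alpha particles to the same tissue:
h_t = equivalent_dose_sv({"photon": 1e-3, "alpha": 0.5e-3})  # 0.011 Sv
```

The example shows why the weighting matters: the smaller alpha absorbed dose dominates the equivalent dose because w_R = 20 for alpha particles.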
https://en.wikipedia.org/wiki/Okazaki%20fragments
Okazaki fragments are short sequences of DNA nucleotides (approximately 150 to 200 base pairs long in eukaryotes) which are synthesized discontinuously and later linked together by the enzyme DNA ligase to create the lagging strand during DNA replication. They were discovered in the 1960s by the Japanese molecular biologists Reiji and Tsuneko Okazaki, along with the help of some of their colleagues. During DNA replication, the double helix is unwound and the complementary strands are separated by the enzyme DNA helicase, creating what is known as the DNA replication fork. Following this fork, DNA primase and DNA polymerase begin to act in order to create a new complementary strand. Because these enzymes can only work in the 5’ to 3’ direction, the two unwound template strands are replicated in different ways. One strand, the leading strand, undergoes a continuous replication process since its template strand has 3’ to 5’ directionality, allowing the polymerase assembling the leading strand to follow the replication fork without interruption. The lagging strand, however, cannot be created in a continuous fashion because its template strand has 5’ to 3’ directionality, which means the polymerase must work backwards from the replication fork. This causes periodic breaks in the process of creating the lagging strand. The primase and polymerase move in the opposite direction of the fork, so the enzymes must repeatedly stop and start again while the DNA helicase breaks the strands apart. Once the fragments are made, DNA ligase connects them into a single, continuous strand. The entire replication process is considered "semi-discontinuous" since one of the new strands is formed continuously and the other is not. During the 1960s, Reiji and Tsuneko Okazaki conducted experiments involving DNA replication in the bacterium Escherichia coli. Before this time, it was commonly thought that replication was a continuous process for both strands, but the discoveries involving E.
https://en.wikipedia.org/wiki/List%20of%20noise%20topics
This is a list of noise topics. Engineering and physics 1/f noise A-weighting Ambient noise level Antenna noise temperature Artificial noise Audio noise reduction Audio system measurements Black noise Blue noise Burst noise Carrier-to-receiver noise density Channel noise level Circuit noise level Colors of noise Comfort noise Comfort noise generator Cosmic noise Crackling noise DBa DBrn Decibel Detection theory Dither Dynamic range Effective input noise temperature Environmental noise Equivalent noise resistance Equivalent pulse code modulation noise Errors and residuals in statistics Fixed pattern noise Flicker noise Gaussian noise Generation-recombination noise Image noise Image noise reduction Intermodulation noise Internet background noise ITU-R 468 noise weighting Jansky noise Johnson–Nyquist noise, Johnson noise Line noise Mode partition noise Neuronal noise Noise Noise (audio) Noise (economic) Noise (electronic) Noise (environmental) Noise (physics) Noise (radio) Noise (video) Noise current Noise-equivalent power Noise figure Noise floor Noise gate Noise generator Noise level Noise measurement Noise power Noise print Noise shaping Noise temperature Noise wall Noise weighting Noisy black Noisy white Peak signal-to-noise ratio Perlin noise Phase noise Photon noise Pink noise Pseudonoise=pseudorandom noise Quantization noise Quantum 1/f noise Radio noise source Random noise Received noise power Red noise Reference noise Salt and pepper noise Shot noise Signal-to-noise ratio Statistical noise Stochastic resonance Tape hiss Thermal noise Underwater acoustics White noise White noise machine Environmental Acoustic noise Artificial noise Aircraft noise Background noise Impulse noise Industrial noise Noise barrier Noise control Noise health effects Noise pollution Noise regulation Roadway noise Train noise Noise reduction Active noise control = anti-noise DBX
https://en.wikipedia.org/wiki/Cryptosystem
In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption). Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however, both "cipher" and "cryptosystem" are used for symmetric key techniques. Formal definition Mathematically, a cryptosystem or encryption scheme can be defined as a tuple (P, C, K, E, D) with the following properties. P is a set called the "plaintext space". Its elements are called plaintexts. C is a set called the "ciphertext space". Its elements are called ciphertexts. K is a set called the "key space". Its elements are called keys. E is a set of functions E_e : P → C. Its elements are called "encryption functions". D is a set of functions D_d : C → P. Its elements are called "decryption functions". For each e ∈ K, there is d ∈ K such that D_d(E_e(p)) = p for all p ∈ P. Note: typically this definition is modified in order to distinguish an encryption scheme as being either a symmetric-key or public-key type of cryptosystem. Examples A classical example of a cryptosystem is the Caesar cipher. A more contemporary example is the RSA cryptosystem. Another example of a cryptosystem is the Advanced Encryption Standard (AES). AES is a widely used symmetric encryption algorithm that has become the standard for securing data in various applications. References Cryptography
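The Caesar cipher fits the tuple definition directly: the plaintext and ciphertext spaces are strings over A-Z, the key space is {0, ..., 25}, and encryption and decryption functions are indexed by the key, with the inverse key for a shift e simply being e itself applied in reverse. A minimal sketch:

```python
from string import ascii_uppercase as ALPHABET  # plaintext/ciphertext symbols

def encrypt(key, plaintext):
    """E_key: shift each letter forward by key positions (mod 26)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in plaintext)

def decrypt(key, ciphertext):
    """D_key: the inverse shift, so decrypt(k, encrypt(k, p)) == p."""
    return encrypt(-key, ciphertext)
```

The round-trip property decrypt(k, encrypt(k, p)) == p is exactly the last condition in the formal definition, with d = e = k.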
https://en.wikipedia.org/wiki/Fluid%20power
Fluid power is the use of fluids under pressure to generate, control, and transmit power. Fluid power is conventionally subdivided into hydraulics (using a liquid such as mineral oil or water) and pneumatics (using a gas such as compressed air or other gases). Although steam is also a fluid, steam power is usually classified separately from fluid power (implying hydraulics or pneumatics). Compressed-air and water-pressure systems were once used to transmit power from a central source to industrial users over extended geographic areas; fluid power systems today are usually within a single building or mobile machine. Fluid power systems perform work by a pressurized fluid bearing directly on a piston in a cylinder or in a fluid motor. A fluid cylinder produces a force resulting in linear motion, whereas a fluid motor produces torque resulting in rotary motion. Within a fluid power system, cylinders and motors (also called actuators) do the desired work. Control components such as valves regulate the system. Elements A fluid power system has a pump driven by a prime mover (such as an electric motor or internal combustion engine) that converts mechanical energy into fluid energy. Pressurized fluid is controlled and directed by valves into an actuator device such as a hydraulic cylinder or pneumatic cylinder, to provide linear motion, or a hydraulic motor or pneumatic motor, to provide rotary motion or torque. Rotary motion may be continuous or confined to less than one revolution. Hydraulic pumps Dynamic (non-positive-displacement) pumps This type is generally used for low-pressure, high-volume flow applications. Since they are not capable of withstanding high pressures, they find little use in the fluid power field. Their maximum pressure is limited to 250–300 psi (1.7–2.0 MPa). This type of pump is primarily used for transporting fluids from one location to another. Centrifugal and axial flow propeller pumps are the two most common types of dynamic pumps. Po
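The linear-motion work done by a cylinder comes down to force = pressure × piston area. A sketch with illustrative numbers (the 50 mm bore and 2 MPa working pressure are assumptions for the example, not values from the text):

```python
import math

def cylinder_force_n(pressure_pa, bore_m):
    """Extend-stroke force of a fluid cylinder: F = p * A, friction ignored."""
    piston_area = math.pi * (bore_m / 2) ** 2
    return pressure_pa * piston_area

force = cylinder_force_n(2.0e6, 0.050)  # roughly 3.9 kN from a 50 mm bore at 2 MPa
```

The quadratic dependence on bore is why modest increases in cylinder diameter yield large gains in force at the same supply pressure.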
https://en.wikipedia.org/wiki/Interesting%20number%20paradox
The interesting number paradox is a humorous paradox which arises from the attempt to classify every natural number as either "interesting" or "uninteresting". The paradox states that every natural number is interesting. The "proof" is by contradiction: if there exists a non-empty set of uninteresting natural numbers, there would be a smallest uninteresting number – but the smallest uninteresting number is itself interesting because it is the smallest uninteresting number, thus producing a contradiction. "Interestingness" concerning numbers is not a formal concept in normal terms, but an innate notion of "interestingness" seems to run among some number theorists. Famously, in a discussion between the mathematicians G. H. Hardy and Srinivasa Ramanujan about interesting and uninteresting numbers, Hardy remarked that the number 1729 of the taxicab he had ridden seemed "rather a dull one", and Ramanujan immediately answered that it is interesting, being the smallest number that is the sum of two cubes in two different ways. Paradoxical nature Attempting to classify all numbers this way leads to a paradox or an antinomy of definition. Any hypothetical partition of natural numbers into interesting and uninteresting sets seems to fail. Since the definition of interesting is usually a subjective, intuitive notion, it should be understood as a semi-humorous application of self-reference in order to obtain a paradox. The paradox is alleviated if "interesting" is instead defined objectively: for example, the smallest natural number that does not appear in an entry of the On-Line Encyclopedia of Integer Sequences (OEIS) was originally found to be 11630 on 12 June 2009. The number fitting this definition later became 12407 from November 2009 until at least November 2011, then 13794 as of April 2012, until it appeared in sequence as of 3 November 2012. Since November 2013, that number was 14228, at least until 14 April 2014. In May 2021, the number was 20067. (This definition
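The well-ordering argument behind the "proof" can be acted out mechanically: however the initial "interesting" set is chosen, repeatedly promoting the smallest uninteresting number (which is now interesting for exactly that reason) eventually exhausts every number below any bound. A toy sketch:

```python
def smallest_uninteresting(interesting, limit):
    """Least natural number below limit that is not in the interesting set."""
    return min(n for n in range(limit) if n not in interesting)

def promote_all(interesting, limit):
    """Promote the smallest uninteresting number until none remain below limit."""
    interesting = set(interesting)
    while any(n not in interesting for n in range(limit)):
        interesting.add(smallest_uninteresting(interesting, limit))
    return interesting
```

Of course the code sidesteps the actual paradox: "is in this set" is an objective predicate, whereas the informal notion of interestingness is what makes the classification self-undermining.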
https://en.wikipedia.org/wiki/Spectral%20theory
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. Mathematical background The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics." There have been three main ways to formulate spectral theory, each of which finds use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann. The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis. The difference can be seen in making the connection with Fourier analysis. The Fourier transform on the real line is in one sen
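In the single-matrix case that the broader theory generalizes, the spectral theorem says a real symmetric matrix factors as A = QΛQᵀ with orthonormal eigenvectors in Q and real eigenvalues on the diagonal of Λ. A numerical sketch (the matrix is an arbitrary example):

```python
import numpy as np

# An arbitrary real symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is NumPy's routine for symmetric/Hermitian matrices;
# it returns eigenvalues in ascending order.
eigenvalues, Q = np.linalg.eigh(A)
reconstructed = Q @ np.diag(eigenvalues) @ Q.T

q_is_orthogonal = np.allclose(Q @ Q.T, np.eye(2))
factorization_recovers_a = np.allclose(reconstructed, A)
```

The "principal axes of an ellipsoid" phrasing in the text is this same fact read geometrically: the quadratic form xᵀAx is diagonalized along the orthonormal eigenvector directions.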
https://en.wikipedia.org/wiki/Regular%20local%20ring
In commutative algebra, a regular local ring is a Noetherian local ring having the property that the minimal number of generators of its maximal ideal is equal to its Krull dimension. In symbols, let A be a Noetherian local ring with maximal ideal m, and suppose a1, ..., an is a minimal set of generators of m. Then by Krull's principal ideal theorem n ≥ dim A, and A is defined to be regular if n = dim A. The appellation regular is justified by the geometric meaning. A point x on an algebraic variety X is nonsingular if and only if the local ring of germs at x is regular. (See also: regular scheme.) Regular local rings are not related to von Neumann regular rings. For Noetherian local rings, there is the following chain of inclusions: universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings. Characterizations There are a number of useful definitions of a regular local ring, one of which is mentioned above. In particular, if A is a Noetherian local ring with maximal ideal m, then the following are equivalent definitions: Let m = (a1, ..., an) where n is chosen as small as possible. Then A is regular if n = dim A, where the dimension is the Krull dimension. The minimal set of generators of m are then called a regular system of parameters. Let k = A/m be the residue field of A. Then A is regular if dim_k(m/m²) = dim A, where the second dimension is the Krull dimension. Let gl dim A be the global dimension of A (i.e., the supremum of the projective dimensions of all A-modules.) Then A is regular if gl dim A < ∞, in which case gl dim A = dim A. Multiplicity one criterion states: if the completion Â of a Noetherian local ring A is unmixed (in the sense that there is no embedded prime divisor of the zero ideal and for each minimal prime p, dim(Â/p) = dim Â) and if the multiplicity of A is one, then A is regular. (The converse is always true: the multiplicity of a regular local ring is one.) This criterion corresponds to a geometric intuition in algebraic geometry that a local ring of an intersection is regular if and only if the intersection is a transversal intersection. In the positive characteristic case, there is the following i
https://en.wikipedia.org/wiki/Address%20space
In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity. For software programs to save and retrieve stored data, each datum must have an address where it can be located. The number of address spaces available depends on the underlying address structure, which is usually limited by the computer architecture being used. Often an address space in a system with virtual memory corresponds to a highest level translation table, e.g., a segment table in IBM System/370. Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous within the address space. For a person's physical address, the address space would be a combination of locations, such as a neighborhood, town, city, or country. Some elements of a data address space may be the same, but if any element in the address is different, addresses in said space will reference different entities. For example, there could be multiple buildings at the same address of "32 Main Street" but in different towns, demonstrating that different towns have different, although similarly arranged, street address spaces. An address space usually provides (or allows) a partitioning to several regions according to the mathematical structure it has. In the case of total order, as for memory addresses, these are simply chunks. Like the hierarchical design of postal addresses, some nested domain hierarchies appear as a directed ordered tree, such as with the Domain Name System or a directory structure. In the Internet, the Internet Assigned Numbers Authority (IANA) allocates ranges of IP addresses to various registries so each can manage their parts of the global Internet address space. Examples Uses of addresses include, but are not limited to the following: Memory addresses for main memory, memory-mapped I/O, as well as for virtual memory; Device
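The size of an address space follows directly from the width of its addresses: n address bits give 2^n distinct addresses. A quick sketch (the 48-bit figure is the virtual-address width commonly implemented on x86-64 processors, an assumption not stated in the text):

```python
def address_space_size(bits):
    """Number of distinct addresses representable with the given address width."""
    return 2 ** bits

ipv4_addresses = address_space_size(32)  # 4,294,967,296 possible IPv4 addresses
virtual_48 = address_space_size(48)      # 256 TiB of byte-addressable virtual memory
```

The same arithmetic covers every example in the article, whether the addresses name network hosts, disk sectors, or memory cells; only the interpretation of an address changes.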