https://en.wikipedia.org/wiki/Invariant%20polynomial
In mathematics, an invariant polynomial is a polynomial that is invariant under a group acting on a vector space . Therefore, is a -invariant polynomial if for all and . Cases of particular importance are for Γ a finite group (in the theory of Molien series, in particular), a compact group, a Lie group or algebraic group. For a basis-independent definition of 'polynomial' nothing is lost by referring to the symmetric powers of the given linear representation of Γ.
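The defining condition, left implicit above because the formulas were stripped, can be written out in conventional notation (a standard reconstruction; the symbols P, Γ, V, g and x are the usual choices and are not fixed by the excerpt):

\[
P \colon V \to k \ \text{is } \Gamma\text{-invariant} \iff P(g \cdot x) = P(x) \quad \text{for all } g \in \Gamma \ \text{and all } x \in V.
\]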
https://en.wikipedia.org/wiki/T-structure
In the branch of mathematics called homological algebra, a t-structure is a way to axiomatize the properties of an abelian subcategory of a derived category. A t-structure on consists of two subcategories of a triangulated category or stable infinity category which abstract the idea of complexes whose cohomology vanishes in positive, respectively negative, degrees. There can be many distinct t-structures on the same category, and the interplay between these structures has implications for algebra and geometry. The notion of a t-structure arose in the work of Beilinson, Bernstein, Deligne, and Gabber on perverse sheaves. Definition Fix a triangulated category with translation functor . A t-structure on is a pair of full subcategories, each of which is stable under isomorphism, which satisfy the following three axioms. If X is an object of and Y is an object of , then If X is an object of , then X[1] is also an object of . Similarly, if Y is an object of , then Y[-1] is also an object of . If A is an object of , then there exists a distinguished triangle such that X is an object of and Y is an object of . It can be shown that the subcategories and are closed under extensions in . In particular, they are stable under finite direct sums. Suppose that is a t-structure on . In this case, for any integer n, we define to be the full subcategory of whose objects have the form , where is an object of . Similarly, is the full subcategory of objects , where is an object of . More briefly, we define With this notation, the axioms above may be rewritten as: If X is an object of and Y is an object of , then and . If A is an object of , then there exists a distinguished triangle such that X is an object of and Y is an object of . The heart or core of the t-structure is the full subcategory consisting of objects contained in both and , that is, The heart of a t-structure is an abelian category (whereas a triangulated category is additive b
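In the customary notation, writing D^{\le 0} and D^{\ge 0} for the two subcategories and D^{\ge 1} := D^{\ge 0}[-1], the three axioms sketched above read as follows (a standard reconstruction; the excerpt's own symbols were lost in extraction):

\[
\begin{aligned}
&\text{(1)}\quad \operatorname{Hom}(X, Y) = 0 \ \text{for all } X \in D^{\le 0},\ Y \in D^{\ge 1};\\
&\text{(2)}\quad D^{\le 0}[1] \subseteq D^{\le 0} \ \text{and} \ D^{\ge 0}[-1] \subseteq D^{\ge 0};\\
&\text{(3)}\quad \text{for every object } A \ \text{there is a distinguished triangle } X \to A \to Y \to X[1] \ \text{with } X \in D^{\le 0},\ Y \in D^{\ge 1}.
\end{aligned}
\]

The heart is then D^{\heartsuit} = D^{\le 0} \cap D^{\ge 0}.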
https://en.wikipedia.org/wiki/All-pairs%20testing
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. Computer scientists and mathematicians both work on algorithms to generate pairwise test suites. Numerous approaches exist to generate such test suites, as there is no efficient exact solution for every possible input and constraint scenario. An early researcher in this area created a short one-hour Combinatorial Testing course that covers the theory of combinatorial testing (of which pairwise testing is a special case) and shows learners how to use a free tool from NIST to generate their own combinatorial test suites quickly. Rationale The most common bugs in a program are generally triggered by either a single input parameter or an interaction between pairs of parameters. Bugs involving interactions between three or more parameters are both progressively less common and also progressively more expensive to find; such testing has as its limit the testing of all possible inputs. Thus, a combinatorial technique for picking test cases like all-pairs testing is a useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage. More rigorously, assume that a test case has parameters given in a set . The ranges of the parameters are given by . Let's assume that . We note that the number of all possible test cases is a . If the code deals with conditions that involve only two parameters at a time, the number of needed test cases can be reduced. To demonstrate, suppose there are X, Y, Z parameters. We can use a predicate of the form of order 3, which
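The reduction can be made concrete with a small sketch. The following Python is a naive greedy pairwise generator over a made-up three-parameter system; the parameter names and values are hypothetical, and production tools (such as the NIST tool mentioned above) use far more sophisticated algorithms:

from itertools import combinations, product

parameters = {                       # hypothetical system under test
    "browser": ["firefox", "chrome", "safari"],
    "os":      ["linux", "windows", "macos"],
    "locale":  ["en", "de"],
}
names = list(parameters)

def pairs_of(case):
    """All (parameter, value) pairs exercised by one complete test case."""
    items = [(n, case[n]) for n in names]
    return {tuple(sorted(p)) for p in combinations(items, 2)}

# Every pair of parameters must have every combination of their values covered.
required = set()
for a, b in combinations(names, 2):
    for va, vb in product(parameters[a], parameters[b]):
        required.add(tuple(sorted([(a, va), (b, vb)])))

tests = []
while required:
    # Greedily pick the full assignment covering the most still-uncovered pairs.
    best = max(product(*parameters.values()),
               key=lambda vals: len(pairs_of(dict(zip(names, vals))) & required))
    case = dict(zip(names, best))
    required -= pairs_of(case)
    tests.append(case)

print(f"exhaustive: {3 * 3 * 2} cases, pairwise: {len(tests)} cases")
for t in tests:
    print(t)

On this toy instance the exhaustive suite has 18 cases, while the greedy pairwise suite covers every two-parameter combination with roughly half as many.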
https://en.wikipedia.org/wiki/Soribada
Soribada () was the first Korean peer-to-peer file-sharing service, launched in 2000 by Sean Yang. The name 'Soribada' means "Ocean of Sound" or "Receiving (downloading) Sound". It was closed in 2002 by court order but continued to be distributed with a stipulation that its users were responsible for any of the files downloaded. On November 5, 2003, Soribada was relaunched as and in July 2004, the website was renewed as a P2P search portal with a paid MP3 service in December 2004. It remains the most widely used P2P system in Korea. The most recent version of Soribada is Soribada 6, which is downloadable on their website. In 2017, the site held their first Soribada Best K-Music Awards, after 17 years since the site's launch. Charges of 2002 Soribada was indicted on copyright infringement charges for the first time. The case was filed by the Korean Association of Phonographic Producers (KAPP), presently the Recording Industry Association of Korea (RIAK). Soribada 2.0 Soribada 2.0 allowed users to swap files without having to establish a link to a centralized server. This mechanism was put in place in order to minimize the risk of legal prosecution. However, KAPP's reply to this solution was that every Soribada 2.0-user was sued instead of the developers. Yang Jung-hwan responded to KAPP's approach by saying, “In a situation where voluminous e-mail services handling over 100MB are being sustained, netizens will find other ways to share music files even with Soribada out of the market.” Paid service From December 2004 to June 2005, Soribada sold nearly 5 million songs through its servers. Searches returned both tracks for sale and free downloads, with the first ones appearing higher on search results. Service stopped: September 2005 Upon being sued again, this time by 30 record labels (led by YBM Seoul Records (now Kakao Entertainment) and JYP Entertainment) and some musicians, Soribada stopped its service in 2005. Yang Jung-hwan and his brother Il-hwan, the cre
https://en.wikipedia.org/wiki/Cardiolipin
Cardiolipin (IUPAC name 1,3-bis(sn-3’-phosphatidyl)-sn-glycerol, "sn" designating stereospecific numbering) is an important component of the inner mitochondrial membrane, where it constitutes about 20% of the total lipid composition. It can also be found in the membranes of most bacteria. The name "cardiolipin" is derived from the fact that it was first found in animal hearts. It was first isolated from beef heart in the early 1940s by Mary C. Pangborn. In mammalian cells, but also in plant cells, cardiolipin (CL) is found almost exclusively in the inner mitochondrial membrane, where it is essential for the optimal function of numerous enzymes that are involved in mitochondrial energy metabolism. Structure Cardiolipin (CL) is a kind of diphosphatidylglycerol lipid. Two phosphatidic acid moieties connect with a glycerol backbone in the center to form a dimeric structure. So it has four alkyl groups and potentially carries two negative charges. As there are four distinct alkyl chains in cardiolipin, the potential for complexity of this molecule species is enormous. However, in most animal tissues, cardiolipin contains 18-carbon fatty alkyl chains with 2 unsaturated bonds on each of them. It has been proposed that the (18:2)4 acyl chain configuration is an important structural requirement for the high affinity of CL to inner membrane proteins in mammalian mitochondria. However, studies with isolated enzyme preparations indicate that its importance may vary depending on the protein examined. Since there are two phosphates in the molecule, each of them can catch one proton. Although it has a symmetric structure, ionizing one phosphate happens at very different levels of acidity than ionizing both: pK1 = 3 and pK2 > 7.5. So under normal physiological conditions (wherein pH is around 7), the molecule may carry only one negative charge. The hydroxyl groups (–OH and –O−) on phosphate would form a stable intramolecular hydrogen bond with the centered glycerol's hydr
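To see why roughly a single negative charge is expected near pH 7, one can apply the Henderson–Hasselbalch relation to the two constants quoted above. This is an illustrative back-of-the-envelope calculation, not taken from the source, and it treats the stated lower bound pK2 = 7.5 as the actual value:

# Rough estimate of cardiolipin's net charge at physiological pH, using the
# Henderson-Hasselbalch relation and the pK values quoted above
# (pK1 = 3, pK2 > 7.5; 7.5 is used here only as a lower bound).
def deprotonated_fraction(pK, pH):
    """Fraction of a monoprotic acid group in its deprotonated (charged) form."""
    return 1.0 / (1.0 + 10 ** (pK - pH))

pH = 7.0
f1 = deprotonated_fraction(3.0, pH)   # first phosphate: essentially fully ionized
f2 = deprotonated_fraction(7.5, pH)   # second phosphate: still mostly protonated
print(f"average net charge ~ -{f1 + f2:.2f}")
# ~ -1.2 with pK2 = 7.5; the higher pK2 actually is, the closer the charge is to -1.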
https://en.wikipedia.org/wiki/Architecture%20description%20language
Architecture description languages (ADLs) are used in several disciplines: system engineering, software engineering, and enterprise modelling and engineering. The system engineering community uses an architecture description language as a language and/or a conceptual model to describe and represent system architectures. The software engineering community uses an architecture description language as a computer language to create a description of a software architecture. In the case of a so-called technical architecture, the architecture must be communicated to software developers; a functional architecture is communicated to various stakeholders and users. Some ADLs that have been developed are: Acme (developed by CMU), AADL (standardized by the SAE), C2 (developed by UCI), SBC-ADL (developed by National Sun Yat-Sen University), Darwin (developed by Imperial College London), and Wright (developed by CMU). Overview The ISO/IEC/IEEE 42010 document, Systems and software engineering—Architecture description, defines an architecture description language as "any form of expression for use in architecture descriptions" and specifies minimum requirements on ADLs. The enterprise modelling and engineering community have also developed architecture description languages catered for at the enterprise level. Examples include ArchiMate (now a standard of The Open Group), DEMO, ABACUS (developed by the University of Technology, Sydney). These languages do not necessarily refer to software components, etc. Most of them, however, refer to an application architecture as the architecture that is communicated to the software engineers. Most of the writing below refers primarily to the perspective from the software engineering community. A standard notation (ADL) for representing architectures helps promote mutual communication, the embodiment of early design decisions, and the creation of a transferable abstraction of a system. Architectures in the past were largely represented by
https://en.wikipedia.org/wiki/Phase-change%20material
A phase-change material (PCM) is a substance which releases/absorbs sufficient energy at phase transition to provide useful heat or cooling. Generally the transition will be from one of the first two fundamental states of matter - solid and liquid - to the other. The phase transition may also be between non-classical states of matter, such as the conformity of crystals, where the material goes from conforming to one crystalline structure to conforming to another, which may be a higher or lower energy state. The energy released/absorbed by phase transition from solid to liquid, or vice versa, the heat of fusion is generally much higher than the sensible heat. Ice, for example, requires 333.55 J/g to melt, but then water will rise one degree further with the addition of just 4.18 J/g. Water/ice is therefore a very useful phase change material and has been used to store winter cold to cool buildings in summer since at least the time of the Achaemenid Empire. By melting and solidifying at the phase-change temperature (PCT), a PCM is capable of storing and releasing large amounts of energy compared to sensible heat storage. Heat is absorbed or released when the material changes from solid to liquid and vice versa or when the internal structure of the material changes; PCMs are accordingly referred to as latent heat storage (LHS) materials. There are two principal classes of phase-change material: organic (carbon-containing) materials derived either from petroleum, from plants or from animals; and salt hydrates, which generally either use natural salts from the sea or from mineral deposits or are by-products of other processes. A third class is solid to solid phase change. PCMs are used in many different commercial applications where energy storage and/or stable temperatures are required, including, among others, heating pads, cooling for telephone switching boxes, and clothing. By far the biggest potential market is for building heating and cooling. In this ap
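The ice/water figures quoted above already show the size of the effect; the following quick calculation, which uses only the numbers given in the text, compares latent and sensible heat storage for one kilogram of water:

# Compare latent vs. sensible heat storage for water/ice, using the figures
# quoted above: heat of fusion 333.55 J/g, specific heat ~4.18 J/(g*K).
mass_g = 1000.0                      # 1 kg of water
heat_of_fusion = 333.55              # J/g, absorbed on melting at 0 degC
specific_heat = 4.18                 # J/(g*K), sensible heating of liquid water

latent = mass_g * heat_of_fusion             # energy stored by melting alone
sensible_per_K = mass_g * specific_heat      # energy per 1 K temperature rise

print(f"melting 1 kg of ice absorbs {latent / 1000:.1f} kJ")
print(f"equivalent to warming the same water by {latent / sensible_per_K:.0f} K")

Melting alone stores about as much energy as warming the same mass of liquid water by roughly 80 K, which is why the phase change dominates over sensible heat in such storage systems.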
https://en.wikipedia.org/wiki/Immunoprecipitation
Immunoprecipitation (IP) is the technique of precipitating a protein antigen out of solution using an antibody that specifically binds to that particular protein. This process can be used to isolate and concentrate a particular protein from a sample containing many thousands of different proteins. Immunoprecipitation requires that the antibody be coupled to a solid substrate at some point in the procedure. Types Individual protein immunoprecipitation (IP) Involves using an antibody that is specific for a known protein to isolate that particular protein out of a solution containing many different proteins. These solutions will often be in the form of a crude lysate of a plant or animal tissue. Other sample types could be body fluids or other samples of biological origin. Protein complex immunoprecipitation (Co-IP) Immunoprecipitation of intact protein complexes (i.e. antigen along with any proteins or ligands that are bound to it) is known as co-immunoprecipitation (Co-IP). Co-IP works by selecting an antibody that targets a known protein that is believed to be a member of a larger complex of proteins. By targeting this known member with an antibody it may become possible to pull the entire protein complex out of solution and thereby identify unknown members of the complex. This works when the proteins involved in the complex bind to each other tightly, making it possible to pull multiple members of the complex out of the solution by latching onto one member with an antibody. This concept of pulling protein complexes out of solution is sometimes referred to as a "pull-down". Co-IP is a powerful technique that is used regularly by molecular biologists to analyze protein–protein interactions. A particular antibody often selects for a subpopulation of its target protein that has the epitope exposed, thus failing to identify any proteins in complexes that hide the epitope. This can be seen in that it is rarely possible to precipitate even half of a given pro
https://en.wikipedia.org/wiki/Programming%20in%20the%20large%20and%20programming%20in%20the%20small
In software engineering, programming in the large and programming in the small refer to two different aspects of writing software, namely, designing a larger system as a composition of smaller parts, and creating those smaller parts by writing lines of code in a programming language, respectively. The terms were coined by Frank DeRemer and Hans Kron in their 1975 paper "Programming-in-the-large versus programming-in-the-small", in which they argue that the two are essentially different activities, and that typical programming languages, and the practice of structured programming, provide good support for the latter, but not for the former. This may be compared to the later Ousterhout's dichotomy, which distinguishes between system programming languages (for components) and scripting languages (for glue code, connecting components). Description Fred Brooks identifies that the way an individual program is created is different from how a programming systems product is created. The former likely does one relatively simple task well. It is probably coded by a single engineer, is complete in itself, and is ready to run on the system on which it was developed. The programming activity was probably fairly short-lived as simple tasks are quick and easy to complete. This is the endeavor that DeRemer and Kron describe as programming in the small. Compare with the activities associated with a programming systems project, again as identified by Brooks. Such a project is typified by medium-sized or large industrial teams working on the project for many months to several years. The project is likely to be split up into several or hundreds of separate modules which individually are of a similar complexity to the individual programs described above. However, each module will define an interface to its surrounding modules. Brooks describes how programming systems projects are typically run as formal projects that follow industry best practices and will comprise testing, do
https://en.wikipedia.org/wiki/E8%20lattice
In mathematics, the E lattice is a special lattice in R. It can be characterized as the unique positive-definite, even, unimodular lattice of rank 8. The name derives from the fact that it is the root lattice of the E root system. The norm of the E lattice (divided by 2) is a positive definite even unimodular quadratic form in 8 variables, and conversely such a quadratic form can be used to construct a positive-definite, even, unimodular lattice of rank 8. The existence of such a form was first shown by H. J. S. Smith in 1867, and the first explicit construction of this quadratic form was given by Korkin and Zolotarev in 1873. The E lattice is also called the Gosset lattice after Thorold Gosset who was one of the first to study the geometry of the lattice itself around 1900. Lattice points The E lattice is a discrete subgroup of R of full rank (i.e. it spans all of R). It can be given explicitly by the set of points Γ ⊂ R such that all the coordinates are integers or all the coordinates are half-integers (a mixture of integers and half-integers is not allowed), and the sum of the eight coordinates is an even integer. In symbols, It is not hard to check that the sum of two lattice points is another lattice point, so that Γ is indeed a subgroup. An alternative description of the E lattice which is sometimes convenient is the set of all points in Γ′ ⊂ R such that all the coordinates are integers and the sum of the coordinates is even, or all the coordinates are half-integers and the sum of the coordinates is odd. In symbols, The lattices Γ and Γ′ are isomorphic and one may pass from one to the other by changing the signs of any odd number of half-integer coordinates. The lattice Γ is sometimes called the even coordinate system for E while the lattice Γ′ is called the odd coordinate system. Unless we specify otherwise we shall work in the even coordinate system. Properties The E lattice Γ can be characterized as the unique lattice in R with the following properti
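The two coordinate descriptions given in prose above ("In symbols, ...") correspond to standard formulas; the following is a conventional reconstruction rather than the article's original displays:

\[
\Gamma = \Bigl\{ x \in \mathbb{Z}^8 \cup \bigl(\mathbb{Z}+\tfrac12\bigr)^8 \ :\ \textstyle\sum_{i=1}^{8} x_i \equiv 0 \pmod 2 \Bigr\},
\]
\[
\Gamma' = \Bigl\{ x \in \mathbb{Z}^8 : \textstyle\sum_i x_i \in 2\mathbb{Z} \Bigr\} \;\cup\; \Bigl\{ x \in \bigl(\mathbb{Z}+\tfrac12\bigr)^8 : \textstyle\sum_i x_i \in 2\mathbb{Z}+1 \Bigr\}.
\]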
https://en.wikipedia.org/wiki/Beta%20normal%20form
In the lambda calculus, a term is in beta normal form if no beta reduction is possible. A term is in beta-eta normal form if neither a beta reduction nor an eta reduction is possible. A term is in head normal form if there is no beta-redex in head position. The normal form of a term, if one exists, is unique (as a corollary of the Church–Rosser theorem). However, a term may have more than one head normal form. Beta reduction In the lambda calculus, a beta redex is a term of the form: . A redex is in head position in a term , if has the following shape (note that application has higher priority than abstraction, and that the formula below is meant to be a lambda-abstraction, not an application): , where and . A beta reduction is an application of the following rewrite rule to a beta redex contained in a term: where is the result of substituting the term for the variable in the term . A head beta reduction is a beta reduction applied in head position, that is, of the following form: , where and . Any other reduction is an internal beta reduction. A normal form is a term that does not contain any beta redex, i.e. that cannot be further reduced. A head normal form is a term that does not contain a beta redex in head position, i.e. that cannot be further reduced by a head reduction. When considering the simple lambda calculus (viz. without the addition of constant or function symbols, meant to be reduced by additional delta rule), head normal forms are the terms of the following shape: , where is a variable, and . A head normal form is not always a normal form, because the applied arguments need not be normal. However, the converse is true: any normal form is also a head normal form. In fact, the normal forms are exactly the head normal forms in which the subterms are themselves normal forms. This gives an inductive syntactic description of normal forms. There is also the notion of weak head normal form: a term in weak head normal form is either a
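The stripped formulas can be restated in standard notation, matching the prose definitions above (a conventional reconstruction):

\[
(\lambda x.\, M)\, N \ \longrightarrow_{\beta}\ M[x := N],
\]
a head redex sits in a term of the shape
\[
\lambda x_1 \ldots x_n.\, (\lambda x.\, M)\, N\, M_1 \cdots M_m \qquad (n \ge 0,\ m \ge 0),
\]
and the head normal forms of the pure calculus are exactly the terms
\[
\lambda x_1 \ldots x_n.\, y\, M_1 \cdots M_m \qquad (y \ \text{a variable},\ n, m \ge 0).
\]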
https://en.wikipedia.org/wiki/Information%20model
An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide sharable, stable, and organized structure of information requirements or knowledge for the domain context. Overview The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility. Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas. Information modeling languages In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. He stressed that it was a "semantic" modelling technique and independent of any database mode
https://en.wikipedia.org/wiki/List%20of%20Unix%20daemons
This is a list of Unix daemons that are found on various Unix-like operating systems. Unix daemons typically have a name ending with a d. See also List of Unix commands
https://en.wikipedia.org/wiki/Event-driven%20process%20chain
An event-driven process chain (EPC) is a type of flow chart for business process modeling. EPC can be used to configure enterprise resource planning execution, and for business process improvement. It can be used to control an autonomous workflow instance in work sharing. The event-driven process chain method was developed within the framework of Architecture of Integrated Information Systems (ARIS) by August-Wilhelm Scheer at the Institut für Wirtschaftsinformatik, Universität des Saarlandes (Institute for Business Information Systems at the University of Saarland) in the early 1990s. Overview Businesses use event-driven process chain diagrams to lay out business process workflows, originally in conjunction with SAP R/3 modeling, but now more widely. It is used by many companies for modeling, analyzing, and redesigning business processes. The event-driven process chain method was developed within the framework of Architecture of Integrated Information Systems (ARIS). As such it forms the core technique for modeling in ARIS, which serves to link the different views in the so-called control view. To quote from a 2006 publication on event-driven process chains: An Event-driven process chain (EPC) is an ordered graph of events and functions. It provides various connectors that allow alternative and parallel execution of processes. Furthermore it is specified by the usages of logical operators, such as OR, AND, and XOR. A major strength of EPC is claimed to be its simplicity and easy-to-understand notation. This makes EPC a widely acceptable technique to denote business processes. The statement that event-driven process chains are ordered graphs is also found in other directed graphs for which no explicit node ordering is provided. No restrictions actually appear to exist on the possible structure of EPCs, but nontrivial structures involving parallelism have ill-defined execution semantics; in this respect they resemble UML activity diagrams. Several scientific a
https://en.wikipedia.org/wiki/128-bit%20computing
General home computing and gaming utility emerged at 8-bit (but not at 1-bit or 4-bit) word sizes, as 2^8 = 256 words became possible. Thus, early 8-bit CPUs (Zilog Z80, 6502, Intel 8088, introduced 1976-1981 by Commodore, Tandy Corporation, Apple and IBM) inaugurated the era of personal computing. Many 16-bit CPUs already existed in the mid-1970s. Over the next 30 years, the shift to 16-bit, 32-bit and 64-bit computing allowed, respectively, 2^16 = 65,536 unique words, 2^32 = 4,294,967,296 unique words and 2^64 = 18,446,744,073,709,551,616 unique words, each step offering a meaningful advantage until 64 bits was reached. Further advantages evaporate from 64-bit to 128-bit computing, as the number of possible values in a register increases from roughly 18 quintillion () to 340 undecillion (), because so many unique values are never utilized. Thus, with a register that can store 2^128 values, no advantages over 64-bit computing accrue to either home computing or gaming. CPUs with a larger word size also require more circuitry, are physically larger, require more power and generate more heat. Thus, there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, although a number of processors do have specialized ways to operate on 128-bit chunks of data, and uses are given below. Representation A processor with 128-bit byte addressing could directly address up to 2^128 (over ) bytes, which would greatly exceed the total data captured, created, or replicated on Earth as of 2018, which has been estimated to be around 33 zettabytes (over 2^74 bytes). A 128-bit register can store 2^128 (over 3.40 × 10^38) different values. The range of integer values that can be stored in 128 bits depends on the integer representation used. With the two most common representations, the range is 0 through 340,282,366,920,938,463,463,374,607,431,768,211,455 (2^128 − 1) for representation as an (unsigned) binary number, and −170,141,183,460,469,231,731,6
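The word-size counts discussed above are easy to reproduce exactly; Python integers are arbitrary precision, so the 128-bit values print in full:

# Number of distinct values representable at each word size, and the
# signed/unsigned ranges for 128 bits (two's-complement assumed for signed).
for bits in (8, 16, 32, 64, 128):
    print(f"{bits:3d}-bit: {2**bits:,} distinct values")

bits = 128
print(f"unsigned range: 0 .. {2**bits - 1:,}")
print(f"signed range  : {-(2**(bits - 1)):,} .. {2**(bits - 1) - 1:,}")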
https://en.wikipedia.org/wiki/PL-4
PL-4 or POS-PHY Level 4 was the name of the interface that the interface SPI-4.2 is based on. It was proposed by PMC-Sierra to the Optical Internetworking Forum. The name means Packet Over SONET Physical layer level 4. PL-4 was developed by PMC-Sierra in conjunction with the Saturn Development Group. Context There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over 1 or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in the Packet over SONET and Optical Transport Network, which are the principal protocols used to carry the internet between cities. Applications PL-4 was designed to be used in systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet based systems. A typical application of PL-4 (SPI-4.2) is to connect a framer device to a network processor. It has been widely adopted by the high speed networking marketplace. Technical details The interface consists of (per direction): sixteen LVDS pairs for the data path one LVDS pair for control one LVDS pair for clock at half of the data rate two FIFO status lines running at 1/8 of the data rate one status clock The clocking is Source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 (PL-4) have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets. Trivia The name is an acronym of an acronym of an acronym as the P in PL stands for POS-PHY and the S in POS-PHY stands for SONET (Synch
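From the per-direction signal list above one can estimate the aggregate throughput. The sketch below assumes the data pins are sampled on both clock edges (double data rate), which is consistent with the forwarded clock running at half the data rate; the 700 MHz figure is the nominal clock mentioned in the text:

# Rough SPI-4.2 / PL-4 throughput estimate from the signal list above.
# Assumption (not stated verbatim in the source): data is sampled on both
# clock edges (DDR), matching "clock at half of the data rate".
clock_hz = 700e6
data_pairs = 16                            # sixteen LVDS pairs for the data path
bits_per_pair_per_second = 2 * clock_hz    # DDR: two bits per pair per clock cycle

aggregate = data_pairs * bits_per_pair_per_second
print(f"aggregate raw bandwidth ~ {aggregate / 1e9:.1f} Gbit/s per direction")
# ~22.4 Gbit/s, comfortably above the ~10 Gbit/s payload of OC-192 or 10 Gigabit
# Ethernet once protocol and control overhead are accounted for.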
https://en.wikipedia.org/wiki/Abrizio
Abrizio was a fabless semiconductor company which made switching fabric chip sets (integrated circuits for computer network switches). Their chip set, the TT1, was used by several large system development companies as the core switch fabric in their high value communication systems. Founding Abrizio was founded in 1997 by Professor Nick McKeown as a spinout of the Tiny-tera project at Stanford University. It received US$6M of funding from Benchmark Capital and Sequoia Capital. Product and technology The product name TT1 referred to "Tiny Tera", meaning a small, highly integrated semiconductor implementation of a terabit/s capacity switching fabric. The Stanford program demonstrated a scalable packet switch that had a terabit-per-second performance in CMOS. Abrizio was the first to introduce a more optimized input-buffered, output-queued switch fabric, which addressed the memory efficiency issue of similar technologies. Its technology made better use of memory, making the TT1 a less expensive product. Abrizio's key technology was a sophisticated implementation of a wavefront arbiter, which allowed the switch to make complex arbitration decisions very quickly. Senior leadership In 1998, Anders Swahn, who had been executive vice president of sales and marketing at Allied-Telesyn Inc., joined Abrizio as chief executive. Abrizio's corporate colors were purple and yellow. The CTO was McKeown, who was taking a leave from his professorship at Stanford. Zubair Hussein was the V.P. of Engineering. Acquisition Abrizio was acquired on August 24, 1999, by PMC-Sierra for 4,352,000 shares of PMC-Sierra stock, worth at that time $400M. After the acquisition, the former Abrizio development team completed the TTx switch chip set. In the wake of the bursting of the telecom bubble, PMC-Sierra laid off most of the former Abrizio team in 2001.
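For readers unfamiliar with wavefront arbitration, the following toy sketch illustrates the general technique only; it is not Abrizio's implementation, and the rotating-priority scheme here is a simplification:

# A toy wavefront arbiter for an N x N crossbar: requests[i][j] is True if
# input i has a cell queued for output j.  Cells on the same anti-diagonal
# never share a row or a column, so hardware can evaluate each diagonal
# "wave" fully in parallel.
def wavefront_arbitrate(requests, start=0):
    n = len(requests)
    row_free = [True] * n
    col_free = [True] * n
    grants = []
    for wave in range(n):                  # sweep the n anti-diagonals in order
        for i in range(n):
            j = (i + wave + start) % n     # 'start' rotates priority between calls
            if requests[i][j] and row_free[i] and col_free[j]:
                grants.append((i, j))
                row_free[i] = False
                col_free[j] = False
    return grants

requests = [
    [True,  True,  False, False],
    [False, True,  True,  False],
    [False, False, True,  True ],
    [True,  False, False, True ],
]
print(wavefront_arbitrate(requests, start=0))   # one conflict-free grant per input/output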
https://en.wikipedia.org/wiki/Chalaza
The chalaza (; from Greek "hailstone"; plural chalazas or chalazae, ) is a structure inside bird eggs and plant ovules. It attaches or suspends the yolk or nucellus within the larger structure. In animals In the eggs of most birds (not of the reptiles), the chalazae are two spiral bands of tissue that suspend the yolk in the center of the white (the albumen). The function of the chalazae is to hold the yolk in place. In baking, the chalazae are sometimes removed in order to ensure a uniform texture. In plants In plant ovules, the chalaza is located opposite the micropyle opening of the integuments. It is the tissue where the integuments and nucellus are joined. Nutrients from the plant travel through vascular tissue in the funiculus and outer integument through the chalaza into the nucellus. During the development of the embryo sac inside a flowering plant ovule, the three cells at the chalazal end become the antipodal cells. Chalazogamy In most flowering plants, the pollen tube enters the ovule through the micropyle opening in the integuments for fertilization (porogamy). In chalazogamous fertilization, the pollen tubes penetrate the ovule through the chalaza rather than the micropyle opening. Chalazogamy was first discovered in monoecious plant species of the family Casuarinaceae by Melchior Treub, but has since then also been observed in others, for example in pistachio and walnut. Notes Oology Plant morphology Plant anatomy Pollination
https://en.wikipedia.org/wiki/Variogram
In spatial statistics the theoretical variogram, denoted , is a function describing the degree of spatial dependence of a spatial random field or stochastic process . The semivariogram is half the variogram. In the case of a concrete example from the field of gold mining, a variogram will give a measure of how much two samples taken from the mining area will vary in gold percentage depending on the distance between those samples. Samples taken far apart will vary more than samples taken close to each other. Definition The semivariogram was first defined by Matheron (1963) as half the average squared difference between the values at points ( and ) separated at distance . Formally where is a point in the geometric field , and is the value at that point. The triple integral is over 3 dimensions. is the separation distance (e.g., in meters or km) of interest. For example, the value could represent the iron content in soil, at some location (with geographic coordinates of latitude, longitude, and elevation) over some region with element of volume . To obtain the semivariogram for a given , all pairs of points at that exact distance would be sampled. In practice it is impossible to sample everywhere, so the empirical variogram is used instead. The variogram is twice the semivariogram and can be defined, equivalently, as the variance of the difference between field values at two locations ( and , note change of notation from to and to ) across realizations of the field (Cressie 1993): If the spatial random field has constant mean , this is equivalent to the expectation for the squared increment of the values between locations and (Wackernagel 2003) (where and are points in space and possibly time): In the case of a stationary process, the variogram and semivariogram can be represented as a function of the difference between locations only, by the following relation (Cressie 1993): If the process is furthermore isotropic, then the variogram and se
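The dropped formulas correspond to the standard definitions. Writing Z for the random field, s_1 and s_2 for locations, and h for the separation, they read (a conventional reconstruction):

\[
\gamma(s_1, s_2) = \tfrac{1}{2}\, \mathbb{E}\bigl[\bigl(Z(s_1) - Z(s_2)\bigr)^2\bigr]
\qquad \text{(semivariogram, Matheron)},
\]
\[
2\gamma(s_1, s_2) = \operatorname{Var}\bigl(Z(s_1) - Z(s_2)\bigr)
\qquad \text{(variogram, Cressie)},
\]
and the two agree when the field has constant mean, since then the increment has zero expectation. For a stationary process the (semi)variogram depends only on the difference, \gamma(s_1, s_2) = \gamma(s_1 - s_2) = \gamma(h), and for an isotropic process only on the distance \|h\|.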
https://en.wikipedia.org/wiki/Wake%20Shield%20Facility
Wake Shield Facility (WSF) was a NASA experimental science platform that was placed in low Earth orbit by the Space Shuttle. It was a diameter, free-flying stainless steel disk. The WSF was deployed using the Space Shuttle's Canadarm. The WSF then used nitrogen gas thrusters to position itself about behind the Space Shuttle, which was at an orbital altitude of over , within the thermosphere, where the atmosphere is exceedingly tenuous. The WSF's orbital speed was at least three to four times faster than the speed of thermospheric gas molecules in the area, which resulted in a cone behind the WSF that was entirely free of gas molecules. The WSF thus created an ultrahigh vacuum in its wake. The resulting vacuum was used to study epitaxial film growth. The WSF operated at a distance from the Space Shuttle to avoid contamination from the Shuttle's rocket thrusters and water dumped overboard from the Shuttle's Waste Collection System (space toilet). After two days, the Space Shuttle would rendezvous with the WSF and again use its robotic arm to collect the WSF and to store it in the Shuttle's payload bay for return to Earth. The WSF was flown into space three times, aboard Shuttle flights STS-60 (WSF-1), STS-69 (WSF-2) and STS-80 (WSF-3). During STS-60, some hardware issues were experienced, and, as a result, the WSF-1 was only deployed at the end of the Shuttle's Canadarm. During the later missions, the WSF was deployed as a free-flying platform in the wake of the Shuttle. These flights proved the vacuum wake concept and realized the space epitaxy concept by growing the first-ever crystalline semiconductor thin films in the vacuum of space. These included gallium arsenide (GaAs) and aluminum gallium arsenide (AlGaAs) depositions. These experiments have been used to develop better photocells and thin films. Among the potential resulting applications are artificial retinas made from tiny ceramic detectors. Pre-flight calculations suggested that the pressure on the w
https://en.wikipedia.org/wiki/Markov%20chain%20geostatistics
Markov chain geostatistics uses Markov chain spatial models, simulation algorithms and associated spatial correlation measures (e.g., transiogram) based on the Markov chain random field theory, which extends a single Markov chain into a multi-dimensional random field for geostatistical modeling. A Markov chain random field is still a single spatial Markov chain. The spatial Markov chain moves or jumps in a space and decides its state at any unobserved location through interactions with its nearest known neighbors in different directions. The data interaction process can be well explained as a local sequential Bayesian updating process within a neighborhood. Because single-step transition probability matrices are difficult to estimate from sparse sample data and are impractical in representing the complex spatial heterogeneity of states, the transiogram, which is defined as a transition probability function over the distance lag, is proposed as the accompanying spatial measure of Markov chain random fields.
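In symbols, the transiogram mentioned above is usually written as a transition probability over the lag h (standard notation, not taken verbatim from the excerpt):

\[
p_{ij}(h) \;=\; \Pr\bigl( Z(x + h) = j \;\bigm|\; Z(x) = i \bigr),
\]

where i and j are classes (states) of the categorical field Z, x is a location, and h is the separation (lag) distance or vector.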
https://en.wikipedia.org/wiki/Continuous%20linear%20operator
In functional analysis and related areas of mathematics, a continuous linear operator or continuous linear mapping is a continuous linear transformation between topological vector spaces. An operator between two normed spaces is a bounded linear operator if and only if it is a continuous linear operator. Continuous linear operators Characterizations of continuity Suppose that is a linear operator between two topological vector spaces (TVSs). The following are equivalent: is continuous. is continuous at some point is continuous at the origin in If is locally convex then this list may be extended to include: for every continuous seminorm on there exists a continuous seminorm on such that If and are both Hausdorff locally convex spaces then this list may be extended to include: is weakly continuous and its transpose maps equicontinuous subsets of to equicontinuous subsets of If is a sequential space (such as a pseudometrizable space) then this list may be extended to include: is sequentially continuous at some (or equivalently, at every) point of its domain. If is pseudometrizable or metrizable (such as a normed or Banach space) then we may add to this list: is a bounded linear operator (that is, it maps bounded subsets of to bounded subsets of ). If is seminormable space (such as a normed space) then this list may be extended to include: maps some neighborhood of 0 to a bounded subset of If and are both normed or seminormed spaces (with both seminorms denoted by ) then this list may be extended to include: for every there exists some such that If and are Hausdorff locally convex spaces with finite-dimensional then this list may be extended to include: the graph of is closed in Continuity and boundedness Throughout, is a linear map between topological vector spaces (TVSs). Bounded subset The notion of a "bounded set" for a topological vector space is that of being a von Neumann bounded set. If the space happens to
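For the normed case, the equivalence stated in the first sentence can be written explicitly (standard notation; this is not a reconstruction of the stripped bullet points themselves):

\[
F \colon X \to Y \ \text{continuous} \iff F \ \text{bounded} \iff \exists\, C \ge 0 \ \text{such that} \ \|F(x)\|_Y \le C\,\|x\|_X \ \text{for all } x \in X.
\]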
https://en.wikipedia.org/wiki/Paul%20Vojta
Paul Alan Vojta (born September 30, 1957) is an American mathematician, known for his work in number theory on Diophantine geometry and Diophantine approximation. Contributions In formulating Vojta's conjecture, he pointed out the possible existence of parallels between the Nevanlinna theory of complex analysis, and diophantine analysis in the circle of ideas around the Mordell conjecture and abc conjecture. This suggested the importance of the integer solutions (affine space) aspect of diophantine equations. Vojta wrote the .dvi-previewer xdvi. Education and career He was an undergraduate student at the University of Minnesota, where he became a Putnam Fellow in 1977, and a doctoral student at Harvard University (1983). He currently is a professor in the Department of Mathematics at the University of California, Berkeley. Awards and honors In 2012 he became a fellow of the American Mathematical Society. Selected publications Diophantine Approximations and Value Distribution Theory, Lecture Notes in Mathematics 1239, Springer Verlag, 1987,
https://en.wikipedia.org/wiki/Texas%20toast
Texas toast is a toasted bread that is typically made from sliced bread that has been sliced at double the usual thickness of packaged bread. Texas toast is prepared by spreading butter on both sides of the bread and broiling or grilling it until it is a light golden brown. Commonly, garlic is added to the butter, yielding a form of garlic bread. The toast may include cheese on one or both sides, similar to an open-faced grilled cheese sandwich. Popular in Texas and its bordering states, Texas toast is often served as a side with southern-style dishes such as chicken fried steak, fried catfish, or barbecue. Thick-sliced bread sold for making Texas toast can be used in the same manner as ordinary bread slices, such as in sandwiches, and it is especially useful for dishes involving liquids or where extra thickness could improve the product, such as French toast. While most varieties sold for Texas toast are white bread, whole wheat varieties also exist. Frozen Texas toast brands are also popular, with best sellers including the New York Brand of the T. Marzetti Company, Pepperidge Farm, and Coles. History One claimant to the invention of Texas toast is Kirby's Pig Stand. The once-thriving chain, whose heyday in the 1940s saw over 100 locations across the United States, also claims to be the originator of the onion ring. Texas toast may have been first created in 1946 at the Pig Stand in Denton, Texas, after a bakery order for thicker slices of bread resulted in slices too thick for the toaster and a cook, Wiley W. W. Cross, suggested buttering and grilling them as a solution. Another Pig Stand cook in Beaumont, Texas claimed he created the idea of grilling the bread. W.W.W. Cross is also credited for combining the Texas toast with chicken fried steak to create Kirby's Pig Stand's famous Chicken Fried Steak Sandwich. See also Texas Tommy (hot dog) Texas wiener
https://en.wikipedia.org/wiki/Kathleen%20Ni%20Houlihan
Kathleen Ni Houlihan (, literally, "Kathleen, daughter of Houlihan") is a mythical symbol and emblem of Irish nationalism found in literature and art, sometimes representing Ireland as a personified woman. The figure of Kathleen Ni Houlihan has also been invoked in nationalist Irish politics. Kathleen Ni Houlihan is sometimes spelled as Cathleen Ni Houlihan, and the figure is also sometimes referred to as the (, often spelled phonetically as "Shan Van Vocht"), the Poor Old Woman, and similar appellations. Kathleen Ni Houlihan is generally depicted as an old woman who needs the help of young Irish men willing to fight and die to free Ireland from colonial rule, usually resulting in the young men becoming martyrs for this cause, the colonial power being the United Kingdom. After the Anglo-Irish War, Kathleen Ni Houlihan became associated with the Irish Republican Army in Northern Ireland, especially during the Troubles. As a literary figure, Kathleen Ni Houlihan was featured by William Butler Yeats and Lady Augusta Gregory in their play Cathleen Ní Houlihan. Other authors that have used Kathleen Ni Houlihan in some way include Seán O'Casey (especially in The Shadow of the Gunman) and James Joyce who introduces characters named Kathleen and Mr Holohan in his story "A Mother" (in Dubliners) to illustrate the ideological shallowness of an Irish revival festival. General features and Yeats and Gregory's treatment Kathleen Ni Houlihan is generally portrayed as an old woman without a home. Frequently it is hinted that this is because she has been dispossessed of her home which comprised a farmhouse and "four green fields" (symbolising the four provinces of Ireland). In Yeats and Gregory's Cathleen Ní Houlihan (1902), she arrives at an Irish family's home as they are making preparations for the marriage of their oldest son. In Yeats and Gregory's play, Kathleen Ni Houlihan tells the family her sad tale, interspersed with songs about famous Irish heroes that had give
https://en.wikipedia.org/wiki/Coagulative%20necrosis
Coagulative necrosis is a type of accidental cell death typically caused by ischemia or infarction. In coagulative necrosis, the architectures of dead tissue are preserved for at least a couple of days. It is believed that the injury denatures structural proteins as well as lysosomal enzymes, thus blocking the proteolysis of the damaged cells. The lack of lysosomal enzymes allows it to maintain a "coagulated" morphology for some time. Like most types of necrosis, if enough viable cells are present around the affected area, regeneration will usually occur. Coagulative necrosis occurs in most bodily organs, excluding the brain. Different diseases are associated with coagulative necrosis, including acute tubular necrosis and acute myocardial infarction. Coagulative necrosis can also be induced by high local temperature; it is a desired effect of treatments such as high intensity focused ultrasound applied to cancerous cells. Causes Coagulative necrosis is most commonly caused by conditions that do not involve severe trauma, toxins or an acute or chronic immune response. The lack of oxygen (hypoxia) causes cell death in a localized area which is perfused by blood vessels failing to deliver primarily oxygen, but also other important nutrients. It is important to note that while ischemia in most tissues of the body will cause coagulative necrosis, in the central nervous system ischemia causes liquefactive necrosis, as there is very little structural framework in neural tissue. Pathology Macroscopic The macroscopic appearance of an area of coagulative necrosis is a pale segment of tissue contrasting against surrounding well vascularized tissue and is dry on cut surface. The tissue may later turn red due to inflammatory response. The surrounding surviving cells can aid in regeneration of the affected tissue unless they are stable or permanent. Microscopic Microscopically, coagulative necrosis causes cells to appear to have the same outline, but no nuclei. The nucleu
https://en.wikipedia.org/wiki/Pearson%20distribution
The Pearson distribution is a family of continuous probability distributions. It was first published by Karl Pearson in 1895 and subsequently extended by him in 1901 and 1916 in a series of articles on biostatistics. History The Pearson system was originally devised in an effort to model visibly skewed observations. It was well known at the time how to adjust a theoretical model to fit the first two cumulants or moments of observed data: Any probability distribution can be extended straightforwardly to form a location-scale family. Except in pathological cases, a location-scale family can be made to fit the observed mean (first cumulant) and variance (second cumulant) arbitrarily well. However, it was not known how to construct probability distributions in which the skewness (standardized third cumulant) and kurtosis (standardized fourth cumulant) could be adjusted equally freely. This need became apparent when trying to fit known theoretical models to observed data that exhibited skewness. Pearson's examples include survival data, which are usually asymmetric. In his original paper, Pearson (1895, p. 360) identified four types of distributions (numbered I through IV) in addition to the normal distribution (which was originally known as type V). The classification depended on whether the distributions were supported on a bounded interval, on a half-line, or on the whole real line; and whether they were potentially skewed or necessarily symmetric. A second paper (Pearson 1901) fixed two omissions: it redefined the type V distribution (originally just the normal distribution, but now the inverse-gamma distribution) and introduced the type VI distribution. Together the first two papers cover the five main types of the Pearson system (I, III, IV, V, and VI). In a third paper, Pearson (1916) introduced further special cases and subtypes (VII through XII). Rhind (1909, pp. 430–432) devised a simple way of visualizing the parameter space of the Pearson system, which wa
https://en.wikipedia.org/wiki/Liquefactive%20necrosis
Liquefactive necrosis (or colliquative necrosis) is a type of necrosis which results in a transformation of the tissue into a liquid viscous mass. Often it is associated with focal bacterial or fungal infections, and can also manifest as one of the symptoms of an internal chemical burn. In liquefactive necrosis, the affected cell is completely digested by hydrolytic enzymes, resulting in a soft, circumscribed lesion consisting of pus and the fluid remains of necrotic tissue. Dead leukocytes will remain as a creamy yellow pus. After the removal of cell debris by white blood cells, a fluid filled space is left. It is generally associated with abscess formation and is commonly found in the central nervous system. In the brain Due to excitotoxicity, hypoxic death of cells within the central nervous system can result in liquefactive necrosis. This is a process in which lysosomes turn tissues into pus as a result of lysosomal release of digestive enzymes. Loss of tissue architecture means that the tissue can be liquefied. This process is not associated with bacterial action or infection. Ultimately, in a living patient most necrotic cells and their contents disappear. The affected area is soft with liquefied centre containing necrotic debris. Later, a cyst wall is formed. Microscopically, the cystic space contains necrotic cell debris and macrophages filled with phagocytosed material. The cyst wall is formed by proliferating capillaries, inflammatory cells, and gliosis (proliferating glial cells) in the case of brain and proliferating fibroblasts in the case of abscess cavities. Brain cells have a large amount of digestive enzymes (hydrolases). These enzymes cause the neural tissue to become soft and liquefy. In the lung Liquefactive necrosis can also occur in the lung, especially in the context of lung abscesses. Infection Liquefactive necrosis can also take place due to certain infections. Neutrophils, fighting off a bacteria, will release hydrolytic enzymes whi
https://en.wikipedia.org/wiki/Greedy%20randomized%20adaptive%20search%20procedure
The greedy randomized adaptive search procedure (also known as GRASP) is a metaheuristic algorithm commonly applied to combinatorial optimization problems. GRASP typically consists of iterations made up from successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search. The greedy randomized solutions are generated by adding elements to the problem's solution set from a list of elements ranked by a greedy function according to the quality of the solution they will achieve. To obtain variability in the candidate set of greedy solutions, well-ranked candidate elements are often placed in a restricted candidate list (RCL), and chosen at random when building up the solution. This kind of greedy randomized construction method is also known as a semi-greedy heuristic, first described in Hart and Shogan (1987). GRASP was first introduced in Feo and Resende (1989). Survey papers on GRASP include Feo and Resende (1995), and Resende and Ribeiro (2003). There are variations of the classical algorithm, such as the Reactive GRASP. In this variation, the basic parameter that defines the restrictiveness of the RCL during the construction phase is self-adjusted according to the quality of the solutions previously found. There are also techniques for search speed-up, such as cost perturbations, bias functions, memorization and learning, and local search on partially constructed solutions. See also Constructive cooperative coevolution Cooperative coevolution Local search (optimization) Metaheuristic Simulated annealing Tabu search
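As a concrete, deliberately generic illustration of the construction/local-search loop described above, here is a minimal GRASP sketch. The toy 0/1 knapsack instance, the value-density greedy function, the RCL parameter alpha, and the swap-based local search are placeholders chosen for this example, not details from the GRASP literature cited above:

# A minimal GRASP sketch on a toy 0/1 knapsack instance.
import random

items = [(10, 5), (8, 4), (7, 3), (6, 6), (4, 2), (3, 1)]   # (value, weight)
capacity = 10

def construct(alpha):
    """Greedy randomized construction: repeatedly pick a random element of the RCL."""
    solution, weight = set(), 0
    candidates = set(range(len(items)))
    while True:
        feasible = [i for i in candidates if weight + items[i][1] <= capacity]
        if not feasible:
            return solution
        # Rank by greedy score (value density); keep the best alpha-fraction as the RCL.
        feasible.sort(key=lambda i: items[i][0] / items[i][1], reverse=True)
        rcl = feasible[:max(1, int(alpha * len(feasible)))]
        pick = random.choice(rcl)
        solution.add(pick)
        weight += items[pick][1]
        candidates.remove(pick)

def local_search(solution):
    """First-improvement local search over single in/out swaps."""
    def value(s): return sum(items[i][0] for i in s)
    def weight(s): return sum(items[i][1] for i in s)
    improved = True
    while improved:
        improved = False
        for out in list(solution):
            for inn in set(range(len(items))) - solution:
                cand = (solution - {out}) | {inn}
                if weight(cand) <= capacity and value(cand) > value(solution):
                    solution, improved = cand, True
                    break
            if improved:
                break
    return solution

best, best_value = set(), 0
for _ in range(50):                                  # GRASP iterations
    s = local_search(construct(alpha=0.5))
    v = sum(items[i][0] for i in s)
    if v > best_value:
        best, best_value = s, v
print(best_value, sorted(best))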
https://en.wikipedia.org/wiki/Microsoft%20NetMeeting
Microsoft NetMeeting is a discontinued VoIP and multi-point videoconferencing program offered by Microsoft. NetMeeting allows multiple clients to host and join a call that includes video and audio, text chat, application and desktop sharing, and file sharing. It was originally bundled with Internet Explorer 3 and then with Windows versions from Windows 95 to Windows Server 2003. History NetMeeting was released on May 29, 1996, with Internet Explorer 3 and later Internet Explorer 4. It incorporated technology acquired by Microsoft from UK software developer Data Connection Ltd and DataBeam Corporation (subsequently acquired by Lotus Software). Before video service became common on free IM clients, such as Yahoo! Messenger and MSN Messenger, NetMeeting was a popular way to perform video conferences and chat over the Internet (with the help of public ILS servers, or "direct-dialing" to an IP address). The defunct TechTV channel even used NetMeeting as a means of getting viewers onto their call-in shows via webcam, although viewers had to call on their telephones, because broadband Internet connections were still rare. Protocol architecture NetMeeting is an implementation of the ITU T.120 and H.323 protocol stacks for videoconferencing, with Microsoft extensions. A call is set up, undertaken and torn down between NetMeeting clients using the H.225 protocol. Audio is carried using H.245, encoded using the G.711 and G.723.1 codecs from 5.3 to 64 kbit/s, while the video is encoded using the H.263 and H.261 codecs. Application sharing is performed using the "Share 2.0" protocol, based on a pre-release version of T.128, with the protocol also being used to transport chat messages; whiteboard sharing uses ITU T.126, while file sharing is performed using FTP over T.127. Due to its use of a standardised protocol, NetMeeting can interoperate with other H.323-implementing software, such as Ekiga. Discontinuation In Windows XP, the Start menu shortcut to NetMeeting was removed
https://en.wikipedia.org/wiki/Retrograde%20inversion
Retrograde inversion is a musical term that literally means "backwards and upside down": "The inverse of the series is sounded in reverse order." Retrograde reverses the order of the motif's pitches: what was the first pitch becomes the last, and vice versa. This is a technique used in music, specifically in twelve-tone technique, where the inversion and retrograde techniques are performed on the same tone row successively, "[t]he inversion of the prime series in reverse order from last pitch to first." Conventionally, inversion is carried out first, and the inverted form is then taken backward to form the retrograde inversion, so that the untransposed retrograde inversion ends with the pitch that began the prime form of the series. In his late twelve-tone works, however, Igor Stravinsky preferred the opposite order, so that his row charts use inverse retrograde (IR) forms for his source sets, instead of retrograde inversions (RI), although he sometimes labeled them RI in his sketches. For example, the forms of the row from Requiem Canticles are as follows: P0: R0: I0: RI0: IR0: Note that IR is a transposition of RI, by the interval between the last pitches of P and I. Other compositions that include retrograde inversions in their rows include works by Tadeusz Baird and Karel Goeyvaerts. One work in particular by the latter composer, Nummer 2, employs the retrograde of the recurring twelve-tone row B–F–F–E–G–A–E–D–A–B–D–C in the piano part. It is performed in both styles, particularly in the outer sections of the piece. The final movement of Hindemith's Ludus Tonalis, the Postludium, is an exact retrograde inversion of the work's opening Praeludium. Sources Musical symmetry Serialism
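The four classical row forms are simple pitch-class operations. The following sketch computes them for an arbitrary example row (it is not the Requiem Canticles row, whose accidentals were lost in the extract above):

# Prime (P), retrograde (R), inversion (I) and retrograde inversion (RI)
# of a twelve-tone row, as pitch-class arithmetic mod 12.  The example row is
# arbitrary; note that RI (inversion, then reversed) and IR (reversed, then
# inverted) differ only by transposition, as discussed above.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def inversion(row):
    """Invert about the row's first pitch class."""
    axis = row[0]
    return [(2 * axis - pc) % 12 for pc in row]

def retrograde(row):
    return list(reversed(row))

P  = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]   # arbitrary example prime form
R  = retrograde(P)
I  = inversion(P)
RI = retrograde(I)     # inversion first, then reversed (the conventional order)
IR = inversion(R)      # Stravinsky's preferred order: retrograde, then inverted

for name, row in [("P", P), ("R", R), ("I", I), ("RI", RI), ("IR", IR)]:
    print(name.ljust(2), " ".join(NAMES[pc] for pc in row))

Running this shows that the untransposed RI ends on the pitch class that began P, and that every entry of IR differs from the corresponding entry of RI by the same interval, confirming the transposition relationship noted above.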
https://en.wikipedia.org/wiki/Bornological%20space
In mathematics, particularly in functional analysis, a bornological space is a type of space which, in some sense, possesses the minimum amount of structure needed to address questions of boundedness of sets and linear maps, in the same way that a topological space possesses the minimum amount of structure needed to address questions of continuity. Bornological spaces are distinguished by the property that a linear map from a bornological space into any locally convex spaces is continuous if and only if it is a bounded linear operator. Bornological spaces were first studied by George Mackey. The name was coined by Bourbaki after , the French word for "bounded". Bornologies and bounded maps A on a set is a collection of subsets of that satisfy all the following conditions: covers that is, ; is stable under inclusions; that is, if and then ; is stable under finite unions; that is, if then ; Elements of the collection are called or simply if is understood. The pair is called a or a . A or of a bornology is a subset of such that each element of is a subset of some element of Given a collection of subsets of the smallest bornology containing is called the If and are bornological sets then their on is the bornology having as a base the collection of all sets of the form where and A subset of is bounded in the product bornology if and only if its image under the canonical projections onto and are both bounded. Bounded maps If and are bornological sets then a function is said to be a or a (with respect to these bornologies) if it maps -bounded subsets of to -bounded subsets of that is, if If in addition is a bijection and is also bounded then is called a . Vector bornologies Let be a vector space over a field where has a bornology A bornology on is called a if it is stable under vector addition, scalar multiplication, and the formation of balanced hulls (i.e. if the sum of two bounded sets is bounded, e
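The three axioms listed above, whose symbols were lost, are conventionally written as follows for a bornology \mathcal{B} on a set X (a standard reconstruction):

\[
\text{(1)}\ \ \bigcup_{B \in \mathcal{B}} B = X; \qquad
\text{(2)}\ \ A \subseteq B \in \mathcal{B} \implies A \in \mathcal{B}; \qquad
\text{(3)}\ \ B_1, \ldots, B_n \in \mathcal{B} \implies B_1 \cup \cdots \cup B_n \in \mathcal{B};
\]

and a map f : X \to Y between bornological sets is bounded precisely when f(B) \in \mathcal{B}_Y for every B \in \mathcal{B}_X.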
https://en.wikipedia.org/wiki/Fragile%20binary%20interface%20problem
The fragile binary interface problem or FBI is a shortcoming of certain object-oriented programming language compilers, in which internal changes to an underlying class library can cause descendant libraries or programs to cease working. It is an example of software brittleness. This problem is more often called the fragile base class problem or FBC; however, that term has a wider sense. Cause The problem occurs due to a "shortcut" used with compilers for many common object-oriented (OO) languages, a design feature that was kept when OO languages were evolving from earlier non-OO structured programming languages such as C and Pascal. In these languages there were no objects in the modern sense, but there was a similar construct known as a record (or "struct" in C) that held a variety of related information in one piece of memory. The parts within a particular record were accessed by keeping track of the starting location of the record, and knowing the offset from that starting point to the part in question. For instance a "person" record might have a first name, last name and middle initial, to access the initial the programmer writes thisPerson.middleInitial which the compiler turns into something like a = location(thisPerson) + offset(middleInitial). Modern CPUs typically include instructions for this common sort of access. When object-oriented language compilers were first being developed, much of the existing compiler technology was used, and objects were built on top of the record concept. In these languages the objects were referred to by their starting point, and their public data, known as "fields", were accessed through the known offset. In effect the only change was to add another field to the record, which is set to point to an immutable virtual method table for each class, such that the record describes both its data and methods (functions). When compiled, the offsets are used to access both the data and the code (via the virtual method table). Symp
https://en.wikipedia.org/wiki/FUNET
FUNET is the Finnish University and Research Network, a backbone network providing Internet connections for Finnish universities and polytechnics as well as other research facilities. It is governed by the state-owned CSC – IT Center for Science Ltd. The FUNET project started in December 1983 and soon gained international connectivity via EARN with DECnet as the dominant protocol. FUNET was connected to the greater Internet through NORDUnet in 1988. The FUNET FTP service went online in 1990, hosting the first versions of Linux in 1991. The main backbone connections have gradually been upgraded to optical fiber since 2008. First 100 Gbit/s connections were put in production in 2015. FUNET is connected to other research networks through NORDUnet, and to other Finnish ISPs via three FICIX points. See also NORDUnet GEANT
https://en.wikipedia.org/wiki/Distributed%20version%20control
In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control, this enables automatic management branching and merging, speeds up most operations (except pushing and pulling), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system. In 2010, software development author Joel Spolsky described distributed version control systems as "possibly the biggest advance in software development technology in the [past] ten years". Distributed vs. centralized Distributed version control systems (DVCS) use a peer-to-peer approach to version control, as opposed to the client–server approach of centralized systems. Distributed revision control synchronizes repositories by transferring patches from peer to peer. There is no single central version of the codebase; instead, each user has a working copy and the full change history. Advantages of DVCS (compared with centralized systems) include: Allows users to work productively when not connected to a network. Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers. Allows private work, so users can use their changes even for early drafts they do not want to publish. Working copies effectively function as remote backups, which avoids relying on one physical machine as a single point of failure. Allows various development models to be used, such as using development branches or a Commander/Lieutenant model. Permits centralized control of the "release version" of the project On FOSS software projects it is much easi
https://en.wikipedia.org/wiki/Acoustic%20network
An acoustic network is a method of positioning equipment using sound waves. It is primarily used in water, and can be as small or as large as required by the users specifications. Size of network The simplest acoustic network consists of one measurement resulting in a single range between sound source and sound receiver. Bigger networks are only limited by the amount of equipment available, and computing power needed to resolve the resulting data. The latest acoustic networks used in the marine seismic industry can resolve a network of some 16,000 individual ranges in a matter of seconds. The principle The principle behind all acoustic networks is the same. Distance = speed x travel time. If the travel time and speed of the sound signal are known, we can calculate the distance between source and receiver. In most networks, the speed of the acoustic signal is assumed at a specific value. This value is either derived from measuring a signal between two known points, or by using specific equipment to calculate it from environmental conditions. The diagram below shows the basic operation of measuring a single range. At a specified time the processor issues a signal to the source, which then sends out the sound wave. Once the sound wave is received another signal is received at the processor resulting in a time difference between transmission and reception. This gives the travel time. Using the travel time and assumed speed of the signal, the processor can calculate the distance between source and receiver. If the operator is using acoustic ranges to position items in unknown locations they will need to use more than the single range example shown above. As there is only one measurement, the receiver could be anywhere on a circle with a radius equal to the calculated range and centered on the transmitter. Acoustic Processing If a second transmitter is added to the system the number of possible positions for the receiver is reduced to two. It is only when
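To make the ranging arithmetic concrete, the sketch below (not part of the article; the 1500 m/s sound speed and the helper names are illustrative assumptions) computes a single range from a one-way travel time, exactly as in the distance = speed x travel time relation above. Resolving an actual position requires combining several such ranges, as described in the acoustic processing discussion.

```python
# Minimal sketch of one-way acoustic ranging with an assumed sound speed.
# SOUND_SPEED_MPS and the function name are illustrative, not from the article.
SOUND_SPEED_MPS = 1500.0   # assumed speed of sound in seawater, metres per second

def range_from_travel_time(travel_time_s, speed_mps=SOUND_SPEED_MPS):
    """Distance = speed x travel time."""
    return speed_mps * travel_time_s

# A single measurement only constrains the receiver to a circle around the
# source: a 23.5 ms one-way travel time puts it somewhere on a ~35 m circle.
print(range_from_travel_time(0.0235))   # -> 35.25
```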
https://en.wikipedia.org/wiki/Mackey%20space
In mathematics, particularly in functional analysis, a Mackey space is a locally convex topological vector space X such that the topology of X coincides with the Mackey topology τ(X, X′), the finest topology which still preserves the continuous dual. They are named after George Mackey. Examples Examples of locally convex spaces that are Mackey spaces include: All barrelled spaces and more generally all infrabarrelled spaces Hence in particular all bornological spaces and reflexive spaces All metrizable spaces. In particular, all Fréchet spaces, including all Banach spaces and specifically Hilbert spaces, are Mackey spaces. The product, locally convex direct sum, and the inductive limit of a family of Mackey spaces is a Mackey space. Properties A locally convex space X with continuous dual X′ is a Mackey space if and only if each convex and σ(X′, X)-relatively compact subset of X′ is equicontinuous. The completion of a Mackey space is again a Mackey space. A separated quotient of a Mackey space is again a Mackey space. A Mackey space need not be separable, complete, quasi-barrelled, nor σ-quasi-barrelled.
https://en.wikipedia.org/wiki/Inequity%20aversion
Inequity aversion (IA) is the preference for fairness and resistance to incidental inequalities. The social sciences that study inequity aversion include sociology, economics, psychology, anthropology, and ethology. Research on inequity aversion aims to explain behavior that is driven not purely by self-interest but also by fairness considerations. In some literature, the term inequality aversion is used in place of inequity aversion. The discourse in the social sciences argues that "inequality" pertains to differences in the distribution of resources, while "inequity" pertains to fundamental and institutional unfairness. Therefore, the choice between using inequity or inequality aversion may depend on the specific context. Human studies Inequity aversion research on humans mostly occurs in the discipline of economics though it is also studied in sociology. Research on inequity aversion began in 1978 when studies suggested that humans are sensitive to inequities in their favor as well as to those against them, and that some people attempt overcompensation when they feel "guilty" or unhappy to have received an undeserved reward. A more recent definition of inequity aversion (resistance to inequitable outcomes) was developed in 1999 by Fehr and Schmidt. They postulated that people make decisions so as to minimize inequity in outcomes. Specifically, consider a setting with individuals {1, 2, ..., n} who receive pecuniary outcomes x_i. Then the utility to person i would be given by U_i(x) = x_i − (α_i/(n − 1)) Σ_{j≠i} max(x_j − x_i, 0) − (β_i/(n − 1)) Σ_{j≠i} max(x_i − x_j, 0), where α parametrizes the distaste of person i for disadvantageous inequality in the first nonstandard term, and β parametrizes the distaste of person i for advantageous inequality in the final term. The results suggested that a small fraction of selfish players can induce a fair-minded majority to act selfishly in some scenarios, while a minority of fair-minded players can also induce selfish players to cooperate in games with punishment. In addition, the inequity aversion
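A small sketch of the Fehr–Schmidt utility just described may help make the two inequality terms concrete; the payoff vector and the α, β values below are illustrative only, not estimates from the literature.

```python
# Sketch of the Fehr-Schmidt (1999) inequity-aversion utility described above.
# The alpha/beta values used in the example are illustrative, not estimates.
def fehr_schmidt_utility(i, x, alpha, beta):
    """Utility of person i given the vector x of pecuniary outcomes."""
    n = len(x)
    disadvantageous = sum(max(x[j] - x[i], 0) for j in range(n) if j != i)
    advantageous = sum(max(x[i] - x[j], 0) for j in range(n) if j != i)
    return x[i] - alpha / (n - 1) * disadvantageous - beta / (n - 1) * advantageous

# Person 0 earns 8 and person 1 earns 2: advantageous inequality (the beta term)
# lowers person 0's utility below the purely self-interested value of 8.
print(fehr_schmidt_utility(0, [8, 2], alpha=0.5, beta=0.25))   # -> 6.5
```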
https://en.wikipedia.org/wiki/Sierpi%C5%84ski%27s%20constant
Sierpiński's constant is a mathematical constant usually denoted as K. One way of defining it is as the following limit: K = lim_{n→∞} [ Σ_{k=1}^{n} r₂(k)/k − π ln n ], where r₂(k) is the number of representations of k as a sum of the form a² + b² for integer a and b. It can be given in closed form as K = π(2γ − ln 2 − 2 ln G), where G is Gauss's constant and γ is the Euler–Mascheroni constant. Another way to understand Sierpiński's constant is the following: let r₂(n) denote the number of representations of n as a sum of two squares; then the summatory function of r₂(k)/k has the asymptotic expansion Σ_{k=1}^{n} r₂(k)/k = K + π ln n + o(1), where K is the Sierpiński constant. Numerically, K ≈ 2.58498. See also Wacław Sierpiński External links http://www.plouffe.fr/simon/constants/sierpinski.txt - Sierpiński's constant up to 2000th decimal digit. https://archive.lib.msu.edu/crcmath/math/math/s/s276.htm Mathematical constants
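The limit above can be checked numerically with a brute-force count of r₂(k); the sketch below is illustrative only (slow and unoptimized), and the cutoff n is an arbitrary choice, not from the source.

```python
import math

# Brute-force r2(k): representations k = a^2 + b^2 over all integers a, b
# (signs and order both count). Illustrative only; not an efficient method.
def r2(k):
    count, a = 0, 0
    while a * a <= k:
        b2 = k - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            count += (2 if a else 1) * (2 if b else 1)
        a += 1
    return count

n = 50_000
partial_sum = sum(r2(k) / k for k in range(1, n + 1))
# Slowly approaches Sierpinski's constant K = 2.58498...
print(partial_sum - math.pi * math.log(n))
```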
https://en.wikipedia.org/wiki/Programmable%20interrupt%20controller
In computing, a programmable interrupt controller (PIC) is an integrated circuit that helps a microprocessor (or CPU) handle interrupt requests (IRQ) coming from multiple different sources (like external I/O devices) which may occur simultaneously. It helps prioritize IRQs so that the CPU switches execution to the most appropriate interrupt handler (ISR) after the PIC assesses the IRQ's relative priorities. Common modes of interrupt priority include hard priorities, rotating priorities, and cascading priorities. PICs often allow mapping input to outputs in a configurable way. On the PC architecture PIC are typically embedded into a southbridge chip whose internal architecture is defined by the chipset vendor's standards. Common features PICs typically have a common set of registers: interrupt request register (IRR), in-service register (ISR), and interrupt mask register (IMR). The IRR specifies which interrupts are pending acknowledgement, and is typically a symbolic register which can not be directly accessed. The ISR register specifies which interrupts have been acknowledged, but are still waiting for an end of interrupt (EOI). The IMR specifies which interrupts are to be ignored and not acknowledged. A simple register schema such as this allows up to two distinct interrupt requests to be outstanding at one time, one waiting for acknowledgement, and one waiting for EOI. There are a number of common priority schemas in PICs including hard priorities, specific priorities, and rotating priorities. Interrupts may be either edge triggered or level triggered. There are a number of common ways of acknowledging an interrupt has completed when an EOI is issued. These include specifying which interrupt completed, using an implied interrupt which has completed (usually the highest priority pending in the ISR), and treating interrupt acknowledgement as the EOI. Well-known types One of the best known PICs, the 8259A, was included in the x86 PC. In modern times, this is n
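As a rough illustration of how the IRR, ISR, and IMR registers interact under a hard-priority scheme, here is a toy model; it is a sketch of the concepts above, not the behavior of any particular chip, and all names are invented.

```python
# Toy model of a PIC's pending/in-service/mask registers with hard priorities
# (lower line number = higher priority). Not modeled on any real device.
class SimplePIC:
    def __init__(self, lines=8):
        self.irr = 0        # interrupt request register: pending requests
        self.isr = 0        # in-service register: acknowledged, awaiting EOI
        self.imr = 0        # interrupt mask register: lines to ignore
        self.lines = lines

    def raise_irq(self, line):
        self.irr |= 1 << line

    def acknowledge(self):
        """Move the highest-priority unmasked pending line from IRR to ISR."""
        pending = self.irr & ~self.imr
        for line in range(self.lines):
            if pending & (1 << line):
                self.irr &= ~(1 << line)
                self.isr |= 1 << line
                return line
        return None

    def end_of_interrupt(self, line):
        self.isr &= ~(1 << line)

pic = SimplePIC()
pic.raise_irq(3)
pic.raise_irq(1)
print(pic.acknowledge())   # -> 1, the higher-priority request under hard priority
```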
https://en.wikipedia.org/wiki/Software%20brittleness
In computer programming and software engineering, software brittleness is the increased difficulty in fixing older software that may appear reliable, but actually fails badly when presented with unusual data or altered in a seemingly minor way. The phrase is derived from analogies to brittleness in metalworking. Causes When software is new, it is very malleable; it can be formed to be whatever is wanted by the implementers. But as the software in a given project grows larger and larger, and develops a larger base of users with long experience with the software, it becomes less and less malleable. Like a metal that has been work-hardened, the software becomes a legacy system, brittle and unable to be easily maintained without fracturing the entire system. Brittleness in software can be caused by algorithms that do not work well for the full range of input data. A good example is an algorithm that allows a divide by zero to occur, or a curve-fitting equation that is used to extrapolate beyond the data that it was fitted to. Another cause of brittleness is the use of data structures that restrict values. This was commonly seen in the late 1990s as people realized that their software only had room for a 2 digit year entry; this led to the sudden updating of tremendous quantities of brittle software before the year 2000. Another more commonly encountered form of brittleness is in graphical user interfaces that make invalid assumptions. For example, a user may be running on a low resolution display, and the software will open a window too large to fit the display. Another common problem is expressed when a user uses a color scheme other than the default, causing text to be rendered in the same color as the background, or a user uses a font other than the default, which won't fit in the allowed space and cuts off instructions and labels. Very often, an old code base is simply abandoned and a brand-new system (which is intended to be free of many of the burdens of the l
https://en.wikipedia.org/wiki/Vicinal%20%28chemistry%29
In chemistry the descriptor vicinal (from Latin vicinus = neighbor), abbreviated vic, is a descriptor that identifies two functional groups as bonded to two adjacent carbon atoms (i.e., in a 1,2-relationship). Relation of atoms in a molecule For example, the molecule 2,3-dibromobutane carries two vicinal bromine atoms and 1,3-dibromobutane does not. Mostly, the use of the term vicinal is restricted to two identical functional groups. Likewise in a gem-dibromide the prefix gem, an abbreviation of geminal, signals that both bromine atoms are bonded to the same atom (i.e., in a 1,1-relationship). For example, 1,1-dibromobutane is geminal. While comparatively less common, the term hominal has been suggested as a descriptor for groups in a 1,3-relationship. Like other descriptors, such as syn, anti, exo or endo, the description vicinal helps explain how different parts of a molecule are related to each other either structurally or spatially. The vicinal adjective is sometimes restricted to those molecules with two identical functional groups. The use of the term can also be extended to substituents on aromatic rings. 1H-NMR spectroscopy In 1H-NMR spectroscopy, the coupling of two hydrogen atoms on adjacent carbon atoms is called vicinal coupling. The coupling constant 3J represents coupling of vicinal hydrogen atoms because they couple through three bonds. Depending on the other substituents, the vicinal coupling constant is typically a value between 0 and +20 Hz. The dependence of the vicinal coupling constant on the dihedral angle is described by the Karplus relation.
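For reference, the Karplus relation mentioned above is usually written in the following general form; the coefficients A, B, and C are empirical, substituent-dependent fitting parameters rather than fixed values.

```latex
% Karplus relation: vicinal coupling constant as a function of dihedral angle phi.
% A, B and C are empirical constants that depend on the substituents.
{}^{3}J(\phi) = A\cos^{2}\phi + B\cos\phi + C
```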
https://en.wikipedia.org/wiki/Kilocalorie%20per%20mole
The kilocalorie per mole is a unit to measure an amount of energy per number of molecules, atoms, or other similar particles. It is defined as one kilocalorie of energy (1000 thermochemical gram calories) per one mole of substance. The unit symbol is written kcal/mol or kcal⋅mol−1. As typically measured, one kcal/mol represents a temperature increase of one degree Celsius in one liter of water (with a mass of 1 kg) resulting from the reaction of one mole of reagents. In SI units, one kilocalorie per mole is equal to 4.184 kilojoules per mole (kJ/mol), which comes to approximately 6.9 × 10⁻²¹ joules per molecule, or about 0.043 eV per molecule. At room temperature (25 °C, 77 °F, or 298.15 K) it is approximately equal to 1.688 units in the kT term of Boltzmann's equation. Even though it is not an SI unit, the kilocalorie per mole is still widely used in chemistry and biology for thermodynamical quantities such as thermodynamic free energy, heat of vaporization, heat of fusion and ionization energy. This is due to a variety of factors, including the ease with which it can be calculated based on the units of measure typically employed in quantifying a chemical reaction, especially in aqueous solution. In addition, for many important biological processes, thermodynamic changes are on a convenient order of magnitude when expressed in kcal/mol. For example, for the reaction of glucose with ATP to form glucose-6-phosphate and ADP, the free energy of reaction is −4.0 kcal/mol using the pH = 7 standard state.
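The conversions quoted above follow directly from standard physical constants; the short check below is illustrative only, and the variable names are arbitrary.

```python
# Check of the kcal/mol conversions quoted above using standard constants.
AVOGADRO = 6.02214076e23      # particles per mole
EV_IN_J = 1.602176634e-19     # joules per electronvolt
R = 8.314462618               # gas constant, J/(mol*K)

kcal_per_mol_in_joules = 4184.0                           # 1 kcal/mol in J/mol
per_molecule_J = kcal_per_mol_in_joules / AVOGADRO        # ~6.9e-21 J per molecule
per_molecule_eV = per_molecule_J / EV_IN_J                # ~0.043 eV per molecule
kT_units_at_25C = kcal_per_mol_in_joules / (R * 298.15)   # ~1.69 kT

print(per_molecule_J, per_molecule_eV, kT_units_at_25C)
```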
https://en.wikipedia.org/wiki/Douglas%20J.%20Futuyma
Douglas Joel Futuyma (born 24 April 1942) is an American evolutionary biologist. He is a Distinguished Professor in the Department of Ecology and Evolution at Stony Brook University in Stony Brook, New York and a Research Associate on staff at the American Museum of Natural History in New York City. His research focuses on speciation and population biology. Futuyma is the author of a widely used undergraduate textbook on evolution and is also known for his work in public outreach, particularly in advocating against creationism. Education Futuyma graduated with a B.S. from Cornell University. He received his M.S. in 1966 and his Ph.D. in zoology in 1969, both from the University of Michigan, Ann Arbor. Academic career Futuyma began his career in the Department of Ecology and Evolution at Stony Brook University in 1969 and was appointed Distinguished Professor in 2001. He served as the chair of the Department of Ecology and Evolutionary Biology at University of Michigan, Ann Arbor from 2002-2003 and as the Lawrence B. Slobodkin Collegiate Professor in that department from 2003-2004 before returning to Stony Brook in 2004. Futuyma served as the president of the Society for the Study of Evolution in 1987, of the American Society of Naturalists in 1994, and of the American Institute of Biological Sciences in 2008. He has served as the editor of the scientific journals Evolution and Annual Review of Ecology, Evolution, and Systematics. Research Futuyma's research examines speciation and population biology, particularly the evolutionary interactions between herbivorous insects and their plant hosts and the implications for evolution of host specificity. Teaching and outreach Futuyma is well known for his success in teaching and public outreach. He is the author of several textbooks, most notably the very widely used authoritative text Evolutionary Biology (in its third edition, published 1998) and a simplified version targeted explicitly to undergraduates, Evolution (
https://en.wikipedia.org/wiki/Calm%20technology
Calm technology or calm design is a type of information technology where the interaction between the technology and its user is designed to occur in the user's periphery rather than constantly at the center of attention. Information from the technology smoothly shifts to the user's attention when needed but otherwise stays calmly in the user's periphery. Mark Weiser and John Seely Brown describe calm technology as "that which informs but doesn't demand our focus or attention." The use of calm technology is paired with ubiquitous computing as a way to minimize the perceptible invasiveness of computers in everyday life. Principles For a technology to be considered calm technology, there are three core principles it should adhere to: The user's attention to the technology must reside mainly in the periphery. This means that either the technology can easily shift between the center of attention and the periphery or that much of the information conveyed by the technology is present in the periphery rather than the center. The technology increases a user's use of his or her periphery. This creates a pleasant user experience by not overburdening the user with information. The technology relays a sense of familiarity to the user and allows awareness of the user's surroundings in the past, present, and future. History The phrase "calm technology" was first published in the article "Designing Calm Technology", written by Mark Weiser and John Seely Brown in 1995. The concept had developed amongst researchers at the Xerox Palo Alto Research Center alongside the concept of ubiquitous computing. Weiser introduced the concept of calm technology by using the example of LiveWire or "Dangling String". It is a string connected to a small electric motor mounted in the ceiling. The motor is connected to a nearby Ethernet cable. When a bit of information flows through that Ethernet cable, it causes a twitch of the motor. The more information flows, the faster the motor runs
https://en.wikipedia.org/wiki/NNDB
The Notable Names Database (NNDB) is an online database of biographical details of over 40,000 people. Soylent Communications, a sole proprietorship that also hosted the now-defunct Rotten.com, describes NNDB as an "intelligence aggregator" of noteworthy persons, highlighting their interpersonal connections. The Rotten.com domain was registered in 1996 by former Apple and Netscape software engineer Thomas E. Dell, who was also known by his internet alias, "Soylent". Entries Each entry has an executive summary with an assessment of the person's notability. It also lists dates and causes of death, as well as risk factors that may affect a person's life span, such as obesity, cocaine addiction, or dwarfism. Businesspeople and government officials are listed with chronologies of their posts, positions, and board memberships. NNDB has articles on films with user-submitted reviews, discographies of selected music groups, and extensive bibliographies. NNDB Mapper The NNDB Mapper, a visual tool for exploring connections between people, was made available in May 2008. It required Adobe Flash 7. See also NameBase
https://en.wikipedia.org/wiki/Topology%20dissemination%20based%20on%20reverse-path%20forwarding
Topology broadcast based on reverse-path forwarding (TBRPF) is a link-state routing protocol for wireless mesh networks. The obvious design for a wireless link-state protocol (such as the optimized link-state routing protocol) transmits large amounts of routing data, and this limits the utility of a link-state protocol when the network is made of moving nodes. The number and size of the routing transmissions make the network unusable for any but the smallest networks. The conventional solution is to use a distance-vector routing protocol such as AODV, which usually transmits no data about routing. However, distance-vector routing requires more time to establish a connection, and the routes are less optimized than a link-state router. TBRPF transmits only the differences between the previous network state and the current network state. Therefore, routing messages are smaller, and can therefore be sent more frequently. This means that nodes' routing tables are more up-to-date. TBRPF is controlled under a US patent filed in December 2000 and assigned to SRI International (Patent ID 6845091, issued January 18, 2005). Further reading B. Bellur, and R.G. Ogier. 1999. "A Reliable, Efficient Topology Broadcast Protocol for Dynamic Networks," Proc. IEEE INFOCOMM ’99, pp. 178–186. R.G. Ogier, M.G. Lewis, F.L. Templin, and B. Bellur. 2002. "Topology Broadcast based on Reverse Path Forwarding (TBRPF)," RFC 3684. External links : Topology Dissemination Based on Reverse-Path Forwarding (TBRPF) Packethop Inc. website Wireless networking Ad hoc routing protocols SRI International
https://en.wikipedia.org/wiki/Dynamic%20Source%20Routing
Dynamic Source Routing (DSR) is a routing protocol for wireless mesh networks. It is similar to AODV in that it forms a route on-demand when a transmitting node requests one. However, it uses source routing instead of relying on the routing table at each intermediate device. Background Determining the source route requires accumulating the address of each device between the source and destination during route discovery. The accumulated path information is cached by nodes processing the route discovery packets. The learned paths are used to route packets. To accomplish source routing, the routed packets contain the address of each device the packet will traverse. This may result in high overhead for long paths or large addresses, like IPv6. To avoid using source routing, DSR optionally defines a flow id option that allows packets to be forwarded on a hop-by-hop basis. This protocol is truly based on source routing whereby all the routing information is maintained (continually updated) at mobile nodes. It has only two major phases, which are Route Discovery and Route Maintenance. Route Reply would only be generated if the message has reached the intended destination node (route record which is initially contained in Route Request would be inserted into the Route Reply). To return the Route Reply, the destination node must have a route to the source node. If the route is in the Destination Node's route cache, the route would be used. Otherwise, the node will reverse the route based on the route record in the Route Request message header (this requires that all links are symmetric). In the event of fatal transmission, the Route Maintenance Phase is initiated whereby the Route Error packets are generated at a node. The erroneous hop will be removed from the node's route cache; all routes containing the hop are truncated at that point. Again, the Route Discovery Phase is initiated to determine the most viable route. For information on other similar protocols, see t
https://en.wikipedia.org/wiki/WMYD
WMYD (channel 20) is an independent television station in Detroit, Michigan, United States. It is owned by the E. W. Scripps Company alongside ABC affiliate WXYZ-TV (channel 7). Both stations share studios at Broadcast House on 10 Mile Road in Southfield, while WMYD's transmitter is located on Eight Mile Road in Oak Park. Founded in 1968 as WXON on channel 62 and relocated to channel 20 in 1972, the station was an independent focusing primarily on syndicated programs and classic reruns. It made an ill-fated foray into subscription television (STV) from 1979 to 1983, broadcasting a pay service under the ON TV brand that was dogged by a poor relationship with the station and signal piracy issues exacerbated by Detroit's proximity to Canada. After it folded, WXON continued as an independent station and emerged as the second-rated independent in its market, affiliating with The WB in 1995. Granite Broadcasting purchased WXON in 1997 and renamed it WDWB. However, its high debt load motivated several attempts to sell the station, one of which fell apart after The WB merged with UPN to form The CW but did not include WDWB as an affiliate. The station then became WMYD, aligned with MyNetworkTV and airing its programming for 15 years. In 2014, Scripps purchased WMYD and added local newscasts from the WXYZ-TV newsroom. As Detroit's ATSC 3.0 (NextGen TV) station, WMYD is used in automotive-related tests of the transmission technology. History The channel 62 years At the end of January 1965, Aben Johnson, majority owner of a chemical manufacturing company and with several real estate holdings, filed with the Federal Communications Commission (FCC) to build a television station on channel 44 in Pontiac, in Oakland County. After an overhaul of the FCC's UHF table of allocations, Johnson amended his application to specify channel 62 in Detroit. A construction permit for the station was issued on October 7, 1965, and assigned the call sign WXON that December. Johnson also held
https://en.wikipedia.org/wiki/Nationalnyckeln%20till%20Sveriges%20flora%20och%20fauna
Nationalnyckeln till Sveriges flora och fauna (Swedish for "National Key to Sweden's Flora and Fauna") is a set of books, the first volume of which, Fjärilar: Dagfjärilar (Butterflies, 140 species), appeared on 25 April 2005. The publishing plan comprises 100,000 illustrations spread over more than 100 volumes, to appear over a period of 20 years, listing and providing popular scientific descriptions of all species of plants (flora) and animals (fauna) in Sweden. So large a work has never been published in the history of Swedish literature. Nationalnyckeln is a popular scientific account produced on contract from the Riksdag by the Swedish Species Information Centre (ArtDatabanken) at the Swedish University of Agricultural Sciences (SLU) in Uppsala.
https://en.wikipedia.org/wiki/Ornament%20%28art%29
In architecture and decorative art, ornament is decoration used to embellish parts of a building or object. Large figurative elements such as monumental sculpture and their equivalents in decorative art are excluded from the term; most ornaments do not include human figures, and if present they are small compared to the overall scale. Architectural ornament can be carved from stone, wood or precious metals, formed with plaster or clay, or painted or impressed onto a surface as applied ornament; in other applied arts the main material of the object, or a different one such as paint or vitreous enamel may be used. A wide variety of decorative styles and motifs have been developed for architecture and the applied arts, including pottery, furniture, metalwork. In textiles, wallpaper and other objects where the decoration may be the main justification for its existence, the terms pattern or design are more likely to be used. The vast range of motifs used in ornament draw from geometrical shapes and patterns, plants, and human and animal figures. Across Eurasia and the Mediterranean world there has been a rich and linked tradition of plant-based ornament for over three thousand years; traditional ornament from other parts of the world typically relies more on geometrical and animal motifs. The inspiration for the patterns usually lies in the nature that surrounds the people in the region. Many nomadic tribes in Central Asia had many animalistic motifs before the penetration of Islam in the region. In a 1941 essay, the architectural historian Sir John Summerson called it "surface modulation". The earliest decoration and ornament often survives from prehistoric cultures in simple markings on pottery, where decoration in other materials (including tattoos) has been lost. Where the potter's wheel was used, the technology made some kinds of decoration very easy; weaving is another technology which also lends itself very easily to decoration or pattern, and to some extent
https://en.wikipedia.org/wiki/Handheld%20PC
A handheld personal computer (PC) is a pocket-sized computer typically built around a clamshell form factor and is significantly smaller than any standard laptop computer, but based on the same principles. It is sometimes referred to as a palmtop computer, not to be confused with Palmtop PC which was a name used mainly by Hewlett-Packard. Most handheld PCs use an operating system specifically designed for mobile use. Ultra-compact laptops capable of running common x86-compatible desktop operating systems are typically classified as subnotebooks. The name Handheld PC was used by Microsoft from 1996 until the early 2000s to describe a category of small computers having keyboards and running the Windows CE operating system. History The first hand-held device compatible with desktop IBM personal computers of the time was the Atari Portfolio of 1989. Other early models were the Poqet PC of 1989 and the Hewlett Packard HP 95LX of 1991, which ran the MS-DOS operating system. Other DOS-compatible hand-held computers also existed. After 2000 the handheld PC segment practically halted, replaced by other forms, although later communicators such as the Nokia E90 can be considered to be of the same class. Today, most modern handheld PCs are designed around portable gaming, and owing to the popularity of the Nintendo Switch and, subsequently, the Steam Deck, their designs are influenced by both of those devices. Microsoft's Handheld PC standard The Handheld PC (with capital "H") or H/PC for short was the official name of a hardware design for personal digital assistant (PDA) devices running Windows CE. The intent of Windows CE was to provide an environment for applications compatible with the Microsoft Windows operating system, on processors better suited to low-power operation in a portable device. It provides the appointment calendar functions usual for any PDA. Microsoft was wary of using the term "PDA" for the Handheld PC. Instead, Microsoft market
https://en.wikipedia.org/wiki/Order%20One%20Network%20Protocol
The OrderOne MANET Routing Protocol is an algorithm for computers communicating by digital radio in a mesh network to find each other, and send messages to each other along a reasonably efficient path. It was designed for, and promoted as working with wireless mesh networks. OON's designers say it can handle thousands of nodes, where most other protocols handle less than a hundred. OON uses hierarchical algorithms to minimize the total amount of transmissions needed for routing. Routing overhead is limited to between 1% and 5% of node-to-node bandwidth in any network and does not grow as the network size grows. The basic idea is that a network organizes itself into a tree. Nodes meet at the root of the tree to establish an initial route. The route then moves away from the root by cutting corners, as ant-trails do. When there are no more corners to cut, a nearly optimum route exists. This route is continuously maintained. Each process can be performed with localized minimal communication, and very small router tables. OORP requires about 200K of memory. A simulated network with 500 nodes transmitting at 200 bytes/second organized itself in about 20 seconds. As of 2004, OORP was patented or had other significant intellectual property restrictions. See the link below. Assumptions Each computer, or "node" of the network has a unique name, at least one network link, and a computer with some capacity to hold a list of neighbors. Organizing the tree The network nodes form a hierarchy by having each node select a parent. The parent is a neighbor node that is the next best step to the most other nodes. This method creates a hierarchy around nodes that are more likely to be present, and which have more capacity, and which are closer to the topological center of the network. The memory limitations of a small node are reflected in its small routing table, which automatically prevents it from being a preferred central node. At the top, one or two nodes are un
https://en.wikipedia.org/wiki/Photosynthetically%20active%20radiation
Photosynthetically active radiation (PAR) designates the spectral range (wave band) of solar radiation from 400 to 700 nanometers that photosynthetic organisms are able to use in the process of photosynthesis. This spectral region corresponds more or less with the range of light visible to the human eye. Photons at shorter wavelengths tend to be so energetic that they can be damaging to cells and tissues, but are mostly filtered out by the ozone layer in the stratosphere. Photons at longer wavelengths do not carry enough energy to allow photosynthesis to take place. Other living organisms, such as cyanobacteria, purple bacteria, and heliobacteria, can exploit solar light in slightly extended spectral regions, such as the near-infrared. These bacteria live in environments such as the bottom of stagnant ponds, sediment and ocean depths. Because of their pigments, they form colorful mats of green, red and purple. Chlorophyll, the most abundant plant pigment, is most efficient in capturing red and blue light. Accessory pigments such as carotenes and xanthophylls harvest some green light and pass it on to the photosynthetic process, but enough of the green wavelengths are reflected to give leaves their characteristic color. An exception to the predominance of chlorophyll is autumn, when chlorophyll is degraded (because it contains N and Mg) but the accessory pigments are not (because they only contain C, H and O) and remain in the leaf producing red, yellow and orange leaves. In land plants, leaves absorb mostly red and blue light in the first layer of photosynthetic cells because of chlorophyll absorbance. Green light, however, penetrates deeper into the leaf interior and can drive photosynthesis more efficiently than red light. Because green and yellow wavelengths can transmit through chlorophyll and the entire leaf itself, they play a crucial role in growth beneath the plant canopy. PAR measurement is used in agriculture, forestry and oceanography. One of the
https://en.wikipedia.org/wiki/Sensorimotor%20rhythm
The sensorimotor rhythm (SMR) is a brain wave. It is an oscillatory idle rhythm of synchronized electric brain activity. It appears in spindles in recordings of EEG, MEG, and ECoG over the sensorimotor cortex. For most individuals, the frequency of the SMR is in the range of 7 to 11 Hz. Meaning The meaning of SMR is not fully understood. Phenomenologically, a person is producing a stronger SMR amplitude when the corresponding sensorimotor areas are idle, e.g. during states of immobility. SMR typically decreases in amplitude when the corresponding sensory or motor areas are activated, e.g. during motor tasks and even during motor imagery. Conceptually, SMR is sometimes mixed up with alpha waves of occipital origin, the strongest source of neural signals in the EEG. One reason might be, that without appropriate spatial filtering the SMR is very difficult to detect because it is usually flooded by the stronger occipital alpha waves. The feline SMR has been noted as being analogous to the human mu rhythm. Relevance in research Neurofeedback Neurofeedback training can be used to gain control over the SMR activity. Neurofeedback practitioners believe that this feedback enables the subject to learn the regulation of their own SMR. People with learning difficulties, ADHD, epilepsy, and autism may benefit from an increase in SMR activity via neurofeedback. Furthermore, in the sport domain, SMR neurofeedback training has been found to be useful to enhance the golf putting performance. In the field of Brain–Computer Interfaces (BCI), the deliberate modification of the SMR amplitude during motor imagery can be used to control external applications. See also Brain waves Delta wave – (0.1 – 3 Hz) Theta wave – (4 – 7 Hz) Alpha wave – (8 – 12 Hz) Mu wave – (7.5 – 12.5 Hz) SMR wave – (12.5 – 15.5 Hz) Beta wave – (12 – 31 Hz) Gamma wave – (32 – 100 Hz)
https://en.wikipedia.org/wiki/Stata
Stata (, , alternatively , occasionally stylized as STATA) is a general-purpose statistical software package developed by StataCorp for data manipulation, visualization, statistics, and automated reporting. It is used by researchers in many fields, including biomedicine, economics, epidemiology, and sociology. Stata was initially developed by Computing Resource Center in California and the first version was released in 1985. In 1993, the company moved to College Station, TX and was renamed Stata Corporation, now known as StataCorp. A major release in 2003 included a new graphics system and dialog boxes for all commands. Since then, a new version has been released once every two years. The current version is Stata 18, released in April 2023. Technical overview and terminology User interface From its creation, Stata has always employed an integrated command-line interface. Starting with version 8.0, Stata has included a graphical user interface based on Qt framework which uses menus and dialog boxes to give access to many built-in commands. The dataset can be viewed or edited in spreadsheet format. From version 11 on, other commands can be executed while the data browser or editor is opened. Data structure and storage Until the release of version 16, Stata could only open a single dataset at any one time. Stata allows for flexibility with assigning data types to data. Its compress command automatically reassigns data to data types that take up less memory without loss of information. Stata utilizes integer storage types which occupy only one or two bytes rather than four, and single-precision (4 bytes) rather than double-precision (8 bytes) is the default for floating-point numbers. Stata's data format is always tabular in format. Stata refers to the columns of tabular data as variables. Data format compatibility Stata can import data in a variety of formats. This includes ASCII data formats (such as CSV or databank formats) and spreadsheet formats (including
https://en.wikipedia.org/wiki/Comparative%20biology
Comparative biology uses natural variation and disparity to understand the patterns of life at all levels, from genes to communities, and the critical role of organisms in ecosystems. Comparative biology is a cross-lineage approach to understanding the phylogenetic history of individuals or higher taxa and the mechanisms and patterns that drive it. Comparative biology encompasses Evolutionary Biology, Systematics, Neontology, Paleontology, Ethology, Anthropology, and Biogeography as well as historical approaches to Developmental biology, Genomics, Physiology, Ecology and many other areas of the biological sciences. The comparative approach also has numerous applications in human health, genetics, biomedicine, and conservation biology. The biological relationships (phylogenies, pedigrees) are important for comparative analyses and are usually represented by a phylogenetic tree or cladogram to differentiate those features with single origins (homology) from those with multiple origins (homoplasy). See also Cladistics Comparative Anatomy Evolution Evolutionary Biology Systematics Bioinformatics Neontology Paleontology Phylogenetics Genomics Evolutionary biology Comparisons
https://en.wikipedia.org/wiki/Mathematical%20structure
In mathematics, a structure is a set endowed with some additional features on the set (e.g. an operation, relation, metric, or topology). Often, the additional features are attached or related to the set, so as to provide it with some additional meaning or significance. A partial list of possible structures are measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, events, equivalence relations, differential structures, and categories. Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures more richly. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set has both a topology feature and a group feature, such that these two features are related in a certain way, then the structure becomes a topological group. Mappings between sets which preserve structures (i.e., structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures. History In 1939, the French group with the pseudonym Nicolas Bourbaki saw structures as the root of mathematics. They first mentioned them in their "Fascicule" of Theory of Sets and expanded it into Chapter IV of the 1957 edition. They identified three mother structures: algebraic, topological, and order. Example: the real numbers The set of real numbers has several standard structures: An order: each number is either less than or greater than any other number. Algebraic structure: there are operations of multiplication and addition that make it into a field. A measure: intervals of the real line have a specific length, which can be extended to the Lebesgue measure on many of its subsets. A metric: there is
https://en.wikipedia.org/wiki/Pycnocline
A pycnocline is the cline or layer where the density gradient (∂ρ/∂z, the rate of change of density with depth) is greatest within a body of water. An ocean current is generated by forces such as breaking waves, temperature and salinity differences, wind, the Coriolis effect, and tides caused by the gravitational pull of celestial bodies. In addition, the physical properties of a pycnocline driven by density gradients also affect the flows and vertical profiles in the ocean. These changes can be connected to the transport of heat, salt, and nutrients through the ocean, and pycnocline diffusion controls upwelling. Below the mixed layer, a stable density gradient (or pycnocline) separates the upper and lower water, hindering vertical transport. This separation has important biological effects on the ocean and on marine organisms. However, vertical mixing across a pycnocline is a regular phenomenon in oceans, and occurs through shear-produced turbulence. Such mixing plays a key role in the transport of nutrients. Physical function Turbulent mixing produced by winds and waves transfers heat downward from the surface. In low and mid-latitudes, this creates a surface mixed layer of water of almost uniform temperature which may be a few meters deep to several hundred meters deep. Below this mixed layer, at depths of 200–300 m in the open ocean, the temperature begins to decrease rapidly down to about 1000 m. The water layer within which the temperature gradient is steepest is known as the permanent thermocline. The temperature difference through this layer may be as large as 20 °C, depending on latitude. The permanent thermocline coincides with a change in water density between the warmer, low-density surface waters and the underlying cold, dense bottom waters. The region of rapid density change is known as the pycnocline, and it acts as a barrier to vertical water circulation; thus it also affects the vertical distribution of certain chemicals which play a role in the biology of the seas. The sharp gradie
https://en.wikipedia.org/wiki/Polyphage
Polyphage are genomic multimers of bacteriophage in which multiple viral particles are all encapsulated, one after the other, within the same set of coat proteins. This phenomenon is characteristic of filamentous phage.
https://en.wikipedia.org/wiki/250%20%28number%29
250 (two hundred [and] fifty) is the natural number following 249 and preceding 251. 250 is also the sum of squares of the divisors of the number 14. 250 has the same digits and prime factors. Integers between 251 and 259 251 252 253 254 255 256 257 258 259
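The divisor property stated above is easy to verify directly; a quick illustrative check:

```python
# Divisors of 14 are 1, 2, 7 and 14; their squares sum to 250.
divisors_of_14 = [d for d in range(1, 15) if 14 % d == 0]
print(sum(d * d for d in divisors_of_14))   # 1 + 4 + 49 + 196 = 250
```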
https://en.wikipedia.org/wiki/221%20%28number%29
221 (two hundred [and] twenty-one) is the natural number following 220 and preceding 222. In mathematics Its factorization as 13 × 17 makes 221 the product of two consecutive prime numbers, the sixth smallest such product. 221 is a centered square number. In other fields In Texas hold 'em, the probability of being dealt pocket aces (the strongest possible outcome in the initial deal of two cards per player) is 1/221. Sherlock Holmes's home address: 221B Baker Street.
https://en.wikipedia.org/wiki/Klaus%20Hasselmann
Klaus Ferdinand Hasselmann (, born 25 October 1931) is a German oceanographer and climate modeller. He is Professor Emeritus at the University of Hamburg and former Director of the Max Planck Institute for Meteorology. He was awarded the 2021 Nobel Prize in Physics jointly with Syukuro Manabe and Giorgio Parisi. Hasselmann grew up in Welwyn Garden City, England and returned to Hamburg in 1949 to attend university. Throughout his career he has mainly been affiliated with the University of Hamburg and the Max Planck Institute for Meteorology, which he founded. He also spent five years in the United States as a professor at the Scripps Institution of Oceanography and the Woods Hole Oceanographic Institution, and a year as a visiting professor at the University of Cambridge. He is best known for developing the Hasselmann model of climate variability, where a system with a long memory (the ocean) integrates stochastic forcing, thereby transforming a white-noise signal into a red-noise one, thus explaining (without special assumptions) the ubiquitous red-noise signals seen in the climate (see, for example, the development of swell waves). Background Hasselmann was born in Hamburg, Germany (Weimar Republic). His father was an economist, journalist, and publisher, who was politically active for the Social Democratic Party of Germany (SDPG) from the 1920s. Due to his father's activity in the SDPG, the family emigrated to the United Kingdom in mid-1934 at the beginning of the Nazi era to escape the repressive regime and persecution of social democrats, and Klaus Hasselmann grew up in the U.K. from age 2. They lived in Welwyn Garden City north of London and his father worked as a journalist in the U.K. Although the Hasselmanns themselves were not Jewish, they lived in a close-knit community of mostly Jewish German emigrants and received assistance from the English Quakers when they arrived in the country. Klaus Hasselmann attended Elementary and Grammar School in Welwyn G
https://en.wikipedia.org/wiki/Sega%20Channel
The Sega Channel is a discontinued online game service developed by Sega for the Sega Genesis video game console, serving as a content delivery system. Launched on December 14, 1994, the Sega Channel was provided to the public by TCI and Time Warner Cable through cable television services by way of coaxial cable. It was a pay to play service, through which customers could access Genesis games online, play game demos, and get cheat codes. Lasting until July 31, 1998, the Sega Channel operated three years after the release of Sega's next generation console, the Sega Saturn. Though criticized for its poorly timed launch and high subscription fee, the Sega Channel has been praised for its innovations in downloadable content and impact on online game services. History Released in Japan as the Mega Drive in 1988, North America in 1989, and Europe and other regions as the Mega Drive in 1990, the Sega Genesis was Sega's entry into the 16-bit era of video game consoles. In 1990, Sega started its first internet-based service for Genesis, Sega Meganet, in Japan. Operating through a cartridge and a peripheral called the Mega Modem, it allowed Mega Drive owners to play 17 games online. A North American version, the "Tele-Genesis", was announced but never released. Another phone-based system, the Mega Anser, turned the Japanese Mega Drive into an online banking terminal. Due to Meganet's low number of games, high price, and the Mega Drive's lack of success in Japan, the system was a commercial failure. By 1992, the Mega Modem peripheral could be found in bargain bins at a reduced price, and a remodeled version of the Mega Drive released in 1993 removed the EXT 9-pin port, preventing connections to the Meganet service. In April 1993, Sega announced the Sega Channel service, which would use cable television services to deliver content. In the US, national testing began in June, and deployment began in December, with a complete US release in 1994. By June 1994, 21 cable compan
https://en.wikipedia.org/wiki/Smart%20gun
A smart gun, also called a smart-gun, or smartgun, is a firearm that can detect its authorized user(s) or something that is normally only possessed by its authorized user(s). The term is also used in science fiction to refer to various types of semi-automatic firearms. Smart guns have one or more systems that allow them to fire only when activated by an authorized user. Those systems typically employ RFID chips or other proximity tokens, fingerprint recognition, magnetic rings, or mechanical locks. They can thereby prevent accidental shootings, gun thefts, and criminal usage by persons not authorized to use the guns. Related to smart guns are other smart firearms safety devices such as biometric or RFID activated accessories and safes. Commercial availability No smart gun has ever been sold on the commercial market in the United States. The Armatix iP1, a .22 caliber handgun with an active RFID watch used to unlock it, is the most mature smart gun developed. It was briefly planned to be offered at a few retailers before being quickly withdrawn due to pressure from gun-rights advocates concerned that it would trigger the New Jersey Childproof Handgun Law. As of 2019, a number of startups and companies including Armatix, Biofire, LodeStar Firearms, and Swiss company SAAR are purportedly developing various smart handguns and rifles, but none have brought the technology to market. Reception Reception to the concept of smart gun technology has been mixed. There have been public calls to develop the technology, most notably from President Obama. Gun-rights groups including the National Rifle Association of America have expressed concerns that the technology could be mandated, and some firearms enthusiasts are concerned that the technology wouldn't be reliable enough to trust. National Rifle Association The NRA and its membership boycotted Smith & Wesson after it was revealed in 1999 that the company was developing a smart gun for the U.S. government. More recen
https://en.wikipedia.org/wiki/Josephus%20problem
In computer science and mathematics, the Josephus problem (or Josephus permutation) is a theoretical problem related to a certain counting-out game. Such games are used to pick out a person from a group, e.g. eeny, meeny, miny, moe. In the particular counting-out game that gives rise to the Josephus problem, a number of people are standing in a circle waiting to be executed. Counting begins at a specified point in the circle and proceeds around the circle in a specified direction. After a specified number of people are skipped, the next person is executed. The procedure is repeated with the remaining people, starting with the next person, going in the same direction and skipping the same number of people, until only one person remains, and is freed. The problem—given the number of people, starting point, direction, and number to be skipped—is to choose the position in the initial circle to avoid execution. History The problem is named after Flavius Josephus, a Jewish historian living in the 1st century. According to Josephus' firsthand account of the siege of Yodfat, he and his 40 soldiers were trapped in a cave by Roman soldiers. They chose suicide over capture, and settled on a serial method of committing suicide by drawing lots. Josephus states that by luck or possibly by the hand of God, he and another man remained until the end and surrendered to the Romans rather than killing themselves. This is the story given in Book 3, Chapter 8, part 7 of Josephus' The Jewish War (writing of himself in the third person): The details of the mechanism used in this feat are rather vague. According to James Dowdy and Michael Mays, in 1612 Claude Gaspard Bachet de Méziriac suggested the specific mechanism of arranging the men in a circle and counting by threes to determine the order of elimination. This story has been often repeated and the specific details vary considerably from source to source. For instance, Israel Nathan Herstein and Irving Kaplansky (1974) have Joseph
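The counting-out procedure described above is straightforward to simulate; the sketch below uses 0-indexed positions and removes every k-th remaining person, which matches the usual statement of the problem.

```python
# Simulate the counting-out game: every k-th remaining person is removed,
# and the position (0-indexed) of the lone survivor is returned.
def josephus_survivor(n, k):
    people = list(range(n))
    idx = 0
    while len(people) > 1:
        idx = (idx + k - 1) % len(people)
        people.pop(idx)
    return people[0]

# With 41 people counting by threes, position 30 (the 31st person) survives.
print(josephus_survivor(41, 3))   # -> 30
```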
https://en.wikipedia.org/wiki/Biclustering
Biclustering, block clustering, co-clustering or two-mode clustering is a data mining technique which allows simultaneous clustering of the rows and columns of a matrix. The term was first introduced by Boris Mirkin to name a technique introduced many years earlier, in 1972, by John A. Hartigan. Given a set of m samples represented by an n-dimensional feature vector, the entire dataset can be represented as m rows in n columns (i.e., an m × n matrix). The biclustering algorithm generates biclusters. A bicluster is a subset of rows which exhibit similar behavior across a subset of columns, or vice versa. Development Biclustering was originally introduced by John A. Hartigan in 1972. The term "biclustering" was then later used and refined by Boris G. Mirkin. This algorithm was not generalized until 2000, when Y. Cheng and George M. Church proposed a biclustering algorithm based on the mean squared residue score (MSR) and applied it to biological gene expression data. In 2001 and 2003, I. S. Dhillon published two algorithms applying biclustering to files and words. One version was based on bipartite spectral graph partitioning. The other was based on information theory. Dhillon assumed the loss of mutual information during biclustering was equal to the Kullback–Leibler distance (KL-distance) between P and Q, where P represents the distribution of files and feature words before biclustering, and Q is the distribution after biclustering. The KL-distance measures the difference between two probability distributions: KL = 0 when the two distributions are the same, and KL increases as the difference increases. Thus, the aim of the algorithm was to find the minimum KL-distance between P and Q. In 2004, Arindam Banerjee used a weighted Bregman distance instead of the KL-distance to design a biclustering algorithm suitable for any kind of matrix, unlike the KL-distance algorithm. To cluster more than two types of objects, in 2005, Bekkerman expanded the mutual information in Dhill
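For concreteness, the mean squared residue score at the heart of the Cheng and Church approach can be computed as below (a minimal sketch, not code from the paper; the function name and the use of NumPy are my own assumptions):

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Mean squared residue H(I, J) of the submatrix A[rows][:, cols];
    lower scores indicate a more coherent bicluster."""
    sub = A[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
    overall = sub.mean()                          # a_IJ
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

# A submatrix whose entries are a row effect plus a column effect scores 0.
A = np.add.outer([0.0, 1.0, 2.0], [10.0, 20.0, 30.0])
print(mean_squared_residue(A, [0, 1, 2], [0, 1, 2]))  # 0.0
```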
https://en.wikipedia.org/wiki/Reconciliation%20ecology
Reconciliation ecology is the branch of ecology which studies ways to encourage biodiversity in the human-dominated ecosystems of the anthropocene era. Michael Rosenzweig first articulated the concept in his book Win-Win Ecology, based on the theory that there is not enough area for all of earth's biodiversity to be saved within designated nature preserves. Therefore, humans should increase biodiversity in human-dominated landscapes. By managing for biodiversity in ways that do not decrease human utility of the system, it is a "win-win" situation for both human use and native biodiversity. The science is based in the ecological foundation of human land-use trends and species-area relationships. It has many benefits beyond protection of biodiversity, and there are numerous examples of it around the globe. Aspects of reconciliation ecology can already be found in management legislation, but there are challenges in both public acceptance and ecological success of reconciliation attempts. Theoretical basis Human land use trends Traditional conservation is based on "reservation and restoration"; reservation meaning setting pristine lands aside for the sole purpose of maintaining biodiversity, and restoration meaning returning human impacted ecosystems to their natural state. However, reconciliation ecologists argue that there is too great a proportion of land already impacted by humans for these techniques to succeed. While it is difficult to measure exactly how much land has been transformed by human use, estimates range from 39 to 50%. This includes agricultural land, pastureland, urban areas, and heavily harvested forest systems. An estimated 50% of arable land is already under cultivation. Land transformation has increased rapidly over the last fifty years, and is likely to continue to increase. Beyond direct transformation of land area, humans have impacted the global biogeochemical cycles, leading to human caused change in even the most remote areas. These inc
https://en.wikipedia.org/wiki/New%20York%20State%20Agricultural%20Experiment%20Station
The New York State Agricultural Experiment Station (NYSAES) at Geneva, Ontario County, New York State, is an agricultural experiment station operated by the New York State College of Agriculture and Life Sciences at Cornell University. In August 2018, the station was rebranded as Cornell AgriTech, but its official name remains unchanged. The Station is the sixth oldest institution of its kind in the country. History The New York State Agricultural Experiment Station was established by an Act of the New York State Legislature on June 26, 1880. More than 100 locations were considered, but a 125-acre parcel in Geneva was eventually chosen. In 1882, the State purchased the land, an Italianate villa, and all outbuildings from Nehemiah and Louisa Denton for $25,000. The villa was converted into the Station headquarters, now known as Parrott Hall. The new institution became operative on March 1, 1882. It would become known colloquially as the Geneva Experiment Station. An 1883 Report of the Board of Control of the NYSAES to the New York State Assembly stated that there were immediate and dire threats to State agricultural output caused by insect pests, bovine diseases, drought, soil nutrient exhaustion, and outward labor migration, and that an organization dedicated to staving off these threats was needed. Originally, farmers wanted the station to serve as a model farm. However, the first director, E. Lewis Sturtevant, immediately established the policy that the station was to conduct agricultural science research and to establish experimental plots, both of which would have little resemblance to commercial agriculture. Nevertheless, the primary mission of the Station has always been to serve those who produce and consume New York's agricultural products. In its early days, Station scientists, who were few in number, concentrated their efforts on dairy, horticulture, and evaluation of varieties of vegetables and field crops. In 1887, the program was broadened to include
https://en.wikipedia.org/wiki/Project%20Sherwood
Project Sherwood was the codename for a United States program in controlled nuclear fusion during the period it was classified. After 1958, when fusion research was declassified around the world, the project was reorganized as a separate division within the United States Atomic Energy Commission (AEC) and lost its codename. Sherwood developed out of a number of ad hoc efforts dating back to about 1951. Primary among these was the stellarator program at Princeton University, itself code-named Project Matterhorn. The weapons labs had since clamored to join the club: Los Alamos with its z-pinch efforts, Livermore with its magnetic mirror program and, later, Oak Ridge with its fuel injector efforts. By 1953 the combined budgets were increasing into the million dollar range, demanding some sort of oversight at the AEC level. The name "Sherwood" was suggested by Paul McDaniel, Deputy Director of the AEC. He noted that funding for the wartime Hood Building was being dropped and moved to the new program, so they were "robbing Hood to pay Friar Tuck", a reference to the British physicist and fusion researcher James L. Tuck. The connection to Robin Hood and Friar Tuck gave the project its name. Lewis Strauss strongly supported keeping the program secret until pressure from the United Kingdom led to a declassification effort at the 2nd Atoms for Peace meeting in the fall of 1958. After this time a number of purely civilian organizations also formed to organize meetings on the topic, with the American Physical Society organizing meetings under their Division of Plasma Physics. These meetings have been carried on to this day and were renamed the International Sherwood Fusion Theory Conference. The original Project Sherwood became simply the Controlled Thermonuclear Research program within the AEC and its follow-on organizations. Designs and concepts Research centered on three plasma confinement designs: the stellarator headed by Lyman Spitzer at the Princeton Plasma Physics Laboratory, the to
https://en.wikipedia.org/wiki/Functional%20selectivity
Functional selectivity (or “agonist trafficking”, “biased agonism”, “biased signaling”, "ligand bias" and “differential engagement”) is the ligand-dependent selectivity for certain signal transduction pathways relative to a reference ligand (often the endogenous hormone or peptide) at the same receptor. Functional selectivity can be present when a receptor has several possible signal transduction pathways. The degree to which each pathway is activated thus depends on which ligand binds to the receptor. Studies within the chemokine receptor system also suggest that GPCR biased agonism is physiologically relevant. For example, a beta-arrestin biased agonist of the chemokine receptor CXCR3 induced greater chemotaxis of T cells relative to a G protein biased agonist. Functional selectivity, or biased signaling, is most extensively characterized at Class A G protein coupled receptors (GPCRs). A number of biased agonists, such as those at muscarinic M2 receptors tested as analgesics or antiproliferative drugs, or those at opioid receptors that mediate pain, show potential at various receptor families to increase beneficial properties while reducing side effects. Pre-clinical studies with G protein biased agonists at the μ-opioid receptor show equivalent efficacy for treating pain with reduced risk for addictive potential and respiratory depression. However, this approach may be limited in practical terms. While Professor Bannister at Scripps and the University of Florida states on his profile page that his neuroscience projects include "[b]iased mu opioid agonists, aiming for a holy grail of sorts: to separate the robust pain relief provided by opiates from their many unwanted side effects" (italics added for emphasis), there are no studies which support the claim that such G protein-biased agonists do not initiate the LPS/MyD88/TLR4 pro-inflammatory cytokine response in a similar magnitude and manner as conventional non-biased opioids, which cause damage to virtua
https://en.wikipedia.org/wiki/LonTalk
LonTalk is a networking protocol originally developed by Echelon Corporation for networking devices over media such as twisted pair, powerlines, fiber optics, and RF. Popular for the automation of various functions in industrial control, home automation, transportation, and building systems such as lighting and HVAC (as in intelligent buildings), the protocol has now been adopted as an open international control networking standard in the ISO/IEC 14908 family of standards. Published through ISO/IEC JTC 1/SC 6, this standard specifies a multi-purpose control network protocol stack optimized for smart grid, smart building, and smart city applications. LonWorks LonTalk is part of the technology platform called LonWorks. Protocol The protocol is defined by ISO/IEC 14908.1 and published by ISO/IEC JTC 1/SC 6. The LonTalk protocol has also been ratified by standards-setting bodies in the following industries & regions: ANSI Standard ANSI/CEA 709.1 - Control networking (US) EN 14908 - Building controls (EU) GB/Z 20177.1-2006 - Control networking and building controls (China) IEEE 1473-L - Train controls (US) SEMI E54 - Semiconductor manufacturing equipment sensors & actuators (US) IFSF - International forecourt standard for EU petrol stations OSGP - A widely used protocol for smart grid devices built on ISO/IEC 14908.1 The protocol is only available from the official distribution organizations of each regional standards body or in the form of microprocessors manufactured by companies that have ported the standard to their respective chip designs. Security An April 2015 cryptanalysis paper claims to have found serious security flaws in the OMA Digest algorithm of the Open Smart Grid Protocol, which itself is built on the same EN 14908 foundations as LonTalk. The authors speculate that "every other LonTalk-derived standard" is similarly vulnerable to the key-recovery attacks described. See also BACnet -- A building automation and control protoc
https://en.wikipedia.org/wiki/Iron%20law%20of%20prohibition
The iron law of prohibition is a term coined by Richard Cowan in 1986 which posits that as law enforcement becomes more intense, the potency of prohibited substances increases. Cowan put it this way: "the harder the enforcement, the harder the drugs." This law is an application of the Alchian–Allen effect; Libertarian judge Jim Gray calls the law the "cardinal rule of prohibition", and notes that it is a powerful argument for the legalization of drugs. It is based on the premise that when drugs or alcohol are prohibited, they will be produced in black markets in more concentrated and powerful forms, because these more potent forms offer better efficiency in the business model—they take up less space in storage, less weight in transportation, and they sell for more money. Economist Mark Thornton writes that the iron law of prohibition undermines the argument in favor of prohibition, because the higher potency forms are less safe for the consumer. Findings Thornton published research showing that the potency of marijuana increased in response to higher enforcement budgets. He later expanded this research in his dissertation to include other illegal drugs and alcohol during Prohibition in the United States (1920–1933). The basic approach is based on the Alchian and Allen Theorem. This argument says that a fixed cost (e.g. a transportation fee) added to the price of two varieties of the same product (e.g. a high-quality red apple and a low-quality red apple) results in greater sales of the more expensive variety. When applied to rum-running, drug smuggling, and blockade running, the more potent products become the sole focus of the suppliers. Thornton notes that the greatest added cost in illegal sales is the avoidance of detection. Thornton says that if drugs are legalized, then consumers will begin to wean themselves off the higher potency forms, for instance with cocaine users buying coca leaves, and heroin users switching to opium. The popular shift from beer to wine to
https://en.wikipedia.org/wiki/Cryoscopic%20constant
In thermodynamics, the cryoscopic constant, Kf, relates molality to freezing point depression (which is a colligative property). It is the ratio of the latter to the former: ΔTf = i · Kf · b, where i is the van 't Hoff factor, the number of particles the solute splits into or forms when dissolved, and b is the molality of the solution. Through cryoscopy, a known constant can be used to calculate an unknown molar mass. The term "cryoscopy" comes from Greek and means "freezing measurement." Freezing point depression is a colligative property, so ΔTf depends only on the number of solute particles dissolved, not the nature of those particles. Cryoscopy is related to ebullioscopy, which determines the same value from the ebullioscopic constant (of boiling point elevation). The value of Kf, which depends on the nature of the solvent, can be found from the following equation: Kf = R·M·Tf² / ΔHfus, where R is the ideal gas constant, M is the molar mass of the solvent in kg mol−1, Tf is the freezing point of the pure solvent in kelvins, and ΔHfus represents the molar enthalpy of fusion of the solvent in J mol−1. The Kf for water is 1.853 K kg mol−1. See also List of boiling and freezing information of solvents
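As a numerical check (my own arithmetic, using standard reference values for water rather than figures quoted in the article):

$$K_f = \frac{R M T_f^2}{\Delta H_{\mathrm{fus}}} = \frac{(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(0.01802\ \mathrm{kg\,mol^{-1}})(273.15\ \mathrm{K})^2}{6010\ \mathrm{J\,mol^{-1}}} \approx 1.86\ \mathrm{K\,kg\,mol^{-1}},$$

in agreement with the quoted 1.853 K kg mol−1; a 0.1 mol kg−1 aqueous solution of a non-dissociating solute (i = 1) would then freeze about ΔTf = (1)(1.86)(0.1) ≈ 0.19 K below pure water.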
https://en.wikipedia.org/wiki/Remote%20pickup%20unit
A remote pickup unit or RPU is a radio system using special radio frequencies set aside for electronic news-gathering (ENG) and remote broadcasting. It can also be used for other types of point-to-point radio links. An RPU is used to send program material from a remote location to the broadcast station or network. Usually these systems use specialized radio equipment with high audio fidelity. One manufacturer, Marti, was best known for manufacturing remote pickup equipment, so much so that the name is often used to refer to a remote pickup unit regardless of the actual equipment manufacturer. Today much remote broadcasting uses digital audio systems fed over ISDN telephone lines. This method is favored because of the reliability of telephone lines versus a radio link back to the station. The radio RPU remains the more favored option for ENG, however, because of its flexibility. Footnotes Broadcast engineering
https://en.wikipedia.org/wiki/Linear%20bounded%20automaton
In computer science, a linear bounded automaton (plural linear bounded automata, abbreviated LBA) is a restricted form of Turing machine. Operation A linear bounded automaton is a Turing machine that satisfies the following three conditions: Its input alphabet includes two special symbols, serving as left and right endmarkers. Its transitions may not print other symbols over the endmarkers. Its transitions may neither move to the left of the left endmarker nor to the right of the right endmarker. In other words: instead of having potentially infinite tape on which to compute, computation is restricted to the portion of the tape containing the input plus the two tape squares holding the endmarkers. An alternative, less restrictive definition is as follows: Like a Turing machine, an LBA possesses a tape made up of cells that can contain symbols from a finite alphabet, a head that can read from or write to one cell on the tape at a time and can be moved, and a finite number of states. An LBA differs from a Turing machine in that while the tape is initially considered to have unbounded length, only a finite contiguous portion of the tape, whose length is a linear function of the length of the initial input, can be accessed by the read/write head; hence the name linear bounded automaton. This limitation makes an LBA a somewhat more accurate model of a real-world computer than a Turing machine, whose definition assumes unlimited tape. The strong and the weaker definition lead to the same computational abilities of the respective automaton classes, by the same argument used to prove the linear speedup theorem. LBA and context-sensitive languages Linear bounded automata are acceptors for the class of context-sensitive languages. The only restriction placed on grammars for such languages is that no production maps a string to a shorter string. Thus no derivation of a string in a context-sensitive language can contain a sentential form longer than the string
https://en.wikipedia.org/wiki/Random%20amplification%20of%20polymorphic%20DNA
Random amplified polymorphic DNA (RAPD), pronounced "rapid", is a type of polymerase chain reaction (PCR), but the segments of DNA that are amplified are random. The scientist performing RAPD creates several arbitrary, short primers (10–12 nucleotides), then proceeds with the PCR using a large template of genomic DNA, hoping that fragments will amplify. By resolving the resulting patterns, a semi-unique profile can be gleaned from an RAPD reaction. No knowledge of the DNA sequence of the targeted genome is required, as the primers will bind somewhere in the sequence, but it is not certain exactly where. This makes the method popular for comparing the DNA of biological systems that have not had the attention of the scientific community, or in a system in which relatively few DNA sequences are compared (it is not suitable for forming a cDNA databank). Because it relies on a large, intact DNA template sequence, it has some limitations in the use of degraded DNA samples. Its resolving power is much lower than targeted, species-specific DNA comparison methods, such as short tandem repeats. In recent years, RAPD has been used to characterize, and trace, the phylogeny of diverse plant and animal species. Introduction RAPD markers are decamer (10 nucleotides long) DNA fragments from PCR amplification of random segments of genomic DNA with a single primer of arbitrary nucleotide sequence and which are able to differentiate between genetically distinct individuals, although not necessarily in a reproducible way. It is used to analyze the genetic diversity of an individual by using random primers. Due to problems in experiment reproducibility, many scientific journals do not accept experiments merely based on RAPDs anymore. RAPD requires only one primer for amplification. How it works After amplification with PCR, samples are loaded into a gel (either agarose or polyacrylamide) for gel electrophoresis. The differing sizes created through random amplification will sepa
https://en.wikipedia.org/wiki/Cascode%20voltage%20switch%20logic
Cascode Voltage Switch Logic (CVSL) refers to a CMOS-type logic family which is designed for certain advantages. It requires mainly N-channel MOSFET transistors to implement the logic using true and complementary input signals, and also needs two P-channel transistors at the top to pull one of the outputs high. This logic family is also known as Differential Cascode Voltage Switch Logic (DCVS or DCVSL). See also Logic family
https://en.wikipedia.org/wiki/Skeleton%20%28category%20theory%29
In mathematics, a skeleton of a category is a subcategory that, roughly speaking, does not contain any extraneous isomorphisms. In a certain sense, the skeleton of a category is the "smallest" equivalent category, which captures all "categorical properties" of the original. In fact, two categories are equivalent if and only if they have isomorphic skeletons. A category is called skeletal if isomorphic objects are necessarily identical. Definition A skeleton of a category C is an equivalent category D in which no two distinct objects are isomorphic. It is generally considered to be a subcategory. In detail, a skeleton of C is a category D such that: D is a subcategory of C: every object of D is an object of C for every pair of objects d1 and d2 of D, the morphisms in D are morphisms in C, i.e. and the identities and compositions in D are the restrictions of those in C. The inclusion of D in C is full, meaning that for every pair of objects d1 and d2 of D we strengthen the above subset relation to an equality: The inclusion of D in C is essentially surjective: Every C-object is isomorphic to some D-object. D is skeletal: No two distinct D-objects are isomorphic. Existence and uniqueness It is a basic fact that every small category has a skeleton; more generally, every accessible category has a skeleton. (This is equivalent to the axiom of choice.) Also, although a category may have many distinct skeletons, any two skeletons are isomorphic as categories, so up to isomorphism of categories, the skeleton of a category is unique. The importance of skeletons comes from the fact that they are (up to isomorphism of categories), canonical representatives of the equivalence classes of categories under the equivalence relation of equivalence of categories. This follows from the fact that any skeleton of a category C is equivalent to C, and that two categories are equivalent if and only if they have isomorphic skeletons. Examples The category Set of all sets h
https://en.wikipedia.org/wiki/Nikolay%20Danilevsky
Nikolay Yakovlevich Danilevsky (; – ) was a Russian naturalist, economist, ethnologist, philosopher, historian and ideologue of pan-Slavism and the Slavophile movement. He expounded a circular view of world history. He is remembered also for his opposition to Charles Darwin's theory of evolution and for his theory of historical-cultural types. Life Danilevsky was born in the village of Oberets in Oryol Governorate. As a member of a noble family, he was educated at the Tsarskoye Selo Lyceum. After graduation, he went on to an appointment with the Military Ministry Office. Dissatisfied with the prospect of a military career, he began to attend the University of St Petersburg, where he studied physics and mathematics. Having passed his master's exams, Danilevsky prepared to defend his thesis on the flora of the Black Sea area of European Russia but in 1849 he was arrested there for his membership in the Petrashevsky Circle, which studied the work of French socialists and included Fyodor Dostoevsky. Its most active members were sentenced to death, later commuted to life imprisonment. Danilevsky was imprisoned for 100 days in the Peter and Paul Fortress and then was sent to live under police surveillance in Vologda, where he worked in provincial administration. In 1852, he was appointed to an expedition, led by Karl Ernst von Baer, to assess the condition of the fishing industry on the Volga and the Caspian Sea. The expedition lasted four years, and Danilevsky was then reassigned to the Agricultural Department of the State Property Ministry. For over 20 years, he was responsible for expeditions to the White Sea, the Black Sea, the Azov Sea, the Caspian Sea and the Arctic Ocean. The expertise that he gained from the expeditions led to the publication of his 1872 book, Examination of Fishery Conditions in Russia. Aside from his work on fisheries and the seal trade, he was the head of the commission setting the rules for the use of running water in Crimea from 1872 t
https://en.wikipedia.org/wiki/Eumycetoma
Eumycetoma, also known as Madura foot, is a persistent fungal infection of the skin and the tissues just under the skin, affecting most commonly the feet, although it can occur in hands and other body parts. It starts as a painless wet nodule, which may be present for years before ulceration, swelling, grainy discharge and weeping from sinuses and fistulae, followed by bone deformity. Several fungi can cause eumycetoma, including: Madurella mycetomatis, Madurella grisea, Leptosphaeria senegalensis, Curvularia lunata, Scedosporium apiospermum, Neotestudina rosatii, and Acremonium and Fusarium species. Diagnosis is by biopsy, visualising the fungi under the microscope and culture. Medical imaging may reveal extent of bone involvement. Other tests include ELISA, immunodiffusion, and DNA Barcoding. Treatment includes surgical removal of affected tissue and antifungal medicines. After treatment, recurrence is common. Sometimes, amputation is required. The infection occurs generally in the tropics, and is endemic in Sub-Saharan Africa, especially Sudan, India, parts of South America and Mexico. Few cases have been reported across North Africa. Mycetoma is probably low-endemic to Egypt with predilection for eumycetoma. In 2016, the World Health Organization recognised eumycetoma as a neglected tropical disease. Signs and symptoms The initial lesion is a small swelling under the skin following minor trauma. It appears as a painless wet nodule, which may be present for years before ulceration, swelling and weeping from sinuses, followed by bone deformity. The sinuses discharge a grainy liquid of fungal colonies. These grains are usually black or white. Destruction of deeper tissues, and deformity and loss of function in the affected limbs may occur in later stages. It tends to occur in one foot. Mycetoma due to bacteria has similar clinical features. Causes Eumycetoma is a type of mycetoma caused by fungi. Mycetoma caused by bacteria from the phylum Actinomycetes is
https://en.wikipedia.org/wiki/Neutron%20star%20spin-up
Neutron star spin-up is the name given to the increase in rotational speed over time first noted in Cen X-3 and Her X-1 but now observed in other X-ray pulsars. In the case of Cen X-3, the pulse period is decreasing over a timescale of 3400 years (defined as P/Ṗ, where P is the rotation period and Ṗ is the rate of change in the rotation period). Ever since the detection of the first millisecond pulsar (MSP), it has been theorized that MSPs are neutron stars that have been spun up by accretion in a close binary system. The change in rotational period of the neutron star comes from the transition region between the magnetosphere and the plasma flow from the companion star. In this context the magnetosphere is defined as the region of space surrounding the neutron star, in which the magnetic field determines the motion of the plasma. Inside the magnetic field, the plasma will eventually co-rotate with the neutron star, while in the transition region angular momentum from the accretion disk will be transferred via the magnetic field to the neutron star, leading to the spin-up. See also Stellar spin-down
https://en.wikipedia.org/wiki/Pseudorandom%20generator
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution. Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size. It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory. Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions. Definition Let be a class of functions. These functions are the statistical tests that the pseudorandom generator will try to fool, and they are usually algorithms. Sometimes the statistical tests are also called adversaries or distinguishers. The notation in the codomain of the functions is the Kleene star. A function with is a pseudorandom generator against with bias if, for every in , the statistical distance between the distributions and is at most , where is the uniform distribution on . The quantity is called the seed length and the quantity is called the stretch of the pseudorandom generator. A pseudorandom generator against a family of adversaries with bias is a family of pseudorandom generators , where is a pseudorandom generator against with bias and seed length . In most applications, the family represents some model of computation or some set of algorithms, and one is interested in designing a pseudorandom generator with small seed length and bias, and such that the output of the generator can be computed by the same s
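Spelled out in the usual notation (the formulas appear to have been lost in extraction; the following is the standard form of the definition, supplied here as a reconstruction rather than a quotation):

$$G : \{0,1\}^{\ell} \to \{0,1\}^{m}, \qquad \ell < m,$$

is a pseudorandom generator against a class $\mathcal{A}$ with bias $\epsilon$ if, for every $A \in \mathcal{A}$, the statistical distance between the distributions $A(G(U_{\ell}))$ and $A(U_{m})$ is at most $\epsilon$, where $U_{k}$ denotes the uniform distribution on $\{0,1\}^{k}$; here $\ell$ is the seed length and the excess of $m$ over $\ell$ measures the stretch.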
https://en.wikipedia.org/wiki/Malaise%20trap
A Malaise trap is a large, tent-like structure used for trapping, killing, and preserving flying insects, particularly Hymenoptera and Diptera. The trap is made of a material such as PET (polyester) netting and can be various colours. Insects fly into the tent wall and are funneled into a collecting vessel attached to its highest point. It was invented by René Malaise in 1934. Structure Many versions of the Malaise trap are used, but the basic structure consists of a tent with a large opening at the bottom for insects to fly into and a tall central wall that directs the flying insects upward to a cylinder containing a killing agent. The chemical varies according to purpose and access. Conventionally, cyanide was used inside the jar with an absorbent material. However, due to restrictions, many people use ethanol. Ethanol damages some flying insects such as lepidopterans, but most people use the malaise trap primarily for hymenopterans and dipterans. In addition, the ethanol keeps the specimens preserved for a longer period of time. Other dry killing agents including no-pest strips (dichlorvos) and ethyl acetate need to be checked more regularly. Design details Cylinder When choosing a Malaise trap design, the types of insects to catch must be considered. The opening to the cylinder is of key importance. Typically, the opening is around , and can vary according to the size of insect desired. If using a dry agent, a smaller hole results in a faster death, limiting the amount of damage a newly caught insect can inflict on older, fragile specimens. In ethanol, this is less of a concern. Larger holes potentially allow in more butterflies, moths, and dragonflies. Location Placement of the trap is very important. It should be positioned to maximize the number of flying insects that pass through the opening. This is determined by the natural features of the site. One should evaluate topography, vegetation, wind, and water. For example, if a wide corridor in a
https://en.wikipedia.org/wiki/Creatio%20ex%20nihilo
(Latin for "creation out of nothing") is the doctrine that matter is not eternal but had to be created by some divine creative act. It is a theistic answer to the question of how the universe came to exist. It is in contrast to Ex nihilo nihil fit or "nothing comes from nothing", which means that all things were formed from preexisting things; an idea by the Greek philosopher Parmenides (c. 540 – c. 480 BC) about the nature of all things, and later more formally stated by Titus Lucretius Carus (c. 99 – c. 55 BC). Understanding of concept through contrast Ex nihilo nihil fit: uncreated matter Ex nihilo nihil fit means that nothing comes from nothing. In ancient creation myths, the universe is formed from eternal formless matter, namely the dark and still primordial ocean of chaos. In Sumerian myth this cosmic ocean is personified as the goddess Nammu "who gave birth to heaven and earth" and had existed forever; in the Babylonian creation epic Enuma Elish pre-existent chaos is made up of fresh-water Apsu and salt-water Tiamat, and from Tiamat the god Marduk created Heaven and Earth; in Egyptian creation myths a pre-existent watery chaos personified as the god Nun and associated with darkness, gave birth to the primeval hill (or in some versions a primeval lotus flower, or in others a celestial cow); and in Greek traditions the ultimate origin of the universe, depending on the source, is sometimes Oceanus (a river that circles the Earth), Night, or water. To these can be added the account of the Book of Genesis, which opens with God creating the heavens and the earth, separating and restraining the waters, not creating the waters themselves out of nothing. Alternately, whether this is truly 'ex nihilo', is unclear. The Hebrew sentence which opens Genesis, Bereshit bara Elohim et hashamayim ve'et ha'aretz, can be interpreted in at least three ways: As a statement that the cosmos had an absolute beginning (In the beginning, God created the heavens and earth). As
https://en.wikipedia.org/wiki/Maximum%20entropy%20probability%20distribution
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time. Definition of entropy and differential entropy If is a discrete random variable with distribution given by then the entropy of is defined as If is a continuous random variable with probability density , then the differential entropy of is defined as The quantity is understood to be zero whenever . This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing will also maximize the more general forms. The base of the logarithm is not important as long as the same one is used consistently: change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists will often prefer the natural logarithm, resulting in a unit of nats for the entropy. The choice of the measure is however crucial in determining the entropy and the resulting maximum entropy distribution, even though the usual recourse to the Lebesgue measure is often defended as "natural". Distributions with measured constants Many statistical distributions of applicable interest are those for which the
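For reference, the stripped formulas are presumably the standard Shannon and differential entropies (a reconstruction, not a quotation): for a discrete random variable $X$ with $\Pr[X = x_i] = p_i$,

$$H(X) = -\sum_i p_i \log p_i ,$$

and for a continuous random variable with density $p(x)$,

$$h(X) = -\int p(x) \log p(x)\, dx ,$$

with the convention that $p \log p = 0$ whenever $p = 0$.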
https://en.wikipedia.org/wiki/ALGOL%2068C
ALGOL 68C is an imperative computer programming language, a dialect of ALGOL 68, that was developed by Stephen R. Bourne and Michael Guy to program the Cambridge Algebra System (CAMAL). The initial compiler was written in the Princeton Syntax Compiler (PSYCO, by Edgar T. Irons) that was implemented by J. H. Mathewman at Cambridge. ALGOL 68C was later used for the CHAOS OS for the capability-based security CAP computer at University of Cambridge in 1971. Other early contributors were Andrew D. Birrell and Ian Walker. Subsequent work was done on the compiler after Bourne left Cambridge University in 1975. Garbage collection was added, and the code base is still running on an emulated OS/MVT using Hercules. The ALGOL 68C compiler generated output in ZCODE, a register-based intermediate language, which could then be either interpreted or compiled to a native executable. This ability to interpret or compile ZCODE encouraged the porting of ALGOL 68C to many different computing platforms. Aside from the CAP computer, the compiler was ported to systems including Conversational Monitor System (CMS), TOPS-10, and Zilog Z80. Popular culture A very early predecessor of this compiler was used by Guy and Bourne to write the first Game of Life programs on the PDP-7 with a DEC 340 display. Various Liverpool Software Gazette issues detail the Z80 implementation. The compiler required about 120 KB of memory to run; hence the Z80's 64 KB memory is actually too small to run the compiler. So ALGOL 68C programs for the Z80 had to be cross-compiled from the larger CAP computer, or an IBM System/370 mainframe computer. Algol 68C and Unix Stephen Bourne subsequently reused ALGOL 68's if ~ then ~ else ~ fi, case ~ in ~ out ~ esac and for ~ while ~ do ~ od clauses in the common Unix Bourne shell, but with in's syntax changed, out removed, and od replaced with done (to avoid conflict with the od utility). After Cambridge, Bourne spent nine years at Bell Labs with the Version 7 Unix (Se
https://en.wikipedia.org/wiki/Free-fall%20time
The free-fall time is the characteristic time that would take a body to collapse under its own gravitational attraction, if no other forces existed to oppose the collapse. As such, it plays a fundamental role in setting the timescale for a wide variety of astrophysical processes—from star formation to helioseismology to supernovae—in which gravity plays a dominant role. Derivation Infall to a point source of gravity It is relatively simple to derive the free-fall time by applying Kepler's Third Law of planetary motion to a degenerate elliptic orbit. Consider a point mass at distance from a point source of mass which falls radially inward to it. (Crucially, Kepler's Third Law depends only on the semi-major axis of the orbit, and does not depend on the eccentricity). A purely radial trajectory is an example of a degenerate ellipse with an eccentricity of 1 and semi-major axis . Therefore, the time it would take a body to fall inward, turn around, and return to its original position is the same as the period of a circular orbit of radius , by Kepler's Third Law: To see that the semi-major axis is , we must examine properties of orbits as they become increasingly elliptical. Kepler's First Law states that an orbit is an ellipse with the center of mass as one focus. In the case of a very small mass falling toward a very large mass , the center of mass is within the larger mass. The focus of an ellipse is increasingly off-center with increasing ellipticity. In the limiting case of a degenerate ellipse with an eccentricity of 1, the largest diameter of the orbit extends from the initial position of the infalling object to the point source of mass . In other words, the ellipse becomes a line of length . The semi-major axis is half the width of the ellipse along the long axis, which in the degenerate case becomes . If the free-falling body completed a full orbit, it would begin at distance from the point source mass , fall inward until it reached that point source,
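Filling in the symbols (a reconstruction consistent with the derivation described, with $R$ the initial separation and $M$ the central point mass): by Kepler's third law applied to the degenerate ellipse of semi-major axis $a = R/2$,

$$T = 2\pi\sqrt{\frac{a^{3}}{GM}} = 2\pi\sqrt{\frac{R^{3}}{8GM}} ,$$

and since the infalling body traverses only half of this degenerate "orbit" before reaching the central mass, the free-fall time is

$$t_{\mathrm{ff}} = \frac{T}{2} = \pi\sqrt{\frac{R^{3}}{8GM}} = \frac{\pi}{2}\sqrt{\frac{R^{3}}{2GM}} .$$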
https://en.wikipedia.org/wiki/Digital%20distribution
Digital distribution, also referred to as content delivery, online distribution, or electronic software distribution, among others, is the delivery or distribution of digital media content such as audio, video, e-books, video games, and other software. The term is generally used to describe distribution over an online delivery medium, such as the Internet, thus bypassing physical distribution methods, such as paper, optical discs, and VHS videocassettes. The term online distribution is typically applied to freestanding products; downloadable add-ons for other products are more commonly known as downloadable content. With the advancement of network bandwidth capabilities, online distribution became prominent in the 21st century, with prominent platforms such as Amazon Video, and Netflix's streaming service starting in 2007. Content distributed online may be streamed or downloaded, and often consists of books, films and television programs, music, software, and video games. Streaming involves downloading and using content at a user's request, or "on-demand", rather than allowing a user to store it permanently. In contrast, fully downloading content to a hard drive or other forms of storage media may allow offline access in the future. Specialist networks known as content delivery networks help distribute content over the Internet by ensuring both high availability and high performance. Alternative technologies for content delivery include peer-to-peer file sharing technologies. Alternatively, content delivery platforms create and syndicate content remotely, acting like hosted content management systems. Unrelated to the above, the term "digital distribution" is also used in film distribution to describe the distribution of content through physical digital media, in opposition to distribution by analog media such as photographic film and magnetic tape (see: digital cinema). Impact on traditional retail The rise of online distribution has provided controversy for t
https://en.wikipedia.org/wiki/Sun%20Ray
The Sun Ray was a stateless thin client computer (and associated software) aimed at corporate environments, originally introduced by Sun Microsystems in September 1999 and discontinued by Oracle Corporation in 2014. It featured a smart card reader, and several models had an integrated flat panel display. The idea of a stateless desktop was a significant shift from, and the eventual successor to, Sun's earlier line of diskless Java-only desktops, the JavaStation. Predecessor The concept began in Sun Microsystems Laboratories in 1997 as a project codenamed NetWorkTerminal (NeWT). The client was designed to be small, low cost, low power, and silent. It was based on the Sun Microelectronics MicroSPARC IIep. Other processors initially considered for it included Intel's StrongARM, Philips Semiconductors' TriMedia, and National Semiconductor's Geode. The MicroSPARC IIep was selected because of its high level of integration, good performance, low cost, and availability. NeWT included 8 MiB of EDO DRAM and 4 MiB of NOR flash. The graphics controller used was the ATI Rage 128 because of its low power, 2D rendering performance, and low cost. It also included an ATI video encoder for TV-out (removed in the Sun Ray 1), a Philips Semiconductor SAA7114 video decoder/scaler, Crystal Semiconductor audio CODEC, Sun Microelectronics Ethernet controller, PCI USB host interface with 4 port hub, and I²C smart card interface. The motherboard and daughtercard were housed in an off-the-shelf commercial small form-factor PC case with internal +12/+5VDC auto ranging power supply. NeWT was designed to have feature parity with a modern business PC in every way possible. Instead of a commercial operating system, the client ran a real-time operating system called "exec", which was originally developed in Sun Labs as part of an Ethernet-based security camera project codenamed NetCam. Fewer than 60 NeWTs were ever built and very few survived; one is in the collection of the Computer Histo
https://en.wikipedia.org/wiki/Golden-section%20search
The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)). Basic idea The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated. The diagram above illustrates a single step in the techniqu
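A minimal Python sketch of the search for a minimum (illustrative only; the function name, variable names, and the tolerance-based stopping rule are my own choices, not taken from any particular reference implementation):

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Narrow [a, b] around a minimum of a unimodal function f."""
    invphi = (math.sqrt(5) - 1) / 2              # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                              # minimum lies in [a, d]
            b, d, fd = d, c, fc                  # old c becomes the new d
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                    # minimum lies in [c, b]
            a, c, fc = c, d, fd                  # old d becomes the new c
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

print(golden_section_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```

Because the interior points sit at golden-ratio fractions of the interval, one of them can be reused after each shrink, so only one new function evaluation is needed per iteration; this is the efficiency property that the golden-ratio spacing guarantees.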
https://en.wikipedia.org/wiki/Pixel-art%20scaling%20algorithms
Pixel-art scaling algorithms are graphical filters that enhance hand-drawn 2D pixel art graphics. The re-scaling of pixel art is a specialist sub-field of image rescaling. As pixel-art graphics are usually in very low resolutions, they rely on careful placing of individual pixels, often with a limited palette of colors. This results in graphics that rely on a high amount of stylized visual cues to define complex shapes with very little resolution, down to individual pixels and making image scaling of pixel art a particularly difficult problem. A number of specialized algorithms have been developed to handle pixel-art graphics, as the traditional scaling algorithms do not take such perceptual cues into account. Since a typical application of this technology is improving the appearance of fourth-generation and earlier video games on arcade and console emulators, many are designed to run in real time for sufficiently small input images at 60-frames per second. This places constraints on the type of programming techniques that can be used for this sort of real-time processing. Many work only on specific scale factors: 2× is the most common, with 3×, 4×, 5× and 6× also present. Algorithms SAA5050 'Diagonal Smoothing' The Mullard SAA5050 Teletext character generator chip (1980) used a primitive pixel scaling algorithm to generate higher-resolution characters on screen from a lower-resolution representation from its internal ROM. Internally each character shape was defined on a 5×9 pixel grid, which was then interpolated by smoothing diagonals to give a 10×18 pixel character, with a characteristically angular shape, surrounded to the top and to the left by two pixels of blank space. The algorithm only works on monochrome source data, and assumes the source pixels will be logical true or false depending on whether they are 'on' or 'off'. Pixels 'outside the grid pattern' are assumed to be off. The algorithm works as follows: A B C --\ 1 2 D E F --/ 3 4 1 = B | (A &
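As a concrete example of this family of 2× scalers (not the SAA5050 rule, whose listing is cut off above), the following sketch implements the well-known EPX/Scale2x rule; the function and helper names are my own:

```python
def scale2x(img):
    """EPX / Scale2x: expand each source pixel into a 2x2 block, copying a
    neighbour's colour only where doing so preserves diagonal edges."""
    h, w = len(img), len(img[0])
    def px(y, x):                                  # clamp reads at the border
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            a, b = px(y - 1, x), px(y, x + 1)      # above, right
            c, d = px(y, x - 1), px(y + 1, x)      # left, below
            out[2 * y][2 * x]         = a if (c == a and c != d and a != b) else p
            out[2 * y][2 * x + 1]     = b if (a == b and a != c and b != d) else p
            out[2 * y + 1][2 * x]     = c if (d == c and d != b and c != a) else p
            out[2 * y + 1][2 * x + 1] = d if (b == d and b != a and d != c) else p
    return out
```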
https://en.wikipedia.org/wiki/Coulomb%20collision
A Coulomb collision is a binary elastic collision between two charged particles interacting through their own electric field. As with any inverse-square law, the resulting trajectories of the colliding particles are hyperbolic Keplerian orbits. This type of collision is common in plasmas where the typical kinetic energy of the particles is too large to produce a significant deviation from the initial trajectories of the colliding particles, and the cumulative effect of many collisions is considered instead. The importance of Coulomb collisions was first pointed out by Lev Landau in 1936, who also derived the corresponding kinetic equation which is known as the Landau kinetic equation. Simplified mathematical treatment for plasmas In a plasma, a Coulomb collision rarely results in a large deflection. The cumulative effect of the many small angle collisions, however, is often larger than the effect of the few large angle collisions that occur, so it is instructive to consider the collision dynamics in the limit of small deflections. We can consider an electron of charge and mass passing a stationary ion of charge and much larger mass at a distance with a speed . The perpendicular force is at the closest approach and the duration of the encounter is about . The product of these expressions divided by the mass is the change in perpendicular velocity: Note that the deflection angle is proportional to . Fast particles are "slippery" and thus dominate many transport processes. The efficiency of velocity-matched interactions is also the reason that fusion products tend to heat the electrons rather than (as would be desirable) the ions. If an electric field is present, the faster electrons feel less drag and become even faster in a "run-away" process. In passing through a field of ions with density , an electron will have many such encounters simultaneously, with various impact parameters (distance to the ion) and directions. The cumulative effect can be described
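Restoring the missing expressions in SI units (a reconstruction, with $b$ the impact parameter, $Ze$ and $e$ the ion and electron charges, and $m_e$, $v$ the electron mass and speed):

$$F_{\perp} \simeq \frac{Ze^{2}}{4\pi\varepsilon_{0} b^{2}}, \qquad \Delta t \simeq \frac{b}{v}, \qquad \Delta v_{\perp} \simeq \frac{F_{\perp}\,\Delta t}{m_{e}} = \frac{Ze^{2}}{4\pi\varepsilon_{0} m_{e} b v},$$

so the deflection angle $\theta \simeq \Delta v_{\perp}/v$ scales as $1/v^{2}$, which is why the fast particles are "slippery".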
https://en.wikipedia.org/wiki/Frobenius%20algebra
In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Georg Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory , . Jean Dieudonné used this to characterize Frobenius algebras . Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory. Definition A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form that satisfies the following equation: . This bilinear form is called the Frobenius form of the algebra. Equivalently, one may equip A with a linear functional such that the kernel of λ contains no nonzero left ideal of A. A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies . There is also a different, mostly unrelated notion of the symmetric algebra of a vector space. Nakayama automorphism For a Frobenius algebra A with σ as above, the automorphism ν of A such that is Nakayama automorphism associated to A and σ. Examples Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b)=tr(a·b) where tr denotes the trace. Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra. Every group ring k[G] of a finite group G over a field k is a s
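The conditions referred to, with the stripped formulas restored in one standard convention (a reconstruction; conventions for the Nakayama automorphism differ between sources by replacing ν with its inverse): the Frobenius form satisfies $\sigma(a \cdot b, c) = \sigma(a, b \cdot c)$; the equivalent linear functional can be taken as $\lambda(a) = \sigma(a, 1)$; the algebra is symmetric when $\sigma(a, b) = \sigma(b, a)$, equivalently $\lambda(a \cdot b) = \lambda(b \cdot a)$; and the Nakayama automorphism $\nu$ is characterized by $\sigma(a, b) = \sigma(b, \nu(a))$.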
https://en.wikipedia.org/wiki/Plasmodesma
Plasmodesmata (singular: plasmodesma) are microscopic channels which traverse the cell walls of plant cells and some algal cells, enabling transport and communication between them. Plasmodesmata evolved independently in several lineages, and species that have these structures include members of the Charophyceae, Charales, Coleochaetales and Phaeophyceae (which are all algae), as well as all embryophytes, better known as land plants. Unlike animal cells, almost every plant cell is surrounded by a polysaccharide cell wall. Neighbouring plant cells are therefore separated by a pair of cell walls and the intervening middle lamella, forming an extracellular domain known as the apoplast. Although cell walls are permeable to small soluble proteins and other solutes, plasmodesmata enable direct, regulated, symplastic transport of substances between cells. There are two forms of plasmodesmata: primary plasmodesmata, which are formed during cell division, and secondary plasmodesmata, which can form between mature cells. Similar structures, called gap junctions and membrane nanotubes, interconnect animal cells, and stromules form between plastids in plant cells. Formation Primary plasmodesmata are formed when fractions of the endoplasmic reticulum are trapped across the middle lamella as the new cell wall is synthesized between two newly divided plant cells. These eventually become the cytoplasmic connections between cells. At the formation site, the wall is not thickened further, and depressions or thin areas known as pits are formed in the walls. Pits normally pair up between adjacent cells. Plasmodesmata can also be inserted into existing cell walls between non-dividing cells (secondary plasmodesmata). Primary plasmodesmata The formation of primary plasmodesmata occurs during the part of the cellular division process where the endoplasmic reticulum and the new plate are fused together; this process results in the formation of a cytoplasmic pore (or cytoplasmic sleeve). The d
https://en.wikipedia.org/wiki/Louis%20Nirenberg
Louis Nirenberg (February 28, 1925 – January 26, 2020) was a Canadian-American mathematician, considered one of the most outstanding mathematicians of the 20th century. Nearly all of his work was in the field of partial differential equations. Many of his contributions are now regarded as fundamental to the field, such as his strong maximum principle for second-order parabolic partial differential equations and the Newlander-Nirenberg theorem in complex geometry. He is regarded as a foundational figure in the field of geometric analysis, with many of his works being closely related to the study of complex analysis and differential geometry. Biography Nirenberg was born in Hamilton, Ontario to Ukrainian Jewish immigrants. He attended Baron Byng High School and McGill University, completing his BS in both mathematics and physics in 1945. Through a summer job at the National Research Council of Canada, he came to know Ernest Courant's wife Sara Paul. She spoke to Courant's father, the eminent mathematician Richard Courant, for advice on where Nirenberg should apply to study theoretical physics. Following their discussion, Nirenberg was invited to enter graduate school at the Courant Institute of Mathematical Sciences at New York University. In 1949, he obtained his doctorate in mathematics, under the direction of James Stoker. In his doctoral work, he solved the "Weyl problem" in differential geometry, which had been a well-known open problem since 1916. Following his doctorate, he became a professor at the Courant Institute, where he remained for the rest of his career. He was the advisor of 45 PhD students, and published over 150 papers with a number of coauthors, including notable collaborations with Henri Berestycki, Haïm Brezis, Luis Caffarelli, and Yanyan Li, among many others. He continued to carry out mathematical research until the age of 87. On January 26, 2020, Nirenberg died at the age of 94. Nirenberg's work was widely recognized, including the followi
https://en.wikipedia.org/wiki/Butterfat
Butterfat or milkfat is the fatty portion of milk. Milk and cream are often sold according to the amount of butterfat they contain.
Composition
Butterfat is mainly composed of triglycerides. Each triglyceride contains three fatty acids. Butterfat triglycerides contain the following amounts of fatty acids (by mass fraction):
Butterfat contains about 3% trans fat, which is slightly less than 0.5 grams per US tablespoon. Trans fats occur naturally in meat and milk from ruminants. The predominant kind of trans fat found in milk is vaccenic acid. Trans fats may also be found in some industrially produced foods, such as shortenings obtained by hydrogenation of vegetable oils. In light of recognized scientific evidence, nutritional authorities consider all trans fats equally harmful for health and recommend that their consumption be reduced to trace amounts. However, two Canadian studies have shown that vaccenic acid could be beneficial compared to vegetable shortenings containing trans fats, or a mixture of pork lard and soy fat, by lowering total LDL and triglyceride levels. A study by the US Department of Agriculture showed that vaccenic acid raises both HDL and LDL cholesterol, whereas industrial trans fats only raise LDL with no beneficial effect on HDL.
U.S. standards
In the U.S., there are federal standards for butterfat content of dairy products. Many other countries also have standards for minimum fat levels in dairy products. Commercial products generally contain the minimum legal amount of fat, with any excess being removed to make cream, a valuable commodity.
Milks
Non-fat milk, also labeled "fat-free milk" or "skim milk", contains less than 0.5% fat
Low-fat milk is 1% fat
Reduced-fat milk is 2% fat
Whole milk contains at least 3.25% fat
Cheeses
Dry curd and nonfat cottage cheese contain less than 0.5% fat
Lowfat cottage cheese contains 0.5–2% fat
Cottage cheese contains at least 4% fat
Swiss cheese contains at least 43% fat relative t
https://en.wikipedia.org/wiki/Pteridospermatophyta
Pteridospermatophyta, also called "pteridosperms" or "seed ferns", are a polyphyletic grouping of extinct seed-producing plants. The earliest fossil evidence for plants of this type is the genus Elkinsia and the Lyginopterids of late Devonian age. They flourished particularly during the Carboniferous and Permian periods. Pteridosperms declined during the Mesozoic Era and had mostly disappeared by the end of the Cretaceous Period, though Komlopteris seem to have survived into Eocene times, based on fossil finds in Tasmania.
With regard to the enduring utility of this division, many palaeobotanists still use the pteridosperm grouping in an informal sense to refer to the seed plants that are not angiosperms, coniferoids (conifers or cordaites), ginkgophytes or cycadophytes (cycads or bennettites). This is particularly useful for extinct seed plant groups whose systematic relationships remain speculative, as they can be classified as pteridosperms with no valid implications being made as to their systematic affinities. Also, from a purely curatorial perspective, the term pteridosperms is a useful shorthand for describing the fern-like fronds that were probably produced by seed plants, which are commonly found in many Palaeozoic and Mesozoic fossil floras.
History of classification
The concept of pteridosperms goes back to the late 19th century, when palaeobotanists came to realise that many Carboniferous fossils resembling fern fronds had anatomical features more reminiscent of the modern-day seed plants, the cycads. In 1899 the German palaeobotanist Henry Potonié coined the term "Cycadofilices" ("cycad-ferns") for such fossils, suggesting that they were a group of non-seed plants intermediate between the ferns and cycads. Shortly afterwards, the British palaeobotanists Frank Oliver and Dukinfield Henry Scott (with the assistance of Oliver's student at the time, Marie Stopes) made the critical discovery that some of these fronds (genus Lyginopteris) were associated wit
https://en.wikipedia.org/wiki/Coherent%20duality
In mathematics, coherent duality is any of a number of generalisations of Serre duality, applying to coherent sheaves, in algebraic geometry and complex manifold theory, as well as some aspects of commutative algebra that are part of the 'local' theory.
The historical roots of the theory lie in the idea of the adjoint linear system of a linear system of divisors in classical algebraic geometry. This was re-expressed, with the advent of sheaf theory, in a way that made an analogy with Poincaré duality more apparent. Then according to a general principle, Grothendieck's relative point of view, the theory of Jean-Pierre Serre was extended to a proper morphism; Serre duality was recovered as the case of the morphism of a non-singular projective variety (or complete variety) to a point. The resulting theory is now sometimes called Serre–Grothendieck–Verdier duality, and is a basic tool in algebraic geometry. A treatment of this theory, Residues and Duality (1966) by Robin Hartshorne, became a reference. One concrete spin-off was the Grothendieck residue.
To go beyond proper morphisms, as for the versions of Poincaré duality that are not for closed manifolds, requires some version of the compact support concept. This was addressed in SGA2 in terms of local cohomology and Grothendieck local duality, and was developed further in subsequent work. The Greenlees–May duality, first formulated in 1976 by Ralf Strebel and in 1978 by Eben Matlis, is part of the continuing consideration of this area.
Adjoint functor point of view
While Serre duality uses a line bundle or invertible sheaf as a dualizing sheaf, the general theory (it turns out) cannot be quite so simple. (More precisely, it can, but at the cost of imposing the Gorenstein ring condition.) In a characteristic turn, Grothendieck reformulated general coherent duality as the existence of a right adjoint functor f^!, called the twisted or exceptional inverse image functor, to the higher direct image with compact support functor Rf_!. Higher direct images a
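For orientation, the adjoint-functor formulation can be written out explicitly. The display below is a sketch supplied here, not text from the article, and assumes f : X → Y is a proper morphism of Noetherian schemes, with complexes of quasi-coherent sheaves F on X and G on Y.
% Sketch, under the assumptions stated above: Grothendieck–Verdier duality
% as an adjunction between Rf_* ( = Rf_! for f proper) and f^!.
\[
  \mathbf{R}\mathrm{Hom}_{Y}\bigl(\mathbf{R}f_{*}\mathcal{F},\ \mathcal{G}\bigr)
  \;\simeq\;
  \mathbf{R}f_{*}\,\mathbf{R}\mathrm{Hom}_{X}\bigl(\mathcal{F},\ f^{!}\mathcal{G}\bigr),
  \qquad
  \mathrm{Hom}_{D(Y)}\bigl(\mathbf{R}f_{*}\mathcal{F},\ \mathcal{G}\bigr)
  \;\cong\;
  \mathrm{Hom}_{D(X)}\bigl(\mathcal{F},\ f^{!}\mathcal{G}\bigr).
\]
% Specialising to the structure morphism f : X -> Spec k of a smooth projective
% variety of dimension n and G = k gives f^! k ≃ ω_X[n], from which the
% classical Serre duality pairing follows.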
https://en.wikipedia.org/wiki/Finalizer
In computer science, a finalizer or finalize method is a special method that performs finalization, generally some form of cleanup. A finalizer is executed during object destruction, prior to the object being deallocated, and is complementary to an initializer, which is executed during object creation, following allocation. Finalizers are strongly discouraged by some, due to difficulty in proper use and the complexity they add, and alternatives are suggested instead, mainly the dispose pattern (see problems with finalizers).
The term finalizer is mostly used in object-oriented and functional programming languages that use garbage collection, of which the archetype is Java. This is contrasted with a destructor, which is a method called for finalization in languages with deterministic object lifetimes, archetypically C++. These are generally exclusive: a language will have either finalizers (if automatically garbage collected) or destructors (if manually memory managed), but in rare cases a language may have both, as in C++/CLI and D, and in the case of reference counting (instead of tracing garbage collection), terminology varies. In technical use, finalizer may also be used to refer to destructors, as these also perform finalization, and some subtler distinctions are drawn – see terminology. The term final also indicates a class that cannot be inherited; this is unrelated.
Terminology
The terminology of finalizer and finalization versus destructor and destruction varies between authors and is sometimes unclear. In common use, a destructor is a method called deterministically on object destruction, and the archetype is C++ destructors; while a finalizer is called non-deterministically by the garbage collector, and the archetype is Java finalize methods. For languages that implement garbage collection via reference counting, terminology varies, with some languages such as Objective-C and Perl using destructor, and other languages such as Python using finalizer (p
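As a concrete illustration of the contrast drawn above, the following sketch (not taken from the article) shows a Java class offering both deterministic cleanup via the dispose pattern (AutoCloseable.close) and a last-resort finalizer. The class and field names are hypothetical, and finalize has been deprecated since Java 9, so this is illustrative rather than recommended practice.
// Sketch only: a hypothetical resource holder illustrating the dispose
// pattern (deterministic) alongside a finalizer (non-deterministic).
public class TempFileHolder implements AutoCloseable {
    private java.io.RandomAccessFile file;

    public TempFileHolder(String path) throws java.io.IOException {
        this.file = new java.io.RandomAccessFile(path, "rw");
    }

    // Dispose pattern: cleanup runs at a point the programmer chooses,
    // typically when a try-with-resources block exits.
    @Override
    public void close() throws java.io.IOException {
        if (file != null) {
            file.close();
            file = null;
        }
    }

    // Finalizer: invoked (if at all) by the garbage collector some time before
    // the object is reclaimed; deprecated since Java 9, shown only to
    // illustrate the concept described in the text above.
    @Override
    @SuppressWarnings({"deprecation", "removal"})
    protected void finalize() throws Throwable {
        try {
            close();   // last-chance cleanup if close() was never called
        } finally {
            super.finalize();
        }
    }
}

// Deterministic use is preferred:
//     try (TempFileHolder h = new TempFileHolder("example.tmp")) {
//         // ... use h ...
//     }   // close() runs here, whether or not an exception was thrown
The try-with-resources form makes the cleanup point explicit, which is the main reason the dispose pattern is suggested as the alternative to finalizers in the paragraph above.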