| source | text |
|---|---|
https://en.wikipedia.org/wiki/Malonyl-CoA | Malonyl-CoA is a coenzyme A derivative of malonic acid.
Functions
It plays a key role in chain elongation in fatty acid biosynthesis and polyketide biosynthesis.
Cytosolic fatty acid biosynthesis
Malonyl-CoA provides 2-carbon units to fatty acids and commits them to fatty acid chain synthesis.
Malonyl-CoA is formed by carboxylating acetyl-CoA using the enzyme acetyl-CoA carboxylase: one molecule of acetyl-CoA joins with a molecule of bicarbonate, with the required energy supplied by ATP.
Malonyl-CoA is utilised in fatty acid biosynthesis by the enzyme malonyl coenzyme A:acyl carrier protein transacylase (MCAT). MCAT serves to transfer malonate from malonyl-CoA to the terminal thiol of holo-acyl carrier protein (ACP).
Mitochondrial fatty acid synthesis
Malonyl-CoA is formed in the first step of mitochondrial fatty acid synthesis (mtFASII) from malonic acid by malonyl-CoA synthetase (ACSF3).
Polyketide biosynthesis
MCAT is also involved in bacterial polyketide biosynthesis. The enzyme MCAT together with an acyl carrier protein (ACP), and a polyketide synthase (PKS) and chain-length factor heterodimer, constitutes the minimal PKS of type II polyketides.
Regulation
Malonyl-CoA is a highly regulated molecule in fatty acid synthesis, and it inhibits the rate-limiting step in beta-oxidation of fatty acids. Malonyl-CoA prevents fatty acids from associating with carnitine by inhibiting the enzyme carnitine acyltransferase, thereby preventing them from entering the mitochondria, where fatty acid oxidation and degradation occur.
Related diseases
Malonyl-CoA plays a special role in the mitochondrial clearance of toxic malonic acid in the metabolic disorder combined malonic and methylmalonic aciduria (CMAMMA). In CMAMMA due to ACSF3, the activity of malonyl-CoA synthetase, which can generate malonyl-CoA from malonic acid for subsequent conversion to acetyl-CoA by malonyl-CoA decarboxylase, is decreased. In contrast, in CMAMMA due to malonyl-CoA decarboxylase deficiency, malonyl-CoA |
https://en.wikipedia.org/wiki/Acyl%20carrier%20protein | The acyl carrier protein (ACP) is a cofactor of both fatty acid and polyketide biosynthesis machinery. It is one of the most abundant proteins in cells of E. coli. In both cases, the growing chain is bound to the ACP via a thioester derived from the distal thiol of a 4'-phosphopantetheine moiety.
Structure
The ACPs are small negatively charged α-helical bundle proteins with a high degree of structural and amino acid similarity. The structures of a number of acyl carrier proteins have been solved using various NMR and crystallography techniques. The ACPs are related in structure and mechanism to the peptidyl carrier proteins (PCP) from nonribosomal peptide synthases.
Biosynthesis
Subsequent to the expression of the inactive apo ACP, the 4'-phosphopantetheine moiety is attached to a serine residue. This coupling is mediated by acyl carrier protein synthase (ACPS), a 4'-phosphopantetheinyl transferase. 4'-Phosphopantetheine is a prosthetic group of several acyl carrier proteins including the acyl carrier proteins (ACP) of fatty acid synthases, ACPs of polyketide synthases, the peptidyl carrier proteins (PCP), as well as aryl carrier proteins (ArCP) of nonribosomal peptide synthetases (NRPS). |
https://en.wikipedia.org/wiki/FreeBSD%20jail | The jail mechanism is an implementation of FreeBSD's OS-level virtualisation that allows system administrators to partition a FreeBSD-derived computer system into several independent mini-systems called jails, all sharing the same kernel, with very little overhead. It is implemented through a system call, jail(2), as well as a userland utility, jail(8), plus, depending on the system, a number of other utilities. The functionality was committed into FreeBSD in 1999 by Poul-Henning Kamp after some period of production use by a hosting provider, and was first released with FreeBSD 4.0, thus being supported on a number of FreeBSD descendants, including DragonFly BSD, to this day.
History
The need for the FreeBSD jails came from a small shared-environment hosting provider's (R&D Associates, Inc.'s owner, Derrick T. Woolworth) desire to establish a clean, clear-cut separation between their own services and those of their customers, mainly for security and ease of administration (jail(8)). Instead of adding a new layer of fine-grained configuration options, the solution adopted by Poul-Henning Kamp was to compartmentalize the system – both its files and its resources – in such a way that only the right people are given access to the right compartments.
Jails were first introduced in FreeBSD version 4.0, which was released in March 2000. Most of the original functionality is supported on DragonFly, and several of the new features have been ported as well.
Goals
FreeBSD jails mainly aim at three goals:
Virtualization: Each jail is a virtual environment running on the host machine with its own files, processes, user and superuser accounts. From within a jailed process, the environment is almost indistinguishable from a real system.
Security: Each jail is sealed from the others, thus providing an additional level of security.
Ease of delegation: The limited scope of a jail allows system administrators to delegate several tasks which require superuser access without handing out |
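For illustration, a jail of the kind described above can be declared in jail.conf(5) and started with jail(8). This is a minimal sketch only; every name, path and address below is a placeholder example, not a value from this article:

```
www {
    path = "/usr/local/jails/www";
    host.hostname = "www.example.org";
    ip4.addr = "192.168.1.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With such a stanza in place, `jail -c www` would create and start the jail, and `jexec www /bin/sh` would open a shell inside it.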
https://en.wikipedia.org/wiki/Video%20Disk%20Control%20Protocol | Video Disk Control Protocol (VDCP) is a proprietary communications protocol primarily used in broadcast automation to control hard disk video servers for broadcast television. VDCP was originally developed by Louth Automation and is commonly called the Louth Protocol. It was developed at the time when Hewlett-Packard (whose video server business was eventually sold to Pinnacle Systems) and Tektronix were both bringing to market the first of the video file servers to be used in the broadcast industry. They contacted Louth Automation, which then designed the communications protocol, basing it on the Sony protocols of both the Sony LMS storage device and the Sony VTR. The principal work was carried out by Ken Louth at Louth Automation.
VDCP uses a tightly coupled master-slave methodology. The controlling device takes the initiative in communications between the controlling broadcast automation device and the controlled device (video disk). VDCP conforms to the Open Systems Interconnection (OSI) reference model.
VDCP is a serial communications protocol based on RS-422. It is derived from the Sony 9-Pin Protocol, an industry-standard protocol for control of professional broadcast VTRs that is used in online editing.
Full details of the protocol are available from Harris Broadcast, who acquired Louth in 2000.
External links
Harris Broadcast
NDCP Launch press release
Broadcast engineering
Television technology
Digital television
Television terminology |
https://en.wikipedia.org/wiki/Philosophical%20Magazine | The Philosophical Magazine is one of the oldest scientific journals published in English. It was established by Alexander Tilloch in 1798; in 1822 Richard Taylor became joint editor and it has been published continuously by Taylor & Francis ever since.
Early history
The name of the journal dates from a period when "natural philosophy" embraced all aspects of science. The very first paper published in the journal carried the title "Account of Mr Cartwright's Patent Steam Engine". Other articles in the first volume include "Methods of discovering whether Wine has been adulterated with any Metals prejudicial to Health" and "Description of the Apparatus used by Lavoisier to produce Water from its component Parts, Oxygen and Hydrogen".
19th century
Early in the nineteenth century, classic papers by Humphry Davy, Michael Faraday and James Prescott Joule appeared in the journal and in the 1860s James Clerk Maxwell contributed several long articles, culminating in a paper containing the deduction that light is an electromagnetic wave or, as he put it himself, "We can scarcely avoid the inference that light consists in transverse undulations of the same medium which is the cause of electric and magnetic phenomena". The famous experimental paper of Albert A. Michelson and Edward Morley was published in 1887 and this was followed ten years later by J. J. Thomson with his article "Cathode Rays" – essentially the discovery of the electron.
In 1814, the Philosophical Magazine merged with the Journal of Natural Philosophy, Chemistry, and the Arts, otherwise known as Nicholson's Journal (published by William Nicholson), to form The Philosophical Magazine and Journal. Further mergers in 1827 with the Annals of Philosophy, and in 1840 with The London and Edinburgh Philosophical Magazine and Journal of Science (named the Edinburgh Journal of Science until 1832) led to the retitling of the journal as The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. In 194 |
https://en.wikipedia.org/wiki/Channel%20I/O | In computing, channel I/O is a high-performance input/output (I/O) architecture that is implemented in various forms on a number of computer architectures, especially on mainframe computers. In the past, channels were generally implemented with custom devices, variously named channel, I/O processor, I/O controller, I/O synchronizer, or DMA controller.
Overview
Many I/O tasks can be complex and require logic to be applied to the data to convert formats and other similar duties. In these situations, the simplest solution is to ask the CPU to handle the logic, but because I/O devices are relatively slow, a CPU could waste time waiting for the data from the device. A workload dominated by such waiting is said to be 'I/O bound'.
Channel architecture avoids this problem by processing some or all of the I/O task without the aid of the CPU by offloading the work to dedicated logic. Channels are logically self-contained, with sufficient logic and working storage to handle I/O tasks. Some are powerful or flexible enough to be used as a computer on their own and can be construed as a form of coprocessor, for example, the 7909 Data Channel on an IBM 7090 or IBM 7094; however, most are not. On some systems the channels use memory or registers addressable by the central processor as their working storage, while on other systems it is present in the channel hardware. Typically, there are standard interfaces between channels and external peripheral devices, and multiple channels can operate concurrently.
A CPU typically designates a block of storage as, or sends, a relatively small channel program to the channel in order to handle I/O tasks, which the channel and controller can, in many cases, complete without further intervention from the CPU (exception: those channel programs which utilize 'program controlled interrupts', PCIs, to facilitate program loading, demand paging and other essential system tasks).
When I/O transfer is complete or an error is detected, the controller typically communicates wi |
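The division of labour described above can be caricatured in a few lines of Python. This is purely a conceptual sketch (all names are invented, and a real channel is dedicated hardware, not a thread): the "CPU" builds a small channel program, hands it off, and is free until completion is signalled.

```python
from queue import Queue
from threading import Thread

def channel(program, done):
    """Work through a 'channel program' independently of the caller,
    signalling completion at the end (the 'interrupt')."""
    buffer = []
    for op, arg in program:
        if op == "READ":
            buffer.append(f"data-from-{arg}")
        elif op == "WRITE":
            buffer.append(f"wrote-{arg}")
    done.put(buffer)  # completion notification back to the "CPU"

done = Queue()
program = [("READ", "disk0"), ("WRITE", "tape1")]   # the channel program
Thread(target=channel, args=(program, done)).start()
# ... the "CPU" is free to do other work here ...
result = done.get()  # woken only when the whole program has completed
assert result == ["data-from-disk0", "wrote-tape1"]
```

The point of the sketch is only the shape of the interaction: one hand-off, autonomous execution, one completion signal, rather than per-byte CPU involvement.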
https://en.wikipedia.org/wiki/Gershgorin%20circle%20theorem | In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn.
Statement and proof
Let A be a complex n × n matrix, with entries a_ij. For i ∈ {1, …, n}, let R_i be the sum of the absolute values of the non-diagonal entries in the i-th row:
R_i = Σ_{j ≠ i} |a_ij|.
Let D(a_ii, R_i) be a closed disc centered at a_ii with radius R_i. Such a disc is called a Gershgorin disc.
Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(a_ii, R_i).
Proof. Let λ be an eigenvalue of A with corresponding eigenvector x. Find i such that the element of x with the largest absolute value is x_i. Since Ax = λx, in particular we take the i-th component of that equation to get:
Σ_j a_ij x_j = λ x_i.
Taking a_ii x_i to the other side:
Σ_{j ≠ i} a_ij x_j = (λ − a_ii) x_i.
Therefore, applying the triangle inequality and recalling that |x_j| ≤ |x_i| based on how we picked i,
|λ − a_ii| = |Σ_{j ≠ i} a_ij x_j / x_i| ≤ Σ_{j ≠ i} |a_ij| = R_i.
Corollary. The eigenvalues of A must also lie within the Gershgorin discs Cj corresponding to the columns of A.
Proof. Apply the Theorem to the transpose Aᵀ while recognizing that the eigenvalues of the transpose are the same as those of the original matrix.
Example. For a diagonal matrix, the Gershgorin discs coincide with the spectrum. Conversely, if the Gershgorin discs coincide with the spectrum, the matrix is diagonal.
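As a quick numerical check of the theorem (the matrix below is an arbitrary example), the row discs can be computed with NumPy and compared against the eigenvalues:

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs for the rows of a square matrix A."""
    A = np.asarray(A, dtype=complex)
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return list(zip(np.diag(A), radii))

A = np.array([[10.0, 1.0, 0.0],
              [0.2, 3.0, 0.1],
              [1.0, 0.0, 0.0]])
discs = gershgorin_discs(A)            # [(10, 1), (3, 0.3), (0, 1)]
eigenvalues = np.linalg.eigvals(A)

# every eigenvalue lies in at least one disc
for lam in eigenvalues:
    assert any(abs(lam - c) <= r + 1e-9 for c, r in discs)
```

Here the discs around 10, 3 and 0 are disjoint, so each in fact isolates one eigenvalue.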
Discussion
One way to interpret this theorem is that if the off-diagonal entries of a square matrix over the complex numbers have small norms, the eigenvalues of the matrix cannot be "far from" the diagonal entries of the matrix. Therefore, by reducing the norms of off-diagonal entries one can attempt to approximate the eigenvalues of the matrix. Of course, diagonal entries may change in the process of minimizing off-diagonal entries.
The theorem does not claim that there is one disc for each eigenvalue; if anything, the discs rather correspond to the axes in Cⁿ, and each expresse |
https://en.wikipedia.org/wiki/Upsilon%20meson | The Upsilon meson (Υ) is a quarkonium state (i.e. flavourless meson) formed from a bottom quark and its antiparticle. It was discovered by the E288 experiment team, headed by Leon Lederman, at Fermilab in 1977, and was the first particle containing a bottom quark to be discovered because it is the lightest that can be produced without additional massive particles. It has a lifetime of about 1.2×10⁻²⁰ s and a mass of about 9.46 GeV/c² in the ground state.
See also
Oops-Leon, an erroneously-claimed discovery of a similar particle at a lower mass in 1976.
The φ meson is the analogous state made from strange quarks.
The J/ψ meson is the analogous state made from charm quarks.
List of mesons |
https://en.wikipedia.org/wiki/Timeline%20of%20particle%20discoveries | This is a timeline of subatomic particle discoveries, including all particles thus far discovered which appear to be elementary (that is, indivisible) given the best available evidence. It also includes the discovery of composite particles and antiparticles that were of particular historical importance.
More specifically, the inclusion criteria are:
Elementary particles from the Standard Model of particle physics that have so far been observed. The Standard Model is the most comprehensive existing model of particle behavior. All Standard Model particles including the Higgs boson have been verified, and all other observed particles are combinations of two or more Standard Model particles.
Antiparticles which were historically important to the development of particle physics, specifically the positron and antiproton. The discovery of these particles required very different experimental methods from that of their ordinary matter counterparts, and provided evidence that all particles had antiparticles—an idea that is fundamental to quantum field theory, the modern mathematical framework for particle physics. In the case of most subsequent particle discoveries, the particle and its anti-particle were discovered essentially simultaneously.
Composite particles which were the first particle discovered containing a particular elementary constituent, or whose discovery was critical to the understanding of particle physics.
See also
List of baryons
List of mesons
List of particles |
https://en.wikipedia.org/wiki/Oidium%20%28spore%29 | An oidium (plural: oidia) is an asexually produced fungal spore that (in contrast to conidia) is presumed not to constitute the main reproductive preoccupation of the fungus at that time.
The hypha breaks up into component cells or small pieces, which develop into spores. Oidia cannot survive in unfavourable conditions. |
https://en.wikipedia.org/wiki/Independence%20system | In combinatorial mathematics, an independence system is a pair (V, 𝓘), where V is a finite set and 𝓘 is a collection of subsets of V (called the independent sets or feasible sets) with the following properties:
The empty set is independent, i.e., ∅ ∈ 𝓘. (Alternatively, at least one subset of V is independent, i.e., 𝓘 ≠ ∅.)
Every subset of an independent set is independent, i.e., for each Y ∈ 𝓘 and X ⊆ Y, we have X ∈ 𝓘. This is sometimes called the hereditary property, or downward-closedness.
Another term for an independence system is an abstract simplicial complex.
Relation to other concepts
A pair (V, E), where V is a finite set and E is a collection of subsets of V, is also called a hypergraph. When using this terminology, the elements in the set V are called vertices and elements in the family E are called hyperedges. So an independence system can be defined briefly as a downward-closed hypergraph.
An independence system with an additional property called the augmentation property or the independent set exchange property yields a matroid. The following expression summarizes the relations between the terms: HYPERGRAPHS ⊋ INDEPENDENCE-SYSTEMS = ABSTRACT-SIMPLICIAL-COMPLEXES ⊋ MATROIDS. |
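The two axioms above can be checked by brute force for small set systems; a small illustrative sketch (the function name is mine):

```python
from itertools import combinations

def is_independence_system(F):
    """Check that the family F contains the empty set and is
    downward-closed (hereditary)."""
    F = {frozenset(s) for s in F}
    if frozenset() not in F:
        return False
    for s in F:
        # every proper subset of an independent set must be independent
        for k in range(len(s)):
            for t in combinations(s, k):
                if frozenset(t) not in F:
                    return False
    return True

F_good = [set(), {1}, {2}, {1, 2}]
F_bad = [set(), {1, 2}]            # not hereditary: missing {1} and {2}
assert is_independence_system(F_good)
assert not is_independence_system(F_bad)
```

Testing for the matroid augmentation property would require an extra check on pairs of independent sets of different sizes; the hereditary check alone only certifies an abstract simplicial complex.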
https://en.wikipedia.org/wiki/Veritas%20Cluster%20Server | Veritas Cluster Server (rebranded as Veritas Infoscale Availability and also known as VCS and also sold bundled in the SFHA product) is a high-availability cluster software for Unix, Linux and Microsoft Windows computer systems, created by Veritas Technologies. It provides application cluster capabilities to systems running other applications, including databases, network file sharing, and electronic commerce websites.
Description
High-availability clusters (HAC) improve application availability by failing applications over or switching them over within a group of systems, as opposed to high-performance clusters, which improve application performance by running them on multiple systems simultaneously.
Most Veritas Cluster Server implementations attempt to build availability into a cluster, eliminating single points of failure by making use of redundant components like multiple network cards, storage area networks in addition to the use of VCS.
Similar products include Fujitsu PRIMECLUSTER, IBM PowerHA System Mirror, HP ServiceGuard, IBM Tivoli System Automation for Multiplatforms (SA MP), Linux-HA, OpenSAF, Microsoft Cluster Server (MSCS), NEC ExpressCluster, Red Hat Cluster Suite, SteelEye LifeKeeper and Sun Cluster. VCS is one of the few products in the industry that provides both high availability and disaster recovery across all major operating systems while supporting 40+ major application/replication technologies out of the box.
VCS is mostly a user-level clustering software; most of its processes are normal system processes on the systems it operates on, and have no special access to the operating system or kernel functions in the host systems. However, the interconnect (heartbeat) technology used with VCS is a proprietary Layer 2 ethernet-based protocol that is run in the kernel space using kernel modules. The group membership protocol that runs on top of the interconnect heartbeat protocol is also implemented in the kernel. In case of a split brain, the 'fencing' module doe |
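As a purely illustrative sketch of how VCS is configured (every name, attribute value and resource choice below is an invented example, and attribute sets vary by agent and version), a service group with its resources and dependencies is declared in the cluster configuration file main.cf:

```
include "types.cf"

cluster demo_cluster ( )

system node1 ( )
system node2 ( )

group websg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1 }
    )

    IP webip (
        Device = eth0
        Address = "192.168.1.10"
        NetMask = "255.255.255.0"
        )

    Apache webserver (
        httpdDir = "/usr/sbin"
        ConfigFile = "/etc/httpd/conf/httpd.conf"
        )

    webserver requires webip
```

The `requires` line expresses a resource dependency: on failover, VCS brings the IP address online before the web server, and offline in the reverse order.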
https://en.wikipedia.org/wiki/Security%20hacker | A security hacker is someone who explores methods for breaching defenses and exploiting weaknesses in a computer system or network. Hackers may be motivated by a multitude of reasons, such as profit, protest, information gathering, challenge, recreation, or evaluation of a system's weaknesses to assist in formulating defenses against potential hackers.
Longstanding controversy surrounds the meaning of the term "hacker." In this controversy, computer programmers reclaim the term hacker, arguing that it refers simply to someone with an advanced understanding of computers and computer networks, and that cracker is the more appropriate term for those who break into computers, whether computer criminals (black hats) or computer security experts (white hats). A 2014 article noted that "the black-hat meaning still prevails among the general public". The subculture that has evolved around hackers is often referred to as the "computer underground".
History
Birth of subculture and entering mainstream: 1960s-1980s
The subculture around such hackers is termed network hacker subculture, hacker scene, or computer underground. It initially developed in the context of phreaking during the 1960s and the microcomputer BBS scene of the 1980s. It is implicated with 2600: The Hacker Quarterly and the alt.2600 newsgroup.
In 1980, an article in the August issue of Psychology Today (with commentary by Philip Zimbardo) used the term "hacker" in its title: "The Hacker Papers." It was an excerpt from a Stanford Bulletin Board discussion on the addictive nature of computer use. In the 1982 film Tron, Kevin Flynn (Jeff Bridges) describes his intentions to break into ENCOM's computer system, saying "I've been doing a little hacking here." CLU is the software he uses for this. By 1983, hacking in the sense of breaking computer security had already been in use as computer jargon, but there was no public awareness about such activities. However, the release of the film WarGames that year, featuri |
https://en.wikipedia.org/wiki/Single-particle%20spectrum | The single-particle spectrum is a distribution of a physical quantity such as energy or momentum. The study of particle spectra allows us to see the global picture of particle production.
For example, in a high-energy collision experiment, the single-particle spectrum records how the produced particles are distributed in a quantity such as energy or momentum.
Physical quantities |
https://en.wikipedia.org/wiki/Solaris%20Cluster | Oracle Solaris Cluster (sometimes Sun Cluster or SunCluster) is a high-availability cluster software product for Solaris, originally created by Sun Microsystems, which was acquired by Oracle Corporation in 2010. It is used to improve the availability of software services such as databases, file sharing on a network, electronic commerce websites, or other applications. Sun Cluster operates by having redundant computers or nodes where one or more computers continue to provide service if another fails. Nodes may be located in the same data center or on different continents.
Background
Solaris Cluster provides services that remain available even when individual nodes or components of the cluster fail. Solaris Cluster provides two types of HA services: failover services and scalable services.
To eliminate single points of failure, a Solaris Cluster configuration has redundant components, including multiple network connections and data storage which is multiply connected via a storage area network. Clustering software such as Solaris Cluster is a key component in a Business Continuity solution, and the Solaris Cluster Geographic Edition was created specifically to address that requirement.
Solaris Cluster is an example of kernel-level clustering software. Some of the processes it runs are normal system processes on the systems it operates on, but it does have some special access to operating system or kernel functions in the host systems.
In June 2007, Sun released the source code to Solaris Cluster via the OpenSolaris HA Clusters community.
Solaris Cluster Geographic Edition
SCGE is a management framework that was introduced in August 2005. It enables two Solaris Cluster installations to be managed as a unit, in conjunction with one or more Data replication products, to provide Disaster Recovery for a computer installation. By ensuring that data updates are continuously replicated to a remote site in near-real time, that site can rapidly take over the provision o |
https://en.wikipedia.org/wiki/User%20interface%20management%20system | A User Interface Management System (UIMS) is a mechanism for cleanly separating process or business logic from Graphical user interface (GUI) code in a computer program. UIMS are designed to support N-tier architectures by strictly defining and enforcing the boundary between the business logic and the GUI. A fairly rigid Software architecture is nearly always implied by the UIMS, and most often only one paradigm of separation is supported in a single UIMS. A UIMS may also have libraries and systems such as graphical tools for the creation of user interface resources or data stores.
Generally, you cannot easily use multiple UIMS systems at the same time, so choosing the correct model for your UIMS is a critical design decision in any project. The choice of system depends upon the system(s) you wish to create user interfaces for and the general style of your application. For example, whether you want to create a web-based front end, a standalone application, or both would be an important factor in choosing. If you want to deploy to Macintosh, Windows, and Linux, that would further influence your choice of a UIMS system.
There are many UIMS approaches described in research papers. However, there are not very many systems available commercially or through open source.
Models
In a frequently cited body of work, Foley and Wallace describe a "linguistic model" for user interface management consisting of a Presentation Layer, a Dialog Control layer and an Application layer. These layers correspond to the lexical, syntactic and semantic layers from formal language theory. While Foley's model is theoretically enlightening, it does not propose a specific practical system for separating code. There are also many interesting border cases that don't fall cleanly into one of these layers.
A more directly applicable theory of user interface management is the model–view–controller design pattern, which is described in detail in its own article. A recent variant |
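A minimal sketch of the model–view–controller separation mentioned above (all class names are invented for illustration): the model owns the business logic, the view only renders, and the controller mediates between them, so no layer reaches into another.

```python
class CounterModel:
    """Business logic layer: holds and mutates application state."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterView:
    """Presentation layer: renders state, holds no logic of its own."""
    def render(self, value):
        return f"Count: {value}"

class CounterController:
    """Dialog control layer: translates user events into model updates."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_click(self):
        self.model.increment()      # logic stays in the model
        return self.view.render(self.model.value)

app = CounterController(CounterModel(), CounterView())
assert app.on_click() == "Count: 1"
assert app.on_click() == "Count: 2"
```

Because the boundary is explicit, the view could be swapped (say, for a web renderer) without touching the model, which is exactly the separation a UIMS is meant to enforce.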
https://en.wikipedia.org/wiki/Constraint%20counting | In mathematics, constraint counting is counting the number of constraints in order to compare it with the number of variables, parameters, etc. that are free to be determined, the idea being that in most cases the number of independent choices that can be made is the excess of the latter over the former.
For example, in linear algebra if the number of constraints (independent equations) in a system of linear equations equals the number of unknowns then precisely one solution exists; if there are fewer independent equations than unknowns, an infinite number of solutions exist; and if the number of independent equations exceeds the number of unknowns, then no solutions exist.
In the context of partial differential equations, constraint counting is a crude but often useful way of counting the number of free functions needed to specify a solution to a partial differential equation.
Partial differential equations
Consider a second order partial differential equation in three variables, such as the two-dimensional wave equation
u_tt = u_xx + u_yy.
It is often profitable to think of such an equation as a rewrite rule allowing us to rewrite arbitrary partial derivatives of the function using fewer partials than would be needed for an arbitrary function. For example, if u satisfies the wave equation, we can rewrite
u_txt = (u_tt)_x = (u_xx + u_yy)_x = u_xxx + u_xyy
where in the first equality, we appealed to the fact that partial derivatives commute.
Linear equations
To answer this in the important special case of a linear partial differential equation, Einstein asked: how many of the partial derivatives of a solution can be linearly independent? It is convenient to record his answer using an ordinary generating function
f(ξ) = Σ_{k≥0} s_k ξ^k
where s_k is a natural number counting the number of linearly independent partial derivatives (of order k) of an arbitrary function in the solution space of the equation in question.
Whenever a function satisfies some partial differential equation, we can use the corresponding rewrite rule to eliminate some of them, b |
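Under the rewrite rule for the two-dimensional wave equation, only derivatives with at most one differentiation in t survive, so one expects s_k = (k+1) + k = 2k+1 independent partials of order k for a solution, versus (k+1)(k+2)/2 for an arbitrary function of three variables. A small sketch of this count (the helper names are mine):

```python
from math import comb

def partials_free(k, n=3):
    """Number of order-k partial derivatives of an arbitrary function
    of n variables (multi-indices of degree exactly k)."""
    return comb(k + n - 1, n - 1)

def partials_wave(k):
    """Order-k partials remaining after the rewrite u_tt -> u_xx + u_yy:
    (k+1) derivatives with no t plus k derivatives with exactly one t."""
    return 2 * k + 1

assert partials_free(2) == 6   # u_tt, u_tx, u_ty, u_xx, u_xy, u_yy
assert partials_wave(2) == 5   # u_tt has been rewritten away
```

The deficit partials_free(k) − partials_wave(k) measures how strongly the equation constrains its solutions at each order.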
https://en.wikipedia.org/wiki/American%20Society%20of%20Mammalogists | The American Society of Mammalogists (ASM) was founded in 1919. Its primary purpose is to encourage the study of mammals and of the professions studying them. There are over 4,500 members of this society, and they are primarily professional scientists who emphasize the importance of public policy and education. There are several ASM meetings held each year and the society manages several publications such as the Journal of Mammalogy, Special Publications, Mammalian Species, and Society Pamphlets. The best known of these is the Journal of Mammalogy. The ASM also maintains The Mammal Image Library, which contains more than 1300 mammal slides. A president, vice president, recording secretary, secretary-treasurer, and journal editor are all elected by the members to be officers of the society. In addition, ASM is composed of thirty-one committees, including the Animal Care and Use Committee, the Conservation Awards Committee, the International Relations Committee, and the Publications Committee. It also provides numerous grants and awards for research and studies on mammals. These awards can go to both scientists and students. The ASM also lists employment opportunities for their members.
External links
American Society of Mammalogists Website
Journal of Mammalogy website
Mammalian Species website
Professional associations based in the United States
Natural Science Collections Alliance members
Mammalogy
Biology societies
1919 establishments in the United States |
https://en.wikipedia.org/wiki/Direction%20cosine | In analytic geometry, the direction cosines (or directional cosines) of a vector are the cosines of the angles between the vector and the three positive coordinate axes. Equivalently, they are the contributions of each component of the basis to a unit vector in that direction.
Three-dimensional Cartesian coordinates
If v is a Euclidean vector in three-dimensional Euclidean space, R3,
v = a ex + b ey + c ez,
where ex, ey, ez are the standard basis in Cartesian notation, then the direction cosines are
cos α = a / |v|,  cos β = b / |v|,  cos γ = c / |v|.
It follows that by squaring each equation and adding the results
cos²α + cos²β + cos²γ = 1.
Here cos α, cos β and cos γ are the direction cosines and the Cartesian coordinates of the unit vector v/|v|, and α, β and γ are the direction angles of the vector v.
The direction angles α, β and γ are acute or obtuse angles, i.e., 0 ≤ α ≤ π, 0 ≤ β ≤ π and 0 ≤ γ ≤ π, and they denote the angles formed between v and the unit basis vectors ex, ey and ez.
General meaning
More generally, direction cosine refers to the cosine of the angle between any two vectors. They are useful for forming direction cosine matrices that express one set of orthonormal basis vectors in terms of another set, or for expressing a known vector in a different basis.
See also
Cartesian tensor |
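The formulas above are easy to check numerically; a short sketch (the function name is mine):

```python
import math

def direction_cosines(a, b, c):
    """Direction cosines (cos α, cos β, cos γ) of v = a ex + b ey + c ez."""
    norm = math.sqrt(a * a + b * b + c * c)
    return a / norm, b / norm, c / norm

ca, cb, cg = direction_cosines(1.0, 2.0, 2.0)   # |v| = 3
assert abs(ca - 1/3) < 1e-12 and abs(cb - 2/3) < 1e-12 and abs(cg - 2/3) < 1e-12
# squaring and adding recovers cos²α + cos²β + cos²γ = 1
assert abs(ca**2 + cb**2 + cg**2 - 1.0) < 1e-12
```

The triple returned is exactly the unit vector v/|v|, which is why the squares always sum to one.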
https://en.wikipedia.org/wiki/Signed%20graph | In the area of graph theory in mathematics, a signed graph is a graph in which each edge has a positive or negative sign.
A signed graph is balanced if the product of edge signs around every cycle is positive. The name "signed graph" and the notion of balance appeared first in a mathematical paper of Frank Harary in 1953. Dénes Kőnig had already studied equivalent notions in 1936 under a different terminology but without recognizing the relevance of the sign group.
At the Center for Group Dynamics at the University of Michigan, Dorwin Cartwright and Harary generalized Fritz Heider's psychological theory of balance in triangles of sentiments to a psychological theory of balance in signed graphs.
Signed graphs have been rediscovered many times because they come up naturally in many unrelated areas. For instance, they enable one to describe and analyze the geometry of subsets of the classical root systems. They appear in topological graph theory and group theory. They are a natural context for questions about odd and even cycles in graphs. They appear in computing the ground state energy in the non-ferromagnetic Ising model; for this one needs to find a largest balanced edge set in Σ. They have been applied to data classification in correlation clustering.
Fundamental theorem
The sign of a path is the product of the signs of its edges. Thus a path is positive if and only if there are an even number of negative edges in it (where zero is even). In the mathematical balance theory of Frank Harary, a signed graph is balanced when every cycle is positive. Harary proves that a signed graph is balanced when (1) for every pair of nodes, all paths between them have the same sign, or (2) the vertices partition into a pair of subsets (possibly empty), each containing only positive edges, but connected by negative edges. It generalizes the theorem that an ordinary (unsigned) graph is bipartite if and only if every cycle has even length.
A simple proof uses the method of switching. |
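Harary's two characterizations suggest a direct algorithmic test, closely related to 2-coloring a graph for bipartiteness. The following sketch (our own illustration, not from the article; all names are invented) checks balance by trying to build the two-subset partition of condition (2):

```python
from collections import deque

def is_balanced(num_vertices, edges):
    """Test Harary balance: edges are (u, v, sign) with sign = +1 or -1.

    A signed graph is balanced iff the vertices split into two camps such
    that positive edges stay inside a camp and negative edges cross
    between camps (Harary's structure theorem).
    """
    adj = [[] for _ in range(num_vertices)]
    for u, v, s in edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    side = [0] * num_vertices          # 0 = unvisited, else +1 / -1
    for start in range(num_vertices):
        if side[start]:
            continue
        side[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = side[u] * s     # the edge sign forces v's camp
                if side[v] == 0:
                    side[v] = want
                    queue.append(v)
                elif side[v] != want:
                    return False       # found a cycle with negative product
    return True

# A triangle with one negative edge is unbalanced; with two it is balanced.
print(is_balanced(3, [(0, 1, 1), (1, 2, 1), (2, 0, -1)]))   # False
print(is_balanced(3, [(0, 1, -1), (1, 2, -1), (2, 0, 1)]))  # True
```

With all edges negative, the test reduces to the ordinary bipartiteness check, matching the theorem above.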
https://en.wikipedia.org/wiki/Colored%20matroid | In mathematics, a colored matroid is a matroid whose elements are labeled from a set of colors, which can be any set that suits the purpose, for instance the set of the first n positive integers, or the sign set {+, −}.
The interest in colored matroids is through their invariants, especially the colored Tutte polynomial, which generalizes the Tutte polynomial of a signed graph.
There has also been study of optimization problems on matroids where the objective function of the optimization depends on the set of colors chosen as part of a matroid basis.
See also
Bipartite matroid
Rota's basis conjecture |
https://en.wikipedia.org/wiki/Lead%20sheet | A lead sheet or fake sheet is a form of musical notation that specifies the essential elements of a popular song: the melody, lyrics and harmony. The melody is written in modern Western music notation, the lyric is written as text below the staff and the harmony is specified with chord symbols above the staff.
The lead sheet does not describe the chord voicings, voice leading, bass line or other aspects of the accompaniment. These are specified later by an arranger or improvised by the performers, and are considered aspects of the arrangement or performance of a song, rather than a part of the song itself. "Lead" refers to a song's lead part, the most important melody line or voice.
A lead sheet may also specify an instrumental part or theme, if this is considered essential to the song's identity. For example, the opening guitar riff from Deep Purple's "Smoke on the Water" is a part of the song; any performance of the song should include the guitar riff, and any imitation of that guitar riff is an imitation of the song. Thus the riff belongs on the lead sheet.
A collected volume of lead sheets may be known as a fake book, due to the improvisational nature of its use: when presented with a lead sheet, proficient musicians may be able to "fake it" by performing the song adequately without a full score. This is in contrast to a full score, in which every note to be played in a piece is written out. Since fake books and lead sheets only give a rough outline of the melody and harmony, the performer or arranger is expected to improvise significantly.
Use in performance
A lead sheet is often the only form of written music used by a small jazz ensemble. One or more musicians will play the melody while the rest of the group improvises an appropriate accompaniment based on the chord progression given in the chord symbols, followed by an improvised solo also based on the chord progression. Similarly, a sufficiently skilled harmony player (e.g. a jazz pianist or a jazz |
https://en.wikipedia.org/wiki/Biased%20graph | In mathematics, a biased graph is a graph with a list of distinguished circles (edge sets of simple cycles), such that if two circles in the list are contained in a theta graph, then the third circle of the theta graph is also in the list. A biased graph is a generalization of the combinatorial essentials of a gain graph and in particular of a signed graph.
Formally, a biased graph Ω is a pair (G, B) where B is a linear class of circles; this by definition is a class of circles that satisfies the theta-graph property mentioned above.
A subgraph or edge set whose circles are all in B (and which contains no half-edges) is called balanced. For instance, a circle belonging to B is balanced and one that does not belong to B is unbalanced.
Biased graphs are interesting mostly because of their matroids, but also because of their connection with multiary quasigroups. See below.
Technical notes
A biased graph may have half-edges (one endpoint) and loose edges (no endpoints). The edges with two endpoints are of two kinds: a link has two distinct endpoints, while a loop has two coinciding endpoints.
Linear classes of circles are a special case of linear subclasses of circuits in a matroid.
Examples
If every circle belongs to B, and there are no half-edges, Ω is balanced. A balanced biased graph is (for most purposes) essentially the same as an ordinary graph.
If B is empty, Ω is called contrabalanced. Contrabalanced biased graphs are related to bicircular matroids.
If B consists of the circles of even length, Ω is called antibalanced and is the biased graph obtained from an all-negative signed graph.
The linear class B is additive, that is, closed under repeated symmetric difference (when the result is a circle), if and only if B is the class of positive circles of a signed graph.
Ω may have underlying graph that is a cycle of length n ≥ 3 with all edges doubled. Call this a biased 2Cn . Such biased graphs in which no digon (circle of length 2) is balanced |
https://en.wikipedia.org/wiki/Theta%20graph | In computational geometry, the Theta graph, or Θ-graph, is a type of geometric spanner similar to a Yao graph. The basic method of construction involves partitioning the space around each vertex into a set of cones, which themselves partition the remaining vertices of the graph. Like Yao graphs, a Θ-graph contains at most one edge per cone; where they differ is how that edge is selected. Whereas Yao graphs select the nearest vertex according to the metric space of the graph, the Θ-graph defines a fixed ray contained within each cone (conventionally the bisector of the cone) and selects the nearest neighbor with respect to orthogonal projection onto that ray. The resulting graph exhibits several good spanner properties.
Θ-graphs were first described by Clarkson in 1987 and independently by Keil in 1988.
Construction
Θ-graphs are specified with a few parameters which determine their construction. The most obvious parameter is k, which corresponds to the number of equal-angle cones that partition the space around each vertex. In particular, for a vertex p, a cone about p can be imagined as two infinite rays emanating from it with angle 2π/k between them. With respect to p, we can label these cones as C1 through Ck in a counterclockwise pattern, where C1 conventionally opens so that its bisector has angle 0 with respect to the plane. As these cones partition the plane, they also partition the remaining vertex set of the graph (assuming general position) into the sets V1 through Vk, again with respect to p. Every vertex in the graph gets the same number of cones in the same orientation, and we can consider the set of vertices that fall into each.
Considering a single cone Ci, we need to specify another ray emanating from p, which we will label Ri. For every vertex in Vi, we consider the orthogonal projection of that vertex onto Ri. Suppose that v is the vertex with the closest such projection; then the edge (p, v) is added to the graph. This is the primary difference from Yao graphs, which always select the vertex nearest to p by the metric distance. |
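The construction described above can be sketched in a few lines. The code below is an illustrative implementation under simplifying assumptions (the first cone is taken to start at angle 0 rather than being centered on it, the projection ray is the cone bisector, and all names are our own, not from the text):

```python
import math

def theta_graph(points, k):
    """Sketch of planar Theta-graph construction with k cones per vertex.

    For each vertex u, the plane is split into k equal cones; in each
    nonempty cone, the neighbor whose orthogonal projection onto the
    cone's bisector ray is closest to u receives an edge.
    """
    width = 2 * math.pi / k
    edges = set()
    for i, (ux, uy) in enumerate(points):
        best = {}  # cone index -> (projection length, neighbor index)
        for j, (vx, vy) in enumerate(points):
            if i == j:
                continue
            dx, dy = vx - ux, vy - uy
            ang = math.atan2(dy, dx) % (2 * math.pi)
            cone = int(ang / width) % k
            bis = (cone + 0.5) * width            # bisector angle of the cone
            proj = dx * math.cos(bis) + dy * math.sin(bis)
            if cone not in best or proj < best[cone][0]:
                best[cone] = (proj, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))      # undirected edge (u, v)
    return edges

# Three collinear points: each vertex links only to its nearest cone-mate,
# so the far edge (0, 2) is never created.
E = theta_graph([(0, 0), (1, 0), (2, 0)], k=4)
print(sorted(E))  # [(0, 1), (1, 2)]
```

Since at most one edge per cone is kept for each vertex, the output has at most k·n edges, matching the spanner-size bound of the construction.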
https://en.wikipedia.org/wiki/Vapour%20phase%20decomposition | Vapour phase decomposition (VPD) is a method used in the semiconductor industry to improve the sensitivity of total-reflection x-ray fluorescence spectroscopy by changing the contaminant from a thin layer (which has an angle-dependent fluorescence intensity in the TXRF-domain) to a granular residue. When using granular residue the limits of detection are improved because of a more intense fluorescence signal in angles smaller than the isokinetic angle.
Method
The improvement is achieved by enhancing the impurity concentration in the solution to be analyzed. In standard atomic absorption spectroscopy (AAS), the impurity is dissolved together with the matrix element. In VPD, the surface of the wafer is exposed to hydrofluoric acid vapour, which causes the surface oxide to dissolve together with the impurity metals. The acid droplets, condensed on the surface, are then analyzed using AAS.
Advantages
The method has yielded good results for the detection and measurement of nickel and iron. To improve the range of elemental impurities and lower detection limits, the acid droplets obtained from the silicon wafers are analyzed by ICP-MS (inductively coupled plasma mass spectrometry). This technique, VPD-ICP-MS, provides accurate measurement of up to 60 elements and detection limits in the range of 1E6 to 1E10 atoms/cm² on the silicon wafer.
Related Techniques
One related technique is VPD-DC (vapour phase decomposition-droplet collection), where the wafer is scanned with a droplet that collects the metal ions that were dissolved in the decomposition step. This procedure affords better limits of detection when applying AAS in order to detect metal impurities of very small concentrations on wafer surfaces. |
https://en.wikipedia.org/wiki/Frisch%E2%80%93Waugh%E2%80%93Lovell%20theorem | In econometrics, the Frisch–Waugh–Lovell (FWL) theorem is named after the econometricians Ragnar Frisch, Frederick V. Waugh, and Michael C. Lovell.
The Frisch–Waugh–Lovell theorem states that if the regression we are concerned with is expressed in terms of two separate sets of predictor variables:
y = X1 β1 + X2 β2 + u,
where X1 and X2 are matrices, β1 and β2 are vectors (and u is the error term), then the estimate of β2 will be the same as the estimate of it from a modified regression of the form:
MX1 y = MX1 X2 β2 + MX1 u,
where MX1 projects onto the orthogonal complement of the image of the projection matrix X1(X1'X1)^(-1)X1'. Equivalently, MX1 projects onto the orthogonal complement of the column space of X1. Specifically,
MX1 = I − X1(X1'X1)^(-1)X1',
and this particular orthogonal projection matrix is known as the residual maker matrix or annihilator matrix.
The vector MX1 y is the vector of residuals from the regression of y on the columns of X1.
The most relevant consequence of the theorem is that the parameters of the modified regression do not apply to X2 but to MX1 X2, that is: the part of X2 uncorrelated with X1. This is the basis for understanding the contribution of each single variable to a multivariate regression.
The theorem also implies that the secondary regression used for obtaining the residualized variables is unnecessary when the predictor variables are uncorrelated: using projection matrices to make the explanatory variables orthogonal to each other will lead to the same results as running the regression with all non-orthogonal explanators included.
Moreover, the standard errors from the partial regression equal those from the full regression.
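The theorem is easy to verify numerically. The sketch below (our own illustration with hypothetical data) compares the estimate of β2 from the full two-regressor OLS fit against the slope obtained by first residualizing both y and X2 on X1, i.e. applying the residual maker to each:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def resid(y, x):
    """Residuals of y after projecting onto the single column x:
    this computes M_x y with M_x = I - x (x'x)^(-1) x'."""
    c = dot(x, y) / dot(x, x)
    return [yi - c * xi for yi, xi in zip(y, x)]

def ols2(y, x1, x2):
    """Solve the 2-regressor normal equations by Cramer's rule."""
    a, b, c = dot(x1, x1), dot(x1, x2), dot(x2, x2)
    d, e = dot(x1, y), dot(x2, y)
    det = a * c - b * b
    return ((c * d - b * e) / det, (a * e - b * d) / det)

# Hypothetical data: y regressed on two correlated regressors.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.0, 0.0, 2.0, 1.0, 3.0]
y  = [2.0, 3.0, 7.0, 6.0, 11.0]

_, beta2_full = ols2(y, x1, x2)                 # full regression
ey, ex2 = resid(y, x1), resid(x2, x1)           # residualize on x1
beta2_partial = dot(ex2, ey) / dot(ex2, ex2)    # partial regression slope
print(abs(beta2_full - beta2_partial) < 1e-12)  # the two estimates agree
```

The agreement holds for any data; the choice of numbers above is arbitrary.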
History
The origin of the theorem is uncertain, but it was well-established in the realm of linear regression before the Frisch and Waugh paper. George Udny Yule's comprehensive analysis of partial regressions, published in 1907, included the theorem in section 9 on page 184. Yule emphasized the theorem's importance for understanding multiple and partial regression and correlation coefficients, as mentioned in section 10 of the same paper.
|
https://en.wikipedia.org/wiki/Sterile%20neutrino | Sterile neutrinos (or inert neutrinos) are hypothetical particles (neutral leptons – neutrinos) that are believed to interact only via gravity and not via any of the other fundamental interactions of the Standard Model. The term sterile neutrino is used to distinguish them from the known, ordinary active neutrinos in the Standard Model, which carry a weak isospin charge of +1/2 and engage in the weak interaction. The term typically refers to neutrinos with right-handed chirality (see right-handed neutrino), which may be inserted into the Standard Model. Particles that possess the quantum numbers of sterile neutrinos and masses great enough such that they do not interfere with the current theory of Big Bang nucleosynthesis are often called neutral heavy leptons (NHLs) or heavy neutral leptons (HNLs).
The existence of right-handed neutrinos is theoretically well-motivated, because the known active neutrinos are left-handed and all other known fermions have been observed with both left and right chirality. They could also explain in a natural way the small active neutrino masses inferred from neutrino oscillation. The mass of the right-handed neutrinos themselves is unknown and could have any value from below 1 eV up to very high scales. To comply with theories of leptogenesis and dark matter, there must be at least 3 flavors of sterile neutrinos (if they exist). This is in contrast to the number of active neutrino types, which must be exactly 3 to ensure the electroweak interaction is free of anomalies: the number of charged leptons and quark generations.
The search for sterile neutrinos is an active area of particle physics. If they exist and their mass is smaller than the energies of particles in the experiment, they can be produced in the laboratory, either by mixing between active and sterile neutrinos or in high energy particle collisions. If they are heavier, the only directly observable consequence of their existence would be the observed active neutrino masses. They |
https://en.wikipedia.org/wiki/HD%20ready | HD ready is a certification program introduced in 2005 by EICTA (European Information, Communications and Consumer Electronics Technology Industry Associations), now DIGITALEUROPE. HD ready minimum native resolution is 720 rows in widescreen ratio.
There are currently four different labels: "HD ready", "HD TV", "HD ready 1080p", "HD TV 1080p". The logos are assigned to television equipment capable of certain features.
In the United States, a similar "HD Ready" term usually refers to any display that is capable of accepting and displaying a high-definition signal at either 720p, 1080i or 1080p using a component video or digital input, but does not have a built-in HD-capable tuner.
History
The "HD ready" certification program was introduced on January 19, 2005. The labels and relevant specifications are based on agreements between over 60 broadcasters and manufacturers of the European HDTV Forum at its second session in June 2004, held at the Betzdorf, Luxembourg headquarters of founding member SES Astra.
The "HD ready" logo is used on television equipment capable of displaying High Definition (HD) pictures from an external source. However, it does not have to feature a digital tuner to decode an HD signal; devices with tuners were certified under a separate "HD TV" logo, which does not require a "HD ready" display device.
Before the introduction of the "HD ready" certification, many TV sources and displays were being promoted as capable of displaying high definition pictures when they were in fact SDTV devices; according to Alexander Oudendijk, senior VP of marketing for Astra, in early 2005 there were 74 different devices being sold as ready for HD that were not. Devices advertised as HD-compatible or HD ready could take an HDTV signal as an input (via analog YPbPr or digital DVI or HDMI), but they did not have enough pixels for true representation of even the lower HD resolution (1280 × 720) (plasma-based sets with 853 × 480 resolution, CRT based sets only capa |
https://en.wikipedia.org/wiki/Ball%20and%20beam | The ball and beam system consists of a long beam which can be tilted by a servo or electric motor, together with a ball rolling back and forth on top of the beam. It is a popular textbook example in control theory.
The significance of the ball and beam system is that it is a simple system which is open-loop unstable. Even if the beam is restricted to be very nearly horizontal, without active feedback it will swing to one side or the other, and the ball will roll off the end of the beam. To stabilize the ball, a control system which measures the position of the ball and adjusts the beam accordingly must be used.
In two dimensions, the ball and beam system becomes the ball and plate system, where a ball rolls on top of a plate whose inclination can be adjusted by tilting it forwards, backwards, leftwards, or rightwards.
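A minimal simulation illustrates the point: with a standard simplified linear model for a solid ball (acceleration proportional to beam angle), a position-and-velocity feedback on the beam angle brings the ball back to the center. The model, gains, and actuator limit below are illustrative choices of our own, not from the text:

```python
# Simplified, linearized ball-and-beam model (illustrative sketch):
# for a solid ball rolling without slipping, the linearized dynamics give
# ball acceleration a = -(5/7) * g * theta, where theta is the beam angle.
# A PD feedback law on the ball's position and velocity chooses theta.
g = 9.81

def simulate(x0, kp, kd, dt=0.001, steps=20000):
    """Euler-integrate the closed loop; returns the final ball position."""
    x, v = x0, 0.0
    for _ in range(steps):
        theta = kp * x + kd * v               # tilt the beam against the ball
        theta = max(-0.3, min(0.3, theta))    # illustrative actuator limit (rad)
        a = -(5.0 / 7.0) * g * theta
        v += a * dt
        x += v * dt
    return x

print(abs(simulate(0.2, kp=1.0, kd=0.8)) < 1e-3)  # True: the ball settles
```

The specific gains are arbitrary but stable; the essential ingredient is that both the ball's position and its velocity are fed back, exactly the measurement-and-adjustment loop described above.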
External links
Ball and beam with various control strategies
Ball and beam 1: basics
Ball and beam with comprehensive dynamics and movies
Control engineering |
https://en.wikipedia.org/wiki/Institut%20f%C3%BCr%20Rundfunktechnik | The Institut für Rundfunktechnik GmbH (IRT) (Institute for Broadcasting Technology Ltd.) was a research centre of German broadcasters (ARD / ZDF / DLR), Austria's broadcaster (ORF) and the Swiss public broadcaster (SRG / SSR). It was responsible for research on broadcasting technology. It was founded in 1956 and was located in Munich, Germany.
They invented or were influential in the research, development and field-testing of important standards such as ARI, RDS, VPS, DSR, DAB and DVB-T.
The IRT was a founding member of the Hybrid Broadcast Broadband TV (HbbTV) consortium of broadcasting and Internet industry companies that established an open European standard (called HbbTV) for hybrid set-top boxes for the reception of broadcast TV and broadband multimedia applications with a single user interface.
In 2020, ZDF and then other supporters indicated that they planned to withdraw from the organization, so the IRT was closed by the end of 2020.
Former members
Bayerischer Rundfunk
Deutsche Welle
Deutschlandradio
Hessischer Rundfunk
Mitteldeutscher Rundfunk
Norddeutscher Rundfunk
Radio Bremen
Rundfunk Berlin-Brandenburg
Saarländischer Rundfunk
SRG SSR
Südwestrundfunk
Westdeutscher Rundfunk Köln
ZDF
See also
BBC Research & Development
High Com FM (researched and field-trialed by IRT between 1979 and 1984)
Wittmoor List (maintained by IRT up to June 2018)
European Broadcasting Union (EBU)
International Telecommunication Union (ITU)
DVB Project
WorldDAB
Public broadcasting
Teletext
(FTZ)
(RFZ)
(IVZ) |
https://en.wikipedia.org/wiki/Phil%20Karn | Phil Karn (born October 4, 1956) is a retired American engineer from Lutherville, Maryland. He earned a bachelor's degree in electrical engineering from Cornell University in 1978 and a master's degree in electrical engineering from Carnegie Mellon University in 1979. From 1979 until 1984, Karn worked at Bell Labs in Naperville, Illinois, and Murray Hill, New Jersey. From 1984 until 1991, he was with Bell Communications Research in Morristown, New Jersey. From 1991 through to his retirement, he worked at Qualcomm in San Diego, where he specialized in wireless data networking protocols, security, and cryptography.
He is currently the President/CEO of Amateur Radio Digital Communications (ARDC), a non-profit foundation funded by the sale of part of its IP address space (44/8). ARDC manages the remaining portion of its address space by providing financial grants to amateur radio and related groups.
He has been an active contributor in the Internet Engineering Task Force, especially in security, and to the Internet architecture. He is the author or co-author of at least 6 RFCs, and is cited as contributing to many more. He is the inventor of Karn's Algorithm, a method for calculating the round trip time for IP packet retransmission. In 1991, Thomas Alexander Iannelli's Master's thesis judged Karn's KA9Q NOS software as more suitable for deployment than an Air Force Institute of Technology packet radio system. In 1990, Karn was one of the first to predict that the use of wired links for the Internet's "capillaries" would become "history" because most users would access it via wireless radio links.
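Karn's algorithm addresses retransmission ambiguity: an acknowledgment for a retransmitted segment cannot be attributed to a particular transmission, so such round-trip-time samples are discarded. A sketch combining that rule with the standard smoothed-RTT update of RFC 6298 (the class and variable names are our own):

```python
class RttEstimator:
    """Karn's rule plus the smoothed-RTT update of RFC 6298.

    Samples measured on retransmitted segments are discarded, because an
    ack for them is ambiguous: it may match either transmission.
    """
    def __init__(self):
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # round-trip time variation

    def on_ack(self, sample, was_retransmitted):
        if was_retransmitted:
            return  # Karn's rule: ambiguous sample, ignore it
        if self.srtt is None:
            self.srtt, self.rttvar = sample, sample / 2
        else:
            # RFC 6298 constants: beta = 1/4, alpha = 1/8
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - sample)
            self.srtt = 0.875 * self.srtt + 0.125 * sample

    def rto(self):
        """Retransmission timeout derived from the current estimate."""
        return self.srtt + 4 * self.rttvar

est = RttEstimator()
est.on_ack(0.100, False)
est.on_ack(0.500, True)   # retransmitted: has no effect on the estimate
print(est.srtt)  # 0.1
```

Karn's original proposal pairs this rule with exponential backoff of the timeout, so that the estimator still adapts when every sample is being discarded during loss.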
In 1994, Carl Malamud interviewed Karn on Internet Talk Radio for his "Geek of the Week" podcast. They talked about the KA9Q software, Qualcomm's CDMA radio technology for data transfer, the Globalstar low Earth orbit satellite radio system, Mobile IP, the Clipper chip, and encryption. In June 2014, Karn was also interviewed for the History of the Internet Project, in w |
https://en.wikipedia.org/wiki/Pcap | In the field of computer network administration, pcap is an application programming interface (API) for capturing network traffic. While the name is an abbreviation of packet capture, that is not the API's proper name. Unix-like systems implement pcap in the libpcap library; for Windows, there is a port of libpcap named WinPcap that is no longer supported or developed, and a port named Npcap for Windows 7 and later that is still supported.
Monitoring software may use libpcap, WinPcap, or Npcap to capture network packets traveling over a computer network and, in newer versions, to transmit packets on a network at the link layer, and to get a list of network interfaces for possible use with libpcap, WinPcap, or Npcap.
The pcap API is written in C, so other languages such as Java, .NET languages, and scripting languages generally use a wrapper; no such wrappers are provided by libpcap or WinPcap itself. C++ programs may link directly to the C API or make use of an object-oriented wrapper.
Features
libpcap, WinPcap, and Npcap provide the packet-capture and filtering engines of many open-source and commercial network tools, including protocol analyzers (packet sniffers), network monitors, network intrusion detection systems, traffic-generators and network-testers.
libpcap, WinPcap, and Npcap also support saving captured packets to a file, and reading files containing saved packets; applications can be written, using libpcap, WinPcap, or Npcap, to be able to capture network traffic and analyze it, or to read a saved capture and analyze it, using the same analysis code. A capture file saved in the format that libpcap, WinPcap, and Npcap use can be read by applications that understand that format, such as tcpdump, Wireshark, CA NetMaster, or Microsoft Network Monitor 3.x.
The MIME type for the file format created and read by libpcap, WinPcap, and Npcap is application/vnd.tcpdump.pcap. The typical file extension is .pcap, although .cap and .dmp are also in common use. |
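The classic libpcap capture-file format begins with a 24-byte global header. A minimal sketch of writing and parsing that header (field layout per the tcpdump/libpcap savefile format; the function names are our own):

```python
import struct

# The pcap global header (little-endian variant): magic number, format
# version 2.4, timezone offset, timestamp accuracy, snapshot length,
# and link-layer type.  A format sketch, not a full pcap reader.
PCAP_MAGIC = 0xA1B2C3D4
LINKTYPE_ETHERNET = 1

def global_header(snaplen=65535, linktype=LINKTYPE_ETHERNET):
    """Build the 24-byte global header that opens a .pcap file."""
    return struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, snaplen, linktype)

def parse_global_header(data):
    magic, major, minor, _tz, _sig, snaplen, link = struct.unpack(
        "<IHHiIII", data[:24])
    assert magic == PCAP_MAGIC, "not a little-endian pcap file"
    return {"version": (major, minor), "snaplen": snaplen, "linktype": link}

hdr = parse_global_header(global_header())
print(hdr["version"], hdr["linktype"])  # (2, 4) 1
```

Each captured packet then follows as its own 16-byte record header (timestamp, captured length, original length) plus the raw bytes, which is what tools like tcpdump and Wireshark read back.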
https://en.wikipedia.org/wiki/Smith%E2%80%93Magenis%20syndrome | Smith–Magenis syndrome (SMS), also known as 17p- syndrome, is a microdeletion syndrome characterized by an abnormality in the short (p) arm of chromosome 17. It has features including intellectual disability, facial abnormalities, difficulty sleeping, and numerous behavioral problems such as self-harm. Smith–Magenis syndrome affects an estimated 1 in 15,000 to 1 in 25,000 individuals.
Signs and symptoms
Facial features of children with Smith–Magenis syndrome include a broad and square face, deep-set eyes, large cheeks, and a prominent jaw, as well as a flat nose bridge (in the young child; as the child ages it becomes more ski-jump shaped). Eyes tend to be deep-set, close together, and slanted upwards. Eyebrows are heavy with lateral extension. The mouth is the most noticeable feature; both upper and lower lips are full, and the mouth is wide. The mouth curves downwards and the upper lip curves outwards, due to a fleshy philtrum. These facial features become more noticeable as the individual ages, as mandible growth outstrips that of the maxilla leading to a clear midface hypoplasia. There is also a mild brachycephaly.
Disrupted sleep patterns are characteristic of Smith–Magenis syndrome, typically beginning early in life. Affected individuals may be very sleepy during the day, but have trouble falling asleep and awaken several times each night due to an inverted circadian rhythm of melatonin.
Those affected by Smith–Magenis typically have behavioral problems, which include frequent temper tantrums, meltdowns and outbursts, aggression, anger, fidgeting, compulsive behavior, anxiety, impulsiveness, and difficulty paying attention. Self-harm behaviours, including biting, hitting, head banging, and skin picking, are very common. Behavioral complications in Smith-Magenis syndrome are thought to be worsened by issues with sleeping. Repetitive self-hugging is a behavioral trait that may be unique to Smith–Magenis syndrome. People with this condition may also c |
https://en.wikipedia.org/wiki/Proactive%20maintenance | Proactive maintenance is the maintenance philosophy that supplants “failure reactive” with “failure proactive” by activities that avoid the underlying conditions that lead to machine faults and degradation. Unlike predictive or preventive maintenance, proactive maintenance commissions corrective actions aimed at failure root causes, not failure symptoms. Its central theme is to extend the life of machinery as opposed to
making repairs when often nothing is wrong,
accommodating failure as routine or normal, or
detecting impending failure conditions followed by remediation.
Proactive maintenance depends on rigorous machine inspection and condition monitoring. In mechanical machinery it seeks to detect and eradicate failure root causes such as wrong lubricant, degraded lubricant, contaminated lubricant, botched repair, misalignment, unbalance and operator error.
See also
Predictive maintenance |
https://en.wikipedia.org/wiki/ProQuest%20Dialog | Dialog is an online information service owned by ProQuest, who acquired it from Thomson Reuters in mid-2008.
Dialog was one of the predecessors of the World Wide Web as a provider of information, though not in form. The earliest form of the Dialog system was completed in 1966 in Lockheed Martin under the direction of Roger K. Summit. According to its literature, it was "the world's first online information retrieval system to be used globally with materially significant databases". In the 1980s, a low-priced dial-up version of a subset of Dialog was marketed to individual users as Knowledge Index. This subset included INSPEC, MathSciNet, over 200 other bibliographic and reference databases, as well as third-party retrieval vendors who would go to physical libraries to copy materials for a fee and send it to the service subscriber.
While being owned by the Thomson Corporation, Dialog consisted of the Dialog, DataStar, Profound, and NewsEdge businesses. Dialog and DataStar were consolidated into Dialog. The news content from Profound and NewsEdge were consolidated, and the market research business from Profound was sold to MarketResearch.com. The NewsEdge business was eventually sold to Acquire Media, now Naviga. Prior to being owned by Thomson, MAID purchased Knight-Ridder Information which included the Dialog and DataStar businesses. MAID renamed itself to be the Dialog Corporation.
See also
Colorado Alliance of Research Libraries |
https://en.wikipedia.org/wiki/Epsilon-induction | In set theory, -induction, also called epsilon-induction or set-induction, is a principle that can be used to prove that all sets satisfy a given property. Considered as an axiomatic principle, it is called the axiom schema of set induction.
The principle implies transfinite induction and recursion.
It may also be studied in a general context of induction on well-founded relations.
Statement
The schema is for any given property φ of sets and states that, if for every set x, the truth of φ(x) follows from the truth of φ(y) for all elements y of x, then the property φ holds for all sets.
In symbols:
∀x (∀y (y ∈ x → φ(y)) → φ(x)) → ∀x φ(x)
Note that for the "bottom case" where x denotes the empty set ∅, the subexpression ∀y (y ∈ ∅ → φ(y)) is vacuously true for all propositions, and so that implication is proven by just proving φ(∅).
In words, if a property is persistent when collecting any sets with that property into a new set (and this also requires establishing the property for the empty set), then the property is simply true for all sets. Said differently, persistence of a property with respect to set formation suffices to reach each set in the domain of discourse.
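The companion recursion principle can be illustrated with hereditarily finite sets modeled as nested frozensets: defining a function of x in terms of its values on the elements of x is legitimate precisely because membership is well-founded. A small sketch of our own computes set-theoretic rank this way:

```python
def set_rank(s):
    """∈-recursion on hereditarily finite sets modeled as frozensets:
    rank(x) = sup { rank(y) + 1 : y ∈ x }, with rank(∅) = 0.
    That such a definition determines a unique value for every set is
    exactly what the recursion principle guarantees."""
    return max((set_rank(y) + 1 for y in s), default=0)

empty = frozenset()
one = frozenset({empty})          # the set {∅}
two = frozenset({empty, one})     # the von Neumann ordinal 2 = {∅, {∅}}
print(set_rank(two))  # 2
```

The recursion terminates because frozensets cannot contain themselves, mirroring the well-foundedness of ∈ that the induction schema expresses.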
In terms of classes
One may use the language of classes to express schemata.
Denote the universal class by .
Let be and use the informal as abbreviation for .
The principle then says that for any ,
Here the quantifier ranges over all sets.
In words this says that any class that contains all of its subsets is simply just the class of all sets.
Assuming bounded separation, is a proper class. So the property is exhibited only by the proper class , and in particular by no set. Indeed, note that any set is a subset of itself and under some more assumptions, already the self-membership will be ruled out.
For comparison to another property, note that for a class to be -transitive means
There are many transitive sets - in particular the set theoretical ordinals.
Related notions of induction
If is for some predicate , then with ,
where is defined as .
If is |
https://en.wikipedia.org/wiki/Barometer%20question | The barometer question is an example of an incorrectly designed examination question demonstrating functional fixedness that causes a moral dilemma for the examiner. In its classic form, popularized by American test designer professor Alexander Calandra (1911–2006), the question asked the student to "show how it is possible to determine the height of a tall building with the aid of a barometer." The examiner was confident that there was one, and only one, correct answer, which is found by measuring the difference in pressure at the top and bottom of the building and solving for height. Contrary to the examiner's expectations, the student responded with a series of completely different answers. These answers were also correct, yet none of them proved the student's competence in the specific academic field being tested.
The barometer question achieved the status of an urban legend; according to an internet meme, the question was asked at the University of Copenhagen and the student was Niels Bohr. The Kaplan, Inc. ACT preparation textbook describes it as an "MIT legend", and an early form is found in a 1958 American humor book. However, Calandra presented the incident as a real-life, first-person experience that occurred during the Sputnik crisis. Calandra's essay, "Angels on a Pin", was published in 1959 in Pride, a magazine of the American College Public Relations Association. It was reprinted in Current Science in 1964, in Saturday Review in 1968 and included in the 1969 edition of Calandra's The Teaching of Elementary Science and Mathematics. Calandra's essay became a subject of academic discussion. It was frequently reprinted since 1970, making its way into books on subjects ranging from teaching, writing skills, workplace counseling and investment in real estate to chemical industry, computer programming and integrated circuit design.
Calandra's account
A colleague of Calandra posed the barometer question to a student, expecting the correct answer: "the heigh |
https://en.wikipedia.org/wiki/Zuiyo-maru%20carcass | The Zuiyō-maru carcass was a corpse caught by the Japanese fishing trawler Zuiyō Maru off the coast of New Zealand in 1977. The carcass's peculiar appearance led to speculation that it might be the remains of a sea serpent or prehistoric plesiosaur.
Although several scientists insisted it was "not a fish, whale, or any other mammal", analysis of amino acids in the corpse's muscle tissue later indicated it was most likely the carcass of a basking shark. Decomposing basking shark carcasses lose most of the lower head area and the dorsal and caudal fins first, making them resemble a plesiosaur.
Discovery
On April 25, 1977, the Japanese trawler Zuiyō Maru, sailing east of Christchurch, New Zealand, caught a strange, unknown creature in the trawl. The crew was convinced it was an unidentified animal, but despite the potential biological significance of the curious discovery, the captain, Akira Tanaka, decided to dump the carcass into the ocean again so not to risk spoiling the caught fish. However, before that, some photos and sketches were taken of the creature, nicknamed "Nessie" by the crew, measurements were taken and some samples of skeleton, skin and fins were collected for further analysis by experts in Japan. The discovery resulted in immense commotion and a "plesiosaur-craze" in Japan, and the shipping company ordered all its boats to try to relocate the dumped corpse, but with no apparent success.
Description
The foul-smelling, decomposing corpse reportedly weighed 1,800 kg and was about 10 m long. According to the crew, the creature had a 1.5-m-long neck, four large, reddish fins, and a tail about 2.0 m long. It seemed to lack a dorsal fin on inspection, but one was visible from photographs. No internal organs remained as the chest cavity and gut had opened up from decay, but flesh and fat were somewhat intact.
Proposed explanations
Plesiosaur
Professor Tokio Shikama from Yokohama National University was convinced the remains were of a supposedly extinct plesiosaur. Dr. Fujiro |
https://en.wikipedia.org/wiki/Lambda%20transition | The λ (lambda) universality class is a group in condensed matter physics. It groups together several systems possessing strong analogies, namely superfluids, superconductors and smectics (liquid crystals). All these systems are expected to belong to the same universality class for the thermodynamic critical properties of the phase transition. While these systems appear quite different at first glance, they are all described by similar formalisms and their typical phase diagrams are identical.
See also
Superfluid
Superconductor
Liquid crystal
Phase transition
Renormalization group
Topological defect |
https://en.wikipedia.org/wiki/Musipedia | Musipedia is a search engine for identifying pieces of music. This can be done by whistling a theme, playing it on a virtual piano keyboard, tapping the rhythm on the computer keyboard, or entering the Parsons code. Anybody can modify the collection of melodies and enter MIDI files, bitmaps with sheet music (possibly generated by the Musipedia server after entering LilyPond or abc source code), lyrics or some text about the piece, or the melodic contours as Parsons code. Certain features on the site may no longer work because they rely on Adobe Flash, which was discontinued at the end of 2020.
Search principles
Musipedia offers three ways of searching: Based on the melodic contour, based on pitches and onset times, or based on the rhythm alone. For the first two, users can draw notes, play them on a keyboard, or type out an ASCII version of a melody.
Melody
The melodic contour search uses an editing distance. Because of this, the search engine finds not only entries with exactly the contour that is entered as a query, but also the most similar ones among the contours that are not identical. The similarity is measured by determining the editing steps (inserting, deleting, or replacing a character) that are needed for converting the query contour into that of the search result. Since only the melodic contour is relevant, one can find melodies even if the key, rhythm, or the exact intervals are unknown.
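The editing-distance idea can be sketched in a few lines of Python. This is an illustration of the general Levenshtein algorithm over Parsons-code contour strings, not Musipedia's actual implementation; the example contours are hypothetical:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of insertions, deletions,
    or replacements needed to turn string a into string b."""
    prev = list(range(len(b) + 1))          # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # replace (or match)
        prev = cur
    return prev[-1]

# Parsons code: * = first note, U = up, D = down, R = repeat
query = "*RUURDDDD"        # contour entered by the user
candidate = "*RUURDDDR"    # a stored contour differing in one step
print(edit_distance(query, candidate))
```

Ranking stored contours by this distance gives exactly the behavior described above: identical contours come first, followed by the most similar non-identical ones.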
Pitch and rhythm
The pitch and onset time-based search takes more properties of the melody into account. This search method, which is used by default, is still transposition-invariant and tempo-invariant, but it takes rhythm and intervals into account. The melody can be entered in various ways, for example by clicking on a virtual keyboard on the screen. The search engine then segments the query, converts each segment into a set of points in the two-dimensional space of onset time and pitch, and, by using the Earth Mover's Distance, compares each point set to pre-computed point sets re |
https://en.wikipedia.org/wiki/Tunica%20intima | The tunica intima (Neo-Latin "inner coat"), or intima for short, is the innermost tunica (layer) of an artery or vein. It is made up of one layer of endothelial cells and is supported by an internal elastic lamina. The endothelial cells are in direct contact with the blood flow.
The three layers of a blood vessel are an inner layer (the tunica intima), a middle layer (the tunica media), and an outer layer (the tunica externa).
In dissection, the inner coat (tunica intima) can be separated from the middle (tunica media) by a little maceration, or it may be stripped off in small pieces; but, because of its friability, it cannot be separated as a complete membrane. It is a fine, transparent, colorless structure which is highly elastic, and, after death, is commonly corrugated into longitudinal wrinkles.
Structure
The structure of the tunica intima depends on the blood vessel type.
Elastic arteries – A single layer of endothelial cells and a supporting layer of elastin-rich collagen. The layer also contains fibroblasts and smooth muscle cells called 'myointimal cells'
Muscular arteries – Endothelial cells
Arterioles – A single layer of endothelial cells
Veins – Endothelial cells
The inner coat consists of:
A layer of pavement endothelium, the cells of which are polygonal, oval, or fusiform, and have very distinct round or oval nuclei. This endothelium is brought into view most distinctly by staining with silver nitrate.
A subendothelial layer, consisting of delicate connective tissue with branched cells lying in the interspaces of the tissue; in arteries of less than 2 mm in diameter the subendothelial layer consists of a single stratum of stellate cells, and the connective tissue is only largely developed in vessels of a considerable size.
An elastic or fenestrated layer, which consists of a membrane containing a network of elastic fibers, having principally a longitudinal direction, and in which, under the microscope, small elongated apertures or perforation |
https://en.wikipedia.org/wiki/Osteolepis | Osteolepis (from 'bone' and 'scale') is an extinct genus of lobe-finned fish from the Devonian period. It lived in the Lake Orcadie of northern Scotland.
Osteolepis was about long, and covered with large, square scales. The scales and plates on its head were covered in a thin layer of spongy, bony material called cosmine. This layer contained canals that were connected to sensory cells deeper in the skin. These canals ended in pores on the surface and were probably for sensing vibrations in the water.
Osteolepis was a rhipidistian, having a number of features in common with the tetrapods (land-dwelling vertebrates and their descendants), and was probably close to the base of the tetrapod family tree. |
https://en.wikipedia.org/wiki/Tunica%20media | The tunica media (Neo-Latin "middle coat"), or media for short, is the middle tunica (layer) of an artery or vein. It lies between the tunica intima on the inside and the tunica externa on the outside.
Artery
Tunica media is made up of smooth muscle cells, elastic tissue and collagen. It lies between the tunica intima on the inside and the tunica externa on the outside.
The middle coat (tunica media) is distinguished from the inner (tunica intima) by its color and by the transverse arrangement of its fibers.
In the smaller arteries it consists principally of smooth muscle fibers in fine bundles, arranged in lamellae and disposed circularly around the vessel. These lamellae vary in number according to the size of the vessel; the smallest arteries having only a single layer, and those slightly larger three or four layers - up to a maximum of six layers. It is to this coat that the thickness of the wall of the artery is mainly due.
In the larger arteries, as the iliac, femoral, and carotid, elastic fibers and collagen unite to form lamellae which alternate with the layers of smooth muscular fibers; these lamellae are united to one another by elastic fibers which pass between the smooth muscular bundles, and are connected with the fenestrated membrane of the inner coat.
In the largest arteries, as the aorta and brachiocephalic, the amount of elastic tissue is considerable; in these vessels a few bundles of white connective tissue also have been found in the middle coat. The muscle fiber cells are arranged in 5 to 7 layers of circular and longitudinal smooth muscle, are about 50 μm in length, and contain well-marked, rod-shaped nuclei, which are often slightly curved. Separating the tunica media from the outer tunica externa in larger arteries is the external elastic membrane (also called the external elastic lamina). This structure is not usually seen in smaller arteries, nor is it seen in veins.
Vein
The middle coat is composed of a thick layer of connective tissue |
https://en.wikipedia.org/wiki/Orientpet | Orientpets or Orienpets are a group of hybrid lilies, the result of crosses between Oriental and trumpet lilies. They are commonly sold as OT hybrids in stores or nurseries.
The main characteristics of OT lilies are plant vigor, diversity of colors, fragrance, and the thick texture of the petals. Many are hardy to USDA zones 3 and 4, whereas Orientals and trumpets usually are not.
https://en.wikipedia.org/wiki/Resurrection%20plant | A resurrection plant is any poikilohydric plant that can survive extreme dehydration, even over months or years.
Examples include:
Anastatica hierochuntica, also known as the Rose of Jericho, a plant species native to deserts of North Africa
Asteriscus (plant)
Boea hygrometrica
Craterostigma, members of the Linderniaceae/Scrophulariaceae with snapdragon-like flowers
Haberlea rhodopensis
Lichen, a symbiosis that can survive extreme desiccation
Mesembryanthemum, which can revive within a short period after a drought
Myrothamnus flabellifolius, a plant species native to Southern Africa
Pleopeltis polypodioides, also known as resurrection fern
Ramonda serbica, a species in the family Gesneriaceae
Selaginella lepidophylla, a plant species native to North America, Central and South America, and sold as a novelty
Tillandsia
Xerophyta, a monocotyledonous genus typically occurring on rock outcrops in Southern African grasslands
Certain resurrection plants have long been sold in their dry, "lifeless" form as curiosities. This custom was noted by many 19th-century authors, and continues today.
In December 2015, resurrection plants were featured in a TED talk given by Professor Jill Farrant of Molecular and Cell Biology at the University of Cape Town, South Africa, who performs targeted genetic modification of crop plants to make them tolerate desiccation by activating genes that are already present but not natively expressed in response to drought.
See also
Dehydration
Cryptobiosis
Anhydrobiosis
Hygrochasy |
https://en.wikipedia.org/wiki/Seccomp | seccomp (short for secure computing mode) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a "secure" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors. Should it attempt any other system calls, the kernel will either just log the event or terminate the process with SIGKILL or SIGSYS. In this sense, it does not virtualize the system's resources but isolates the process from them entirely.
seccomp mode is enabled via the prctl() system call using the PR_SET_SECCOMP argument, or (since Linux kernel 3.17) via the seccomp() system call. seccomp mode used to be enabled by writing to the file /proc/self/seccomp, but this method was removed in favor of prctl(). In some kernel versions, seccomp disables the RDTSC x86 instruction, which returns the number of elapsed processor cycles since power-on and is used for high-precision timing.
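Entering strict mode can be demonstrated from userspace. The sketch below (Linux-only; it calls prctl(2) through ctypes, with constants taken from the kernel headers) forks a child that enables SECCOMP_MODE_STRICT, performs a permitted write(), then attempts a disallowed syscall and is killed by the kernel:

```python
import ctypes
import os
import signal

PR_SET_SECCOMP = 22        # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1    # from <linux/seccomp.h>

libc = ctypes.CDLL(None, use_errno=True)

def run_strict_child() -> bool:
    """Fork a child that enters strict seccomp mode.

    Returns True if the kernel killed the child with SIGKILL after it
    attempted a syscall outside the read/write/exit/sigreturn whitelist."""
    pid = os.fork()
    if pid == 0:
        libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)
        os.write(1, b"write() still works\n")   # allowed syscall
        os.dup(1)                               # dup() is not -> SIGKILL
        os._exit(0)                             # never reached
    _, status = os.waitpid(pid, 0)
    return os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL

if __name__ == "__main__":
    print("child killed by kernel:", run_strict_child())
```

On modern kernels a seccomp-bpf filter installed via the seccomp() syscall is usually preferred over this all-or-nothing strict mode.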
seccomp-bpf is an extension to seccomp that allows filtering of system calls using a configurable policy implemented using Berkeley Packet Filter rules. It is used by OpenSSH and vsftpd as well as the Google Chrome/Chromium web browsers on ChromeOS and Linux. (In this regard seccomp-bpf achieves similar functionality, but with more flexibility and higher performance, to the older systrace—which seems to be no longer supported for Linux.)
Some consider seccomp comparable to OpenBSD pledge(2) and FreeBSD capsicum(4).
History
seccomp was first devised by Andrea Arcangeli in January 2005 for use in public grid computing and was originally intended as a means of safely running untrusted compute-bound programs. It was merged into the Linux kernel mainline in kernel version 2.6.12, which was released on March 8, 2005.
Software using seccomp or seccomp-bpf
Android uses a seccomp-bpf filter in the zygote since Android 8.0 Oreo.
systemd's sandboxing options are based on seccomp.
QEMU, the Quick Emulator, the core component to the |
https://en.wikipedia.org/wiki/Kummer%E2%80%93Vandiver%20conjecture | In mathematics, the Kummer–Vandiver conjecture, or Vandiver conjecture, states that a prime p does not divide the class number hK of the maximal real subfield of the p-th cyclotomic field.
The conjecture was first made by Ernst Kummer on 28 December 1849 and 24 April 1853 in letters to Leopold Kronecker, and was independently rediscovered around 1920 by Philipp Furtwängler and Harry Vandiver.
As of 2011, there is no particularly strong evidence either for or against the conjecture and it is unclear whether it is true or false, though it is likely that counterexamples are very rare.
Background
The class number h of the cyclotomic field is a product of two integers h1 and h2, called the first and second factors of the class number, where h2 is the class number of the maximal real subfield of the p-th cyclotomic field. The first factor h1 is well understood and can be computed easily in terms of Bernoulli numbers, and is usually rather large. The second factor h2 is not well understood and is hard to compute explicitly, and in the cases when it has been computed it is usually small.
Kummer showed that if a prime p does not divide the class number h, then Fermat's Last Theorem holds for exponent p.
The Kummer–Vandiver conjecture states that p does not divide the second factor h2.
Kummer showed that if p divides the second factor, then it also divides the first factor. In particular the Kummer–Vandiver conjecture holds for regular primes (those for which p does not divide the first factor).
Evidence for and against the Kummer–Vandiver conjecture
Kummer verified the Kummer–Vandiver conjecture for p less than 200, and Vandiver extended this to p less than 600.
Computer calculations have verified the conjecture for all p < 12 million; later computations extended this to all primes less than 163 million, and subsequently to all primes less than 2^31.
An informal probability argument, based on rather dubious assumptions about the equidistribution of class numbers mod p, suggests that the number of primes less t
https://en.wikipedia.org/wiki/Atari%20Mindlink | The Atari Mindlink is an unreleased video game controller for the Atari 2600, originally intended for release in 1984. The Mindlink was unique in that its headband form factor controlled the game by reading the myoneural signal voltage from the player's forehead. The player's forehead movements were read by infrared sensors and translated into movement in the game.
Specially supported games are similar to those that use the paddle controller, but played with the Mindlink instead. Three games were in development for the Mindlink at the time of its cancellation: Bionic Breakthrough, Telepathy, and Mind Maze. Bionic Breakthrough is essentially a Breakout clone controlled with the Mindlink. Mind Maze used the Mindlink to mimic ESP, appearing to predict what was printed on cards. Testing showed that players frequently got headaches from moving their eyebrows to play. None of these games were ever released in any other form.
https://en.wikipedia.org/wiki/Silex | Silex is any of various forms of ground stone. In modern contexts the word refers to a finely ground, nearly pure form of silica or silicate.
In the late 16th century, it meant powdered or ground-up "flints" (i.e. stones, generally meaning the class of "hard rocks").
The word appeared again in 1787 in a published paper by Antoine Lavoisier describing experiments, in which such earths are mentioned as a source of the element silicon. Silex is now most commonly used to describe finely ground silicates used as pigments in paint.
Archaic and foreign uses
The word "silex" was previously used to refer to flint and chert and sometimes other hard rocks.
In Latin "silex" originally referred to any hard rock, although now it often refers specifically to flint.
In many Latin languages, "silex" or a similar word is used to refer to flint. Although the modern English word "silex" has the same etymology, its current meaning has changed. These are false friends.
FK Sileks is a football club based in Kratovo, North Macedonia, whose name literally means "flint".
https://en.wikipedia.org/wiki/O-ring%20chain | The o-ring chain is a specialized type of roller chain used in the transmission of mechanical power from one sprocket to another.
Construction
The o-ring chain is named for the rubber o-rings built into the space between the outside link plate and the inside roller link plates. Chain manufacturers began to include this feature in 1971 after the application was invented by Joseph Montano while working for Whitney Chain of Hartford, Connecticut. O-rings were included as a way to improve lubrication to the links of power transmission chains, a service that is vitally important to extending their working life. These rubber fixtures form a barrier that holds factory applied lubricating grease inside the pin and bushing wear areas. Further, the rubber o-rings prevent dirt and other contaminants from entering the chain linkages, where such particles would otherwise cause significant wear.
Applications
O-ring chains are most notably used in motorcycles, one of the most demanding applications for a metal chain. High rpm and heavy loads require bulky chains, but such engineering increases the effect of friction compared to lighter chains. So lubrication plays a vital role here, but the high rpm also makes it very difficult to keep lubrication inside and on the chain. Additionally, motorcycle chains are exposed to a large volume of contaminants and particles and must be protected. O-rings, as described above, fit this application perfectly.
See also
X-ring chain
Roller chain
Motorcycle transmission
https://en.wikipedia.org/wiki/Electrooculography | Electrooculography (EOG) is a technique for measuring the corneo-retinal standing potential that exists between the front and the back of the human eye. The resulting signal is called the electrooculogram. Primary applications are in ophthalmological diagnosis and in recording eye movements. Unlike the electroretinogram, the EOG does not measure response to individual visual stimuli.
To measure eye movement, pairs of electrodes are typically placed either above and below the eye or to the left and right of the eye. If the eye moves from center position toward one of the two electrodes, this electrode "sees" the positive side of the retina and the opposite electrode "sees" the negative side of the retina. Consequently, a potential difference occurs between the electrodes. Assuming that the resting potential is constant, the recorded potential is a measure of the eye's position.
In 1951, Elwin Marg described and named the electrooculogram, a technique for measuring the resting potential of the retina in the human eye.
Principle
The eye acts as a dipole in which the anterior pole is positive and the posterior pole is negative.
Left gaze: the cornea approaches the electrode near the outer canthus of the left eye, resulting in a negative-trending change in the recorded potential difference.
Right gaze: the cornea approaches the electrode near the inner canthus of the left eye, resulting in a positive-trending change in the recorded potential difference.
Ophthalmological diagnosis
The EOG is used to assess the function of the pigment epithelium. During dark adaptation, resting potential decreases slightly and reaches a minimum ("dark trough") after several minutes. When light is switched on, a substantial increase of the resting potential occurs ("light peak"), which drops off after a few minutes when the retina adapts to the light. The ratio of the voltages (i.e. light peak divided by dark trough) is known as the Arden ratio. In practice, the measurement is similar t |
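The Arden ratio computation itself is simple; below is a minimal sketch. The amplitudes are made-up illustrative values in microvolts, and the function and its names are hypothetical helpers, not part of any clinical software described here:

```python
def arden_ratio(light_peak_uv: float, dark_trough_uv: float) -> float:
    """EOG Arden ratio: light-peak amplitude divided by dark-trough amplitude."""
    if dark_trough_uv <= 0:
        raise ValueError("dark trough amplitude must be positive")
    return light_peak_uv / dark_trough_uv

# Illustrative (hypothetical) measurement:
ratio = arden_ratio(light_peak_uv=520.0, dark_trough_uv=250.0)
print(f"Arden ratio = {ratio:.2f}")   # 520/250 = 2.08
```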
https://en.wikipedia.org/wiki/FLiNaK | FLiNaK is the name of the ternary eutectic alkali-metal fluoride salt mixture LiF-NaF-KF (46.5-11.5-42 mol %). It has a melting point of 454 °C and a boiling point of 1570 °C. It is used as an electrolyte for the electroplating of refractory metals and compounds such as titanium, tantalum, hafnium, zirconium and their borides. FLiNaK could also see use as a coolant in the very high temperature reactor, a type of nuclear reactor.
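As a quick check on the composition given above, the mixture's mean molar mass can be computed; the component molar masses used here are standard textbook values assumed for illustration, not figures from this article:

```python
# FLiNaK eutectic composition, mole fractions (from the text above)
composition = {"LiF": 0.465, "NaF": 0.115, "KF": 0.420}

# Approximate molar masses in g/mol (standard values, assumed)
molar_mass = {"LiF": 25.94, "NaF": 41.99, "KF": 58.10}

mean_mass = sum(frac * molar_mass[salt] for salt, frac in composition.items())
print(f"mean molar mass of FLiNaK ≈ {mean_mass:.1f} g/mol")   # ≈ 41.3
```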
Coolant
FLiNaK salt was researched heavily during the late 1950s by Oak Ridge National Laboratory as a potential candidate coolant for the molten salt reactor because of its low melting point, its high heat capacity, and its chemical stability at high temperatures. Ultimately its sister salt, FLiBe, was chosen as the solvent salt for the molten salt reactor due to a more desirable nuclear cross section. FLiNaK still attracts interest as an intermediate coolant for a high-temperature molten salt reactor, where it could transfer heat without being in the presence of the fuel.
Chemistry
Fluoride salts, like all salts, cause corrosion in most metals and alloys. FLiNaK differs from FLiBe in that it is a basic melt, i.e. it has an excess of fluoride ions. When FLiNaK melts, all three components are alkali fluorides and therefore dissociate into positive and negative ions. The free fluoride ions in the melt can corrode metallic structures wherever it is energetically favorable. This is in contrast to FLiBe, which in a 66-34 mol% mixture is a chemically neutral melt, as fluoride ions from LiF are donated to BeF2 to form the tetrafluoroberyllate ion BeF42−.
See also
Molten salt reactor
FLiBe
Fluoride volatility |
https://en.wikipedia.org/wiki/Japan%20Microgravity%20Centre | Japan Microgravity Centre (JAMIC) is a site for microgravity experiments at a 710-metre-deep abandoned coal mine at Kamisunagawa, Hokkaido. A capsule is dropped from the top to simulate "zero gravity". Jets accelerate the capsule to counteract air resistance. At the bottom, the capsule is slowed with gradual deceleration. Cushioning exists at the bottom for emergencies.
This facility is now closed. |
https://en.wikipedia.org/wiki/Ware%20Tetralogy | The Ware Tetralogy is a series of four science fiction novels by author Rudy Rucker: Software (1982), Wetware (1988), Freeware (1997) and Realware (2000).
The first two books both received the Philip K. Dick Award for best novel. The closest to the cyberpunk genre of all his works, the tetralogy explores themes such as rapid technological change, generational differences, consciousness, mortality and recreational drug use.
In 2010, Prime Books published The Ware Tetralogy: Four Novels by Rudy Rucker, which collects the entire series in a single paperback volume and includes an introduction by noted cyberpunk author William Gibson. The online version of The Ware Tetralogy was simultaneously released for free distribution under a Creative Commons Attribution-NonCommercial-No-Derivative License.
Plot summary
The events in the series are set in motion by Cobb Anderson, a computer scientist born in 1950 as part of the baby boomer generation. In the late part of the 20th century, the population bulge of the Baby Boomers causes massive unemployment. By 1995, Anderson's self-replicating robots, known as "boppers", colonize the Moon. By 2010, the United States Social Security system collapses. In response to riots, the federal government turns over the state of Florida to the elderly. This leads directly into the events of Software, in 2020.
Software
Software introduces Cobb Anderson as a retired computer scientist who was once tried for treason for figuring out how to give robots artificial intelligence and free will, creating the race of boppers. By 2020, they have created a complex society on the Moon, where the boppers developed because they depend on super-cooled superconducting Josephson effect circuits. In that year, Anderson is a pheezer — a freaky geezer, Rucker's depiction of elderly Baby Boomers — living in poverty in Florida and terrified because he lacks the money to buy a new artificial heart to replace his failing, secondhand one.
As the story begins, And |
https://en.wikipedia.org/wiki/Monte%20Carlo%20N-Particle%20Transport%20Code | Monte Carlo N-Particle Transport (MCNP) is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation transport code designed to track many particle types over broad ranges of energies and is developed by Los Alamos National Laboratory. Specific areas of application include, but are not limited to, radiation protection and dosimetry, radiation shielding, radiography, medical physics, nuclear criticality safety, detector design and analysis, nuclear oil well logging, accelerator target design, fission and fusion reactor design, decontamination and decommissioning. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori.
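The cell-and-surface geometry style described above can be illustrated with a minimal, hypothetical input deck. The card names follow general MCNP input conventions, but the deck is a sketch of the format, not a validated input file:

```
Sketch: 1 MeV neutron point source in a water sphere
1 1 -1.0  -1   imp:n=1   $ cell 1: water inside surface 1
2 0        1   imp:n=0   $ cell 2: outside world, particles killed

1 so 10.0                $ surface 1: sphere at origin, radius 10 cm

mode n                   $ transport neutrons only
m1 1001 2 8016 1         $ material 1: H2O by atom fractions
sdef pos=0 0 0 erg=1     $ isotropic 1 MeV point source at origin
f4:n 1                   $ track-length estimate of flux in cell 1
nps 100000               $ number of source histories
```

The three blank-line-separated blocks are the cell cards, surface cards, and data cards; `so` defines a sphere centered at the origin, a second-degree surface of the kind mentioned above.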
Point-wise cross section data are typically used, although group-wise data also are available. For neutrons, all reactions given in a particular cross-section evaluation (such as ENDF/B-VI) are accounted for. Thermal neutrons are described by both the free gas and S(α,β) models. For photons, the code accounts for incoherent and coherent scattering, the possibility of fluorescent emission after photoelectric absorption, absorption in pair production with local emission of annihilation radiation, and bremsstrahlung. A continuous-slowing-down model is used for electron transport that includes positrons, k x-rays, and bremsstrahlung but does not include external or self-induced fields.
Important standard features that make MCNP very versatile and easy to use include a powerful general source, criticality source, and surface source; both geometry and output tally plotters; a rich collection of variance reduction techniques; a flexible tally structure; and an extensive collection of cross-section data.
MCNP contains numerous flexible tallies: surface current & flux, volume flux (track length), point or ring detectors, particle heating, fission heating, pulse height tally for energy or charge depositi |
https://en.wikipedia.org/wiki/Helmholtz%20theorem%20%28classical%20mechanics%29 | The Helmholtz theorem of classical mechanics reads as follows:
Let be the Hamiltonian of a one-dimensional system, where is the kinetic energy and is a "U-shaped" potential energy profile which depends on a parameter .
Let denote the time average. Let
Then
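For reference, one common way the elided formulas are written (notation and sign conventions vary between sources; this is a hedged reconstruction, not the article's own statement):

```latex
% One-dimensional Hamiltonian with a U-shaped potential depending on a parameter V
H(x,p;V) = K(p) + \varphi(x;V), \qquad K(p) = \frac{p^{2}}{2m}

% With \langle\,\cdot\,\rangle_t the time average over a period, define
T(E,V) = 2\,\langle K \rangle_t, \qquad
P(E,V) = \Big\langle -\frac{\partial \varphi}{\partial V} \Big\rangle_t, \qquad
S(E,V) = \log \oint p \,\mathrm{d}x
       = \log \int_{x_-(E,V)}^{x_+(E,V)} 2\sqrt{2m\,\bigl[E - \varphi(x;V)\bigr]}\;\mathrm{d}x

% Then the heat-theorem-like relations hold:
\frac{\partial S}{\partial E} = \frac{1}{T}, \qquad
\frac{\partial S}{\partial V} = \frac{P}{T}
```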
Remarks
The thesis of this theorem of classical mechanics reads exactly as the heat theorem of thermodynamics. This fact shows that thermodynamic-like relations exist between certain mechanical quantities. This in turn makes it possible to define the "thermodynamic state" of a one-dimensional mechanical system. In particular, the temperature is given by the time average of the kinetic energy, and the entropy by the logarithm of the action.
The importance of this theorem was recognized by Ludwig Boltzmann, who saw how to apply it to macroscopic systems (i.e. multidimensional systems) in order to provide a mechanical foundation of equilibrium thermodynamics. This research activity was closely related to his formulation of the ergodic hypothesis.
A multidimensional version of the Helmholtz theorem, based on the ergodic theorem of George David Birkhoff is known as generalized Helmholtz theorem. |
https://en.wikipedia.org/wiki/StrongSwan | strongSwan is a multiplatform IPsec implementation. The focus of the project is on authentication mechanisms using X.509 public key certificates and optional storage of private keys and certificates on smartcards through a PKCS#11 interface and on TPM 2.0.
Overview
The project is maintained by Andreas Steffen who is a professor emeritus for Security in Communications with the University of Applied Sciences in Rapperswil, Switzerland.
As a descendant of the FreeS/WAN project, strongSwan continues to be released under the GPL license. It supports certificate revocation lists and the Online Certificate Status Protocol (OCSP). A unique feature is the use of X.509 attribute certificates to implement access control schemes based on group memberships. strongSwan interoperates with other IPsec implementations, including various Microsoft Windows and macOS VPN clients. The current version of strongSwan fully implements the Internet Key Exchange (IKEv2) protocol defined by RFC 7296.
Features
strongSwan supports IKEv1 and fully implements IKEv2.
IKEv1 and IKEv2 features
strongSwan offers plugins, enhancing its functionality. The user can choose among three crypto libraries (legacy [non-US] FreeS/WAN, OpenSSL, and gcrypt).
Using the openssl plugin, strongSwan supports Elliptic Curve Cryptography (ECDH groups and ECDSA certificates and signatures) both for IKEv2 and IKEv1, so that interoperability with Microsoft's Suite B implementation on Vista, Win 7, Server 2008, etc. is possible.
Automatic assignment of virtual IP addresses to VPN clients from one or several address pools using either the IKEv1 ModeConfig or IKEv2 Configuration payload. The pools are either volatile (i.e. RAM-based) or stored in a SQLite or MySQL database (with configurable lease-times).
The ipsec pool command line utility allows the management of IP address pools and configuration attributes like internal DNS and NBNS servers.
IKEv2 only features
The IKEv2 daemon is inherently multi-threaded (16 |
https://en.wikipedia.org/wiki/Arytenoid%20cartilage | The arytenoid cartilages () are a pair of small three-sided pyramids which form part of the larynx. They are the site of attachment of the vocal cords. Each is pyramidal or ladle-shaped and has three surfaces, a base, and an apex. The arytenoid cartilages allow for movement of the vocal cords by articulating with the cricoid cartilage. They may be affected by arthritis, dislocations, or sclerosis.
Structure
The arytenoid cartilages are part of the posterior part of the larynx.
Surfaces
The posterior surface is triangular, smooth, concave, and gives attachment to the arytenoid muscle and transversus.
The antero-lateral surface is somewhat convex and rough. On it, near the apex of the cartilage, is a rounded elevation (colliculus) from which a ridge (crista arcuata) curves at first backward and then downward and forward to the vocal process. The lower part of this crest intervenes between two depressions or foveæ, an upper, triangular, and a lower oblong in shape; the latter gives attachment to the thyroarytenoid muscle (vocal muscle).
The medial surface is narrow, smooth, and flattened, covered by mucous membrane. It forms the lateral boundary of the intercartilaginous part of the rima glottidis.
Base and apex
The base of each cartilage is broad, and on it is a concave smooth surface, for articulation with the cricoid cartilage.
Its lateral angle is called the muscular process.
Its anterior angle is called the vocal process.
The apex of each cartilage is pointed, curved backward and medialward, and surmounted by a small conical, cartilaginous nodule, the corniculate cartilage. It articulates with the cricoid lamina with a ball-and-socket joint.
Function
The arytenoid cartilages allow the vocal folds to be tensed, relaxed, or approximated. They articulate with the supero-lateral parts of the cricoid cartilage lamina, forming the cricoarytenoid joints at which they can come together, move apart, tilt anteriorly or posteriorly, and rotate.
Clinical signif |
https://en.wikipedia.org/wiki/Reliability%2C%20availability%20and%20serviceability | Reliability, availability and serviceability (RAS), also known as reliability, availability, and maintainability (RAM), is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was originally used by International Business Machines (IBM) as a term to describe the robustness of their mainframe computers.
Computers designed with higher levels of RAS have many features that protect data integrity and help them stay available for long periods of time without failure. This data integrity and uptime are a particular selling point for mainframes and fault-tolerant systems.
Definitions
While RAS originated as a hardware-oriented term, systems thinking has extended the concept of reliability-availability-serviceability to systems in general, including software.
Reliability can be defined as the probability that a system will produce correct outputs up to some given time t. Reliability is enhanced by features that help to avoid, detect and repair hardware faults. A reliable system does not silently continue and deliver results that include uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption, for example: by retrying an operation for transient (soft) or intermittent errors, or else, for uncorrectable errors, isolating the fault and reporting it to higher-level recovery mechanisms (which may failover to redundant replacement hardware, etc.), or else by halting the affected program or the entire system and reporting the corruption. Reliability can be characterized in terms of mean time between failures (MTBF), with reliability = exp(-t/MTBF).
Availability means the probability that a system is operational at a given time, i.e. the amount of time a device is actually operating as the percentage of total time it should be operating. High-availability systems may report availability in terms of minutes or hours of downtime per year. Availability features allow the |
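The two definitions above can be turned into back-of-envelope figures. The sketch below uses purely illustrative numbers; MTTR (mean time to repair) is an assumption introduced here for the common steady-state availability formula MTBF / (MTBF + MTTR), which the text does not spell out:

```python
import math

# Illustrative reliability/availability arithmetic; all numbers are invented.
def reliability(t_hours, mtbf_hours):
    """P(correct operation up to time t) under the exponential model exp(-t/MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR) -- an assumed formula here."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(reliability(8760, 100_000))   # one year of operation against a 100,000 h MTBF
print(availability(100_000, 8))     # 8 h repair time: about 99.99% available
```

Note how a very long MTBF still yields a noticeable one-year failure probability, while availability depends mostly on how quickly repairs complete.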
https://en.wikipedia.org/wiki/Meniscal%20cyst | Meniscal cyst is a well-defined cystic lesion located along the peripheral margin of the meniscus, a part of the knee, nearly always associated with horizontal meniscal tears.
Signs and symptoms
Pain and swelling or focal mass at the level of the joint. The pain may be related to a meniscal tear or distension of the knee capsule or both. The mass varies in consistency from soft/fluctuant to hard. Size is variable, and meniscal cysts are known to change in size with knee flexion/extension.
Cause
Various etiologies have been proposed, including trauma, hemorrhage, chronic infection, and mucoid degeneration. The most widely accepted theory describes meniscal cysts resulting from extrusion of synovial fluid through a peripherally extended horizontal meniscal tear, accumulating outside the joint capsule. They arise more commonly from the lateral joint margin, and occur most often in 20- to 40-year-old males.
Diagnosis
Magnetic resonance imaging is the modality of choice for diagnosis of meniscal cysts. In their most subtle form, meniscal cysts present as focal areas of high signal intensity within a swollen meniscus. It is not uncommon for radiologists to miss this type of meniscal cyst because the signal intensity is not quite as great as that of fluid on T2-weighted sequences. When this fluid is extruded into the adjacent soft tissues, the swollen meniscus subsequently assumes a more normal shape, and the extruded fluid demonstrates a higher T2 signal typical of parameniscal cysts.
Medial meniscus horizontal tear extending into a meniscal cyst.
Sagittal T2 images of a medial meniscus horizontal tear extending into a meniscal cyst.
Large medial meniscus cyst.
Treatment
Treatment of meniscal cysts consists of a combination of cyst decompression (intraarticular decompression versus open cystectomy) and arthroscopic repair of any meniscal abnormalities. Success rates are significantly higher when both the cyst and meniscal tear are treated compared to treating only one di |
https://en.wikipedia.org/wiki/The%20Weather%20Man | The Weather Man is a 2005 American dark comedy-drama film directed by Gore Verbinski, written by Steve Conrad, and starring Nicolas Cage in the lead role and Michael Caine and Hope Davis in supporting roles. It tells the story of a weatherman in the midst of a mid-life crisis.
The film was released on October 28, 2005, and grossed $19 million worldwide. It received mixed reviews upon release.
Plot
A successful weatherman at a Chicago news program, David Spritz (Nicolas Cage) is well paid but garners little respect from people in the area, who throw fast food at him; David suspects it is because they resent how easy his high-paying job appears. Dave also feels overshadowed by his father, Pulitzer Prize-winning author Robert Spritzel (Michael Caine), who is disappointed in Dave's apparent inability to grow up and deal with his two children. The situation worsens when Robert is diagnosed with lymphoma and given only a few months to live. As he becomes more and more depressed, Dave takes up archery, finding the activity a way to build his focus and calm his nerves.
David later remembers a conversation between himself and his father, where his father explains to him that "the harder thing to do and the right thing to do are often the same thing" and that "nothing that has meaning is easy". David appreciates this advice but struggles to implement it.
To prove himself to his father and possibly reconcile with Noreen (Hope Davis), his estranged wife, Dave pursues a weatherman position with a national talk show called Hello America. The job would nearly quadruple his salary, but means relocating to New York City. When Hello America invites him to New York, he takes his daughter, Shelly (Gemmenne de la Peña), with him and bonds with her by helping her shop for a more suitable wardrobe. While away, Dave learns that his son Mike (Nicholas Hoult) attacked his counselor, Don Bowden (Gil Bellows), claiming that the man wanted to perform oral sex on him. Despite this stress an |
https://en.wikipedia.org/wiki/Loschmidt%20constant | The Loschmidt constant or Loschmidt's number (symbol: n0) is the number of particles (atoms or molecules) of an ideal gas per volume (the number density), and is usually quoted at standard temperature and pressure. The 2014 CODATA recommended value is 2.686 7811(15)×10^25 per cubic metre at 0 °C and 1 atm, and the 2006 CODATA recommended value was 2.686 7774(47)×10^25 per cubic metre at 0 °C and 1 atm. It is named after the Austrian physicist Johann Josef Loschmidt, who was the first to estimate the physical size of molecules in 1865. The term "Loschmidt constant" is also sometimes used to refer to the Avogadro constant, particularly in German texts.
By the ideal gas law, pV = N kB T, and since n0 = N/V, the Loschmidt constant is given by the relationship

n0 = p0 / (kB T0),
where kB is the Boltzmann constant, p0 is the standard pressure, and T0 is the standard thermodynamic temperature.
Since the Avogadro constant NA satisfies NA = R/kB, the Loschmidt constant satisfies

n0 = p0 NA / (R T0),
where R is the ideal gas constant.
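Evaluating this relationship numerically (a quick sketch using standard constant values) reproduces the quoted order of magnitude:

```python
# Evaluate n0 = p0 / (kB * T0) at 0 degC and 1 atm.
kB = 1.380649e-23   # Boltzmann constant, J/K (exact since the 2019 SI revision)
p0 = 101325.0       # 1 atm in Pa
T0 = 273.15         # 0 degC in K

n0 = p0 / (kB * T0)
print(f"n0 = {n0:.4e} per cubic metre")   # ~2.687e25
```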
Being a measure of number density, the Loschmidt constant is used to define the amagat, a practical unit of number density for gases and other substances:
1 amagat = n0 ≈ 2.687×10^25 particles per cubic metre,
such that the Loschmidt constant is exactly 1 amagat.
Modern determinations
In the CODATA set of recommended values for physical constants, the Loschmidt constant is calculated from the gas constant and the Avogadro constant:

n0 = p0 NA / (R T0) = p0 Ar(e) Mu c α² / (2 R∞ h R T0),

where Ar(e) is the relative atomic mass of the electron, Mu is the molar mass constant, c is the speed of light, α is the fine-structure constant, R∞ is the Rydberg constant and h is the Planck constant. The pressure and temperature can be chosen freely and must be quoted with values of the Loschmidt constant. The precision to which the Loschmidt constant is currently known is limited entirely by the uncertainty in the value of the gas constant.
First determinations
Loschmidt did not actually calculate a value for the constant which now bears his name, but it is a simple and logical manipulation of his published results. James Clerk Maxwell descr |
https://en.wikipedia.org/wiki/Singly%20and%20doubly%20even | In mathematics an even integer, that is, a number that is divisible by 2, is called evenly even or doubly even if it is a multiple of 4, and oddly even or singly even if it is not. The former names are traditional ones, derived from ancient Greek mathematics; the latter have become common in recent decades.
These names reflect a basic concept in number theory, the 2-order of an integer: how many times the integer can be divided by 2. This is equivalent to the multiplicity of 2 in the prime factorization.
A singly even number can be divided by 2 only once; it is even but its quotient by 2 is odd.
A doubly even number is an integer that is divisible more than once by 2; it is even and its quotient by 2 is also even.
The separate consideration of oddly and evenly even numbers is useful in many parts of mathematics, especially in number theory, combinatorics, coding theory (see even codes), among others.
Definitions
The ancient Greek terms "even-times-even" and "even-times-odd" were given various inequivalent definitions by Euclid and later writers such as Nicomachus. Today, there is a standard development of the concepts. The 2-order or 2-adic order is simply a special case of the p-adic order at a general prime number p; see p-adic number for more on this broad area of mathematics. Many of the following definitions generalize directly to other primes.
For an integer n, the 2-order of n (also called valuation) is the largest natural number ν such that 2^ν divides n. This definition applies to positive and negative numbers n, although some authors restrict it to positive n; and one may define the 2-order of 0 to be infinity (see also parity of zero). The 2-order of n is written ν2(n) or ord2(n). It is not to be confused with the multiplicative order modulo 2.
The 2-order provides a unified description of various classes of integers defined by evenness:
Odd numbers are those with ν2(n) = 0, i.e., integers of the form 2k + 1.
Even numbers are those with ν2(n) > 0 |
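A direct implementation of the 2-order (a small sketch) classifies integers accordingly:

```python
def two_order(n):
    """nu_2(n): the number of times 2 divides the integer n (n != 0)."""
    if n == 0:
        raise ValueError("nu_2(0) is conventionally infinite")
    n, nu = abs(n), 0
    while n % 2 == 0:
        n //= 2
        nu += 1
    return nu

assert two_order(7) == 0      # odd
assert two_order(10) == 1     # singly (oddly) even
assert two_order(12) == 2     # doubly (evenly) even
assert two_order(-40) == 3    # sign does not matter
```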
https://en.wikipedia.org/wiki/Kramers%E2%80%93Kronig%20relations | The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. The relations are often used to compute the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the condition of analyticity, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics, these relations are known by the names Sokhotski–Plemelj theorem and Hilbert transform.
Formulation
Let χ(ω) = χ1(ω) + iχ2(ω) be a complex function of the complex variable ω, where χ1(ω) and χ2(ω) are real. Suppose this function is analytic in the closed upper half-plane of ω and diminishes faster than 1/|ω| as |ω| → ∞. Slightly weaker conditions are also possible. The Kramers–Kronig relations are given by

χ1(ω) = (1/π) P ∫−∞∞ χ2(ω′)/(ω′ − ω) dω′

and

χ2(ω) = −(1/π) P ∫−∞∞ χ1(ω′)/(ω′ − ω) dω′,

where ω is real and where P denotes the Cauchy principal value. The real and imaginary parts of such a function are not independent, allowing the full function to be reconstructed given just one of its parts.
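As a numerical sanity check (a sketch, not part of the original article), the first relation can be verified for a damped-oscillator response χ(ω) = 1/(ω0² − ω² − iγω), chosen here because its poles lie in the lower half-plane; the principal value is crudely approximated by skipping the singular grid point:

```python
import numpy as np

# Test function analytic in the upper half-plane (illustrative parameters).
w0, gamma = 1.0, 0.3
w = np.linspace(-20.0, 20.0, 40001)       # frequency grid, spacing 1e-3
dw = w[1] - w[0]
chi = 1.0 / (w0**2 - w**2 - 1j * gamma * w)
chi1, chi2 = chi.real, chi.imag

def kk_real(idx):
    """(1/pi) * P-integral of chi2(w')/(w' - w[idx]) dw',
    with the principal value approximated by dropping the singular sample."""
    mask = np.arange(w.size) != idx
    return np.sum(chi2[mask] / (w[mask] - w[idx])) * dw / np.pi

i = int(np.argmin(np.abs(w - 2.5)))       # probe frequency away from grid edges
print(kk_real(i), chi1[i])                # the two values should nearly agree
```

The agreement degrades near the grid edges, where truncating the infinite integral matters most.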
Derivation
The proof begins with an application of Cauchy's residue theorem for complex integration. Given any function χ(ω′) analytic in the closed upper half-plane, the function χ(ω′)/(ω′ − ω), where ω is real, is analytic in the (open) upper half-plane. The residue theorem consequently states that

∮ χ(ω′)/(ω′ − ω) dω′ = 0
for any closed contour within this region. The contour is chosen to trace the real axis, a hump over the pole at ω′ = ω, and a large semicircle in the upper half-plane. The integral is then decomposed into its contributions along each of these three contour segments, which are passed to their limits. The length of the semicircular segment increases proportionally to |ω′|, but the integral over it vanishes in the limit because χ(ω′) vanishes faster than 1/|ω′|. We are left with the segments along the real axis and the half-circle around the pole. We pass the size of the half-circle |
https://en.wikipedia.org/wiki/Ola%20Nordmann | Ola Nordmann is a national personification of Norwegians, either for individuals or collectively. It is also used as a placeholder name. The female counterpart is Kari Nordmann, and collectively they are referred to as Ola og Kari Nordmann (Ola and Kari Nordmann).
Usage of the name Ola Nordmann
As a national personification
The media often uses "Ola Nordmann" to describe trends in the population.
For example: A headline in a newspaper that reads Norwegians consume less milk could just as well read Ola Nordmann drinks less milk.
Caricatures of Ola Nordmann as a national personification of Norway usually depict him as a blond-haired man dressed in bunad-like traditional folk clothing and wearing a woollen red top cap - the traditional headwear of a Norwegian gnome or nisse. This headwear was also worn by the traditional Norwegian farmer, mostly in the old Norwegian farm culture. In the romantic national period, the farmer often came to represent the Norwegian people as a whole, hence the representation.
As a placeholder name
Ola Nordmann is also used as a default name in examples used to guide people in how to fill in forms etc. (similar to Joe Bloggs in the UK or John Doe in America). In legal examples, Peder Ås is often used as a placeholder name instead.
Etymology
Ola is a common male first name in Norway (derived from Olav/Olaf), and Nordmann is a demonym for a Norwegian, i.e. "Ola Norwegian".
Kari Nordmann
The female equivalent or variant is personified as Kari Nordmann; "Kari and Ola Nordmann" is often used to describe the archetypical Norwegian family or household.
See also
Joe Bloggs
John Bull
John Doe
John Q. Public
Ole and Lena
Uncle Sam |
https://en.wikipedia.org/wiki/Army%20Moves | Army Moves is a scrolling shooter game developed by Dinamic Software for the Amiga, Amstrad CPC, Atari ST, Commodore 64, MSX and ZX Spectrum. It is the first chapter of the Moves Trilogy and it was followed by Navy Moves in 1987 and Arctic Moves in 1995. It was first released in 1986 and published by Dinamic in Spain and by Imagine Software. Dinamic Software also developed an MS-DOS version of the game, published in 1989 in Spain.
Gameplay
The game contains seven levels that are divided into two main sections. The first four levels make up the first section, where the player has to drive an army unit (jeep or helicopter) through a terrain, steering clear of hostile vehicles.
In the last three levels that comprise the second main section, one plays as a soldier who shoots enemies along his way. In level 5 the soldier must jump from rock to rock in a river, shooting hostile birds. Thereafter, the soldier makes his way into the enemy headquarters with the goal of retrieving secret documents.
Reception
Army Moves was regarded as a rather bad game on the Amiga — "Almost non-existent gameplay makes this very poor value for money", according to a review in Zzap!. However, it received mixed reviews from ZX Spectrum magazines and was successful enough in Spain to spawn two follow-ups, Navy Moves in 1988 and Arctic Moves in 1995. The latter appeared only for the PC platform, and it included the first two chapters of the series, playable through a ZX Spectrum emulator, as an extra. A fourth entry in the series, Desert Moves, was announced at the end of the game Arctic Moves, but never appeared.
The game music in non-Spanish versions is based on the Colonel Bogey March. |
https://en.wikipedia.org/wiki/Max%20Noether | Max Noether (24 September 1844 – 13 December 1921) was a German mathematician who worked on algebraic geometry and the theory of algebraic functions. He has been called "one of the finest mathematicians of the nineteenth century". He was the father of Emmy Noether.
Biography
Max Noether was born in Mannheim in 1844, to a Jewish family of wealthy wholesale hardware dealers. His grandfather, Elias Samuel, had started the business in Bruchsal in 1797. In 1809 the Grand Duchy of Baden established a "Tolerance Edict", which assigned a hereditary surname to the male head of every Jewish family which did not already possess one. Thus the Samuels became the Noether family, and as part of this Christianization of names, their son Hertz (Max's father) became Hermann. Max was the third of five children Hermann had with his wife Amalia Würzburger.
At 14, Max contracted polio and was afflicted by its effects for the rest of his life. Through self-study, he learned advanced mathematics and entered the University of Heidelberg in 1865. He served on the faculty there for several years, then moved to the University of Erlangen in 1888. While there, he helped to found the field of algebraic geometry.
In 1880 he married Ida Amalia Kaufmann, the daughter of another wealthy Jewish merchant family. Two years later they had their first child, named Amalia ("Emmy") after her mother. Emmy Noether went on to become a central figure in abstract algebra. In 1883 they had a son named Alfred, who later studied chemistry before dying in 1918. Their third child, Fritz Noether, was born in 1884, and like Emmy, found prominence as a mathematician; he was executed in the Soviet Union in 1941. Little is known about their fourth child, Gustav Robert, born in 1889; he suffered from continual illness and died in 1928.
Noether served as an Ordinarius (full professor) at Erlangen for many years, and died there on 13 December 1921.
Work on algebraic geometry
Brill and Max Noether developed alternative |
https://en.wikipedia.org/wiki/Multiple%20Spanning%20Tree%20Protocol | The Multiple Spanning Tree Protocol (MSTP) and algorithm provides both simple and full connectivity for frames assigned to any given virtual LAN (VLAN) throughout a bridged local area network. MSTP uses bridge protocol data units (BPDUs) to exchange information between spanning-tree-compatible devices, to prevent loops in each Multiple Spanning Tree instance (MSTI) and in the common and internal spanning tree (CIST), by selecting active and blocked paths. As with the Spanning Tree Protocol (STP), this removes the danger of switching loops without the need to manually enable backup links.
Moreover, MSTP allows frames/packets assigned to different VLANs to follow separate paths, each based on an independent MSTI, within MST regions composed of local area networks (LANs) and MST bridges. These regions and the other bridges and LANs are connected into a single common spanning tree (CST).
History and motivation
MSTP was originally defined in IEEE 802.1s as an amendment to 802.1Q, 1998 edition, and was later merged into the IEEE 802.1Q-2005 standard. It defines an extension, or an evolution, of Radia Perlman's Spanning Tree Protocol (STP) and the Rapid Spanning Tree Protocol (RSTP). It has some similarities with Cisco Systems' Multiple Instances Spanning Tree Protocol (MISTP), but there are some differences.
The original STP and RSTP work on the physical link level, preventing bridge loops when redundant paths are present. However, when a LAN is virtualized using VLAN trunking, each physical link represents multiple logical connections. Blocking a physical link blocks all its logical links and forces all traffic through the remaining physical links within the spanning tree. Redundant links cannot be utilized at all. Moreover, without careful network design, seemingly redundant links on the physical level may be used to connect different VLANs and blocking any of them may disconnect one or more VLANs, causing bad paths.
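For illustration, a Cisco IOS-style MST configuration (the region name, revision number, and VLAN numbers below are hypothetical) maps disjoint sets of VLANs to separate instances:

```
spanning-tree mode mst
spanning-tree mst configuration
 name EXAMPLE-REGION
 revision 1
 instance 1 vlan 10,20
 instance 2 vlan 30,40
```

Bridges must agree on all three region parameters (name, revision, and the VLAN-to-instance mapping) to belong to one MST region; each instance can then elect a different root, so VLANs 10/20 and 30/40 may take different active paths across the same physical links.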
Instead, MSTP provides a potentially better utiliza |
https://en.wikipedia.org/wiki/Pecunia%20non%20olet | Pecunia non olet is a Latin saying that means "money does not stink". The phrase is ascribed to the Roman emperor Vespasian (ruled AD 69–79).
History
A tax on the disposal of urine was first imposed by Emperor Nero in the 1st century AD. The tax was removed after a while, but it was re-enacted by Vespasian around 70 AD in order to fill the treasury.
Vespasian imposed a urine tax on the distribution of urine from Rome's public urinals (the Roman lower classes urinated into pots, which were later emptied into cesspools). The urine collected from these public urinals was sold as an ingredient for several chemical processes. It was used in tanning, wool production, and also by launderers as a source of ammonia to clean and whiten woollen togas. The buyers of the urine paid the tax.
The Roman historian Suetonius reports that when Vespasian's son Titus complained about the disgusting nature of the tax, his father held up a gold coin and asked whether he felt offended by its smell. When Titus said "No", Vespasian replied, "Yet it comes from urine" (Atqui ex lotio est).
The phrase is still used today to say that the value of money is not tainted by its origins. Vespasian's name still attaches to public urinals in Italy (vespasiani) and France (vespasiennes).
In literature
"Vespasian's axiom" is also referred to in passing in the Balzac short story Sarrasine in connection with the mysterious origins of the wealth of a Parisian family. The proverb receives some attention in Roland Barthes's detailed analysis of the Balzac story in his critical study S/Z. It is possible that F. Scott Fitzgerald alludes to Vespasian's jest in The Great Gatsby with the phrase "non-olfactory money".
In That Hideous Strength by C. S. Lewis, the Warden of Bracton College is given the nickname "Non-Olet" for having written "a monumental report on National Sanitation. The subject had, if anything, rather recommended him to the Progressive Element. They regarded it as a slap in the face for the dilettanti and Die-hards, who re |
https://en.wikipedia.org/wiki/Pentadactyl | Pentadactyl is a discontinued Firefox extension forked from the Vimperator and designed to provide a more efficient user interface for keyboard-fluent users. The design is heavily inspired by the Vim text editor, and the authors try to maintain consistency with it wherever possible.
Features
Once activated, Pentadactyl removes all Firefox's default user interface chrome (except for the tab bar) and adds a Vim-inspired command line at the bottom of the window. The key bindings and dialog invocation are also changed to those familiar to Vim users.
Apart from Vim-like features, Pentadactyl includes a Lynx-like link-hinting mode, allowing the user to enter link-manipulation commands that refer to links by labels or numbers.
As Pentadactyl's key mappings differ significantly from those typically expected by web application developers, occasional conflicts between browser- and site-defined key mappings occur. Pentadactyl deals with such cases by providing a special "pass-through" mode, which passes all key press events (except for the Esc key) directly to the site. This mode can either be activated manually or enforced on a per-domain basis with a configuration file.
Development
Pentadactyl was forked from the Vimperator Firefox extension after a disagreement over project direction and governance. After the split, Pentadactyl differentiated itself with improved startup time, the ability to use the extension without restarting Firefox after installation, and some changes for consistency with Vim.
The extension is available as stable releases and nightly builds.
As of November 2020, the project has been on hiatus since March 2017 due to developer inactivity and noncommunication, and doesn't seem to work on Firefox 57.0 (Firefox Quantum) or newer versions. The project was reported still working for the Waterfox, Basilisk and Pale Moon browsers, but has since started to degrade due to a lack of updates and will only work after applying community-made patches. For the Pal |
https://en.wikipedia.org/wiki/Residuated%20lattice | In abstract algebra, a residuated lattice is an algebraic structure that is simultaneously a lattice x ≤ y and a monoid x•y which admits operations x\z and z/y, loosely analogous to division or implication, when x•y is viewed as multiplication or conjunction, respectively. Called respectively right and left residuals, these operations coincide when the monoid is commutative. The general concept was introduced by Morgan Ward and Robert P. Dilworth in 1939. Examples, some of which existed prior to the general concept, include Boolean algebras, Heyting algebras, residuated Boolean algebras, relation algebras, and MV-algebras. Residuated semilattices omit the meet operation ∧, for example Kleene algebras and action algebras.
Definition
In mathematics, a residuated lattice is an algebraic structure L = (L, ≤, •, 1) such that
(i) (L, ≤) is a lattice.
(ii) (L, •, 1) is a monoid.
(iii) For all z there exists for every x a greatest y, and for every y a greatest x, such that x•y ≤ z (the residuation properties).
In (iii), the "greatest y", being a function of z and x, is denoted x\z and called the right residual of z by x. Think of it as what remains of z on the right after "dividing" z on the left by x. Dually, the "greatest x" is denoted z/y and called the left residual of z by y. An equivalent, more formal statement of (iii) that uses these operations to name these greatest values is
(iii)' for all x, y, z in L, y ≤ x\z ⇔ x•y ≤ z ⇔ x ≤ z/y.
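To make the residuation property concrete, here is a small sketch (an illustration of mine, not from the article) using the unit interval with ordinary multiplication as the commutative monoid; the residual is then x\z = min(1, z/x), with 0\z = 1, and (iii)' can be checked exhaustively on a finite grid with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# [0,1] with min/max as the lattice and multiplication as the monoid
# is a commutative residuated lattice (so x\z and z/x coincide).
def residual(x, z):
    """Greatest y in [0, 1] with x*y <= z."""
    return Fraction(1) if x == 0 else min(Fraction(1), z / x)

grid = [Fraction(i, 10) for i in range(11)]
for x, y, z in product(grid, repeat=3):
    # residuation property (iii)': x*y <= z  iff  y <= x\z
    assert (x * y <= z) == (y <= residual(x, z))
print("residuation property verified on an 11x11x11 grid")
```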
As suggested by the notation, the residuals are a form of quotient. More precisely, for a given x in L, the unary operations x• and x\ are respectively the lower and upper adjoints of a Galois connection on L, and dually for the two functions •y and /y. By the same reasoning that applies to any Galois connection, we have yet another definition of the residuals, namely,
x•(x\y) ≤ y ≤ x\(x•y), and
(y/x)•x ≤ y ≤ (y•x)/x,
together with the requirement that x•y be monotone in x and y. (When axiomatized using (iii) or (iii)' monotonicity |
https://en.wikipedia.org/wiki/MV-algebra | In abstract algebra, a branch of pure mathematics, an MV-algebra is an algebraic structure with a binary operation ⊕, a unary operation ¬, and the constant 0, satisfying certain axioms. MV-algebras are the algebraic semantics of Łukasiewicz logic; the letters MV refer to the many-valued logic of Łukasiewicz. MV-algebras coincide with the class of bounded commutative BCK algebras.
Definitions
An MV-algebra is an algebraic structure (A, ⊕, ¬, 0), consisting of
a non-empty set A,
a binary operation ⊕ on A,
a unary operation ¬ on A, and
a constant 0 denoting a fixed element of A,
which satisfies the following identities:
(x ⊕ y) ⊕ z = x ⊕ (y ⊕ z),
x ⊕ 0 = x,
x ⊕ y = y ⊕ x,
¬¬x = x,
x ⊕ ¬0 = ¬0,
and
¬(¬x ⊕ y) ⊕ y = ¬(¬y ⊕ x) ⊕ x.
By virtue of the first three axioms, (A, ⊕, 0) is a commutative monoid. Being defined by identities, MV-algebras form a variety of algebras. The variety of MV-algebras is a subvariety of the variety of BL-algebras and contains all Boolean algebras.
An MV-algebra can equivalently be defined (Hájek 1998) as a prelinear commutative bounded integral residuated lattice satisfying the additional identity x ∨ y = (x → y) → y.
Examples of MV-algebras
A simple numerical example is A = [0, 1] with operations x ⊕ y = min(x + y, 1) and ¬x = 1 − x. In mathematical fuzzy logic, this MV-algebra is called the standard MV-algebra, as it forms the standard real-valued semantics of Łukasiewicz logic.
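The standard MV-algebra operations can be machine-checked against the axioms on a finite sample of [0, 1]; the sketch below uses exact rational arithmetic to avoid floating-point noise:

```python
from fractions import Fraction
from itertools import product

# Standard MV-algebra on [0,1]: x (+) y = min(x + y, 1), neg x = 1 - x.
def oplus(x, y):
    return min(x + y, Fraction(1))

def neg(x):
    return Fraction(1) - x

grid = [Fraction(i, 8) for i in range(9)]   # a finite sample of [0, 1]
for x, y in product(grid, repeat=2):
    assert oplus(x, y) == oplus(y, x)                      # commutativity
    assert oplus(x, Fraction(0)) == x                      # identity element
    assert neg(neg(x)) == x                                # involution
    assert oplus(x, neg(Fraction(0))) == neg(Fraction(0))  # top element absorbs
    # the characteristic MV identity (both sides equal max(x, y) here):
    assert oplus(neg(oplus(neg(x), y)), y) == oplus(neg(oplus(neg(y), x)), x)
for x, y, z in product(grid, repeat=3):
    assert oplus(oplus(x, y), z) == oplus(x, oplus(y, z))  # associativity
print("all MV-algebra identities hold on the sample")
```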
The trivial MV-algebra has the only element 0 and the operations defined in the only possible way, and
The two-element MV-algebra is actually the two-element Boolean algebra with coinciding with Boolean disjunction and with Boolean negation. In fact adding the axiom to the axioms defining an MV-algebra results in an axiomatization of Boolean algebras.
If instead the axiom added is , then the axioms define the MV3 algebra corresponding to the three-valued Łukasiewicz logic Ł3. Other finite linearly ordered MV-algebras are obtained by restricting the universe and operations of the standard MV-algebra to the set of equidistant real numbers between 0 and 1 (both included), that is, the set which is closed under the operations and of the st |
https://en.wikipedia.org/wiki/Tee%20%28command%29 | In computing, tee is a command in command-line interpreters (shells) using standard streams which reads standard input and writes it to both standard output and one or more files, effectively duplicating its input. It is primarily used in conjunction with pipes and filters. The command is named after the T-splitter used in plumbing.
Overview
The tee command is normally used to split the output of a program so that it can be both displayed and saved in a file. The command can be used to capture intermediate output before the data is altered by another command or program.
The tee command reads standard input, then writes its content to standard output. It simultaneously copies the data into the specified file(s) or variables.
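For example, the following saves a copy of the stream in capture.txt (an illustrative file name) while the data continues down the pipeline to wc:

```shell
# terminal shows the line count; capture.txt holds the raw text
printf 'alpha\nbeta\n' | tee capture.txt | wc -l
```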
The syntax differs depending on the command's implementation.
Implementations
The command is available for Unix and Unix-like operating systems, Microware OS-9, DOS (e.g. 4DOS, FreeDOS), Microsoft Windows (e.g. 4NT, Windows PowerShell), and ReactOS. The Linux tee command was written by Mike Parker, Richard Stallman, and David MacKenzie. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The FreeDOS version was developed by Jim Hall and is licensed under the GPL.
The command has also been ported to the IBM i operating system.
Additionally the sponge command offers similar capabilities.
Unix and Unix-like
tee [ -a ] [ -i ] [ File ... ]
Arguments:
File ... A list of files, each of which receives the output.
Flags:
-a Appends the output to each file, rather than overwriting it.
-i Ignores interrupts.
The command returns the following exit values (exit status):
0 The standard input was successfully copied to all output files.
>0 An error occurred.
Using process substitution lets more than one process read the standard output of the originating process.
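With process substitution (a bash-specific >(...) syntax, not POSIX sh), tee can feed several consumers at once; the file names here are illustrative:

```shell
# duplicate one stream into two independent consumers
seq 1 100 | tee >(wc -l > line_count.txt) | sha1sum > checksum.txt
```

Note that the >(...) subprocesses run asynchronously, so a script should not assume their output files are complete the instant the pipeline returns.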
See the example in the GNU Coreutils documentation, under "tee invocation".
Note: If a write to any suc |
https://en.wikipedia.org/wiki/Schottky%20effect | The Schottky effect or field enhanced thermionic emission is a phenomenon in condensed matter physics named after Walter H. Schottky. In electron emission devices, especially electron guns, the thermionic electron emitter will be biased negative relative to its surroundings. This creates an electric field of magnitude F at the emitter surface. Without the field, the surface barrier seen by an escaping Fermi-level electron has height W equal to the local work-function. The electric field lowers the surface barrier by an amount ΔW, and increases the emission current. It can be modeled by a simple modification of the Richardson equation, by replacing W by (W − ΔW). This gives the equation

J(F, T, W) = AG T² exp(−(W − ΔW)/(k T)),  where  ΔW = sqrt(qe³ F / (4π ε0)),
where J is the emission current density, T is the temperature of the metal, W is the work function of the metal, k is the Boltzmann constant, qe is the elementary charge, ε0 is the vacuum permittivity, and AG is the product of a universal constant A0 multiplied by a material-specific correction factor λR which is typically of order 0.5.
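A short numerical sketch (not from the original text) evaluates the barrier lowering ΔW = sqrt(qe³ F / (4π ε0)) for a given surface field, using standard values for qe and ε0:

```python
import math

qe = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def barrier_lowering_eV(F):
    """Schottky barrier lowering in eV for a surface field F in V/m."""
    return math.sqrt(qe**3 * F / (4 * math.pi * eps0)) / qe

print(barrier_lowering_eV(1e8))   # roughly 0.38 eV at 1e8 V/m
```

At the 10^8 V/m boundary mentioned below, the lowering is a few tenths of an electronvolt, a substantial fraction of a typical metal work function.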
Electron emission that takes place in the field-and-temperature-regime where this modified equation applies is often called Schottky emission. This equation is relatively accurate for electric field strengths lower than about 108 V m−1. For electric field strengths higher than 108 V m−1, so-called Fowler–Nordheim (FN) tunneling begins to contribute significant emission current. In this regime, the combined effects of field-enhanced thermionic and field emission can be modeled by the Murphy–Good equation for thermo-field (T-F) emission. At even higher fields, FN tunneling becomes the dominant electron emission mechanism, and the emitter operates in the so-called "cold field electron emission (CFE)" regime.
Thermionic emission can also be enhanced by interaction with other forms of excitation such as light. For example, excited Cs-vapours in thermionic converters form clusters of Cs-Rydberg matter which yield a decrease of collector emitting work fu |
https://en.wikipedia.org/wiki/Algorithmic%20trading | Algorithmic trading is a method of executing orders using automated pre-programmed trading instructions accounting for variables such as time, price, and volume. This type of trading attempts to leverage the speed and computational resources of computers relative to human traders. In the twenty-first century, algorithmic trading has been gaining traction with both retail and institutional traders. A study in 2019 showed that around 92% of trading in the Forex market was performed by trading algorithms rather than humans.
It is widely used by investment banks, pension funds, mutual funds, and hedge funds that may need to spread out the execution of a larger order or perform trades too fast for human traders to react to. However, it is also available to private traders using simple retail tools.
The term algorithmic trading is often used synonymously with automated trading system. These encompass a variety of trading strategies, some of which are based on formulas and results from mathematical finance, and often rely on specialized software.
Examples of strategies used in algorithmic trading include systematic trading, market making, inter-market spreading, arbitrage, or pure speculation, such as trend following. Many fall into the category of high-frequency trading (HFT), which is characterized by high turnover and high order-to-trade ratios. HFT strategies utilize computers that make elaborate decisions to initiate orders based on information that is received electronically, before human traders are capable of processing the information they observe. As a result, in February 2012, the Commodity Futures Trading Commission (CFTC) formed a special working group that included academics and industry experts to advise the CFTC on how best to define HFT. Algorithmic trading and HFT have resulted in a dramatic change of the market microstructure and in the complexity and uncertainty of the market macrodynamic, particularly in the way liquidity is provided.
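As a hedged illustration (not from the source), a minimal trend-following rule of the kind mentioned above can be sketched as a moving-average crossover; the window lengths and price series are arbitrary:

```python
def sma(prices, n):
    """Simple moving average of the last n prices, or None if too few."""
    return sum(prices[-n:]) / n if len(prices) >= n else None

def crossover_signal(prices, fast=3, slow=5):
    """Return 'buy' when the fast SMA is above the slow SMA, else 'sell'."""
    f, s = sma(prices, fast), sma(prices, slow)
    if f is None or s is None:
        return "hold"  # not enough history yet
    return "buy" if f > s else "sell"

uptrend = [100, 101, 103, 106, 110, 115]  # arbitrary illustrative prices
downtrend = list(reversed(uptrend))
print(crossover_signal(uptrend), crossover_signal(downtrend))
```

Real trading systems layer execution logic, risk limits, and transaction-cost models on top of any such signal; the sketch shows only the signal-generation idea.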
History
E |
https://en.wikipedia.org/wiki/Trendelenburg%27s%20sign | Trendelenburg's sign is found in people with weak or paralyzed abductor muscles of the hip, namely gluteus medius and gluteus minimus. It is named after the German surgeon Friedrich Trendelenburg. It is often incorrectly referenced as the Trendelenburg test which is a test for vascular insufficiency in the lower extremities.
The Trendelenburg sign is said to be positive if, when standing on one leg (the 'stance leg'), the pelvis severely drops on the side opposite to the stance leg (the 'swing limb'). The muscle weakness is present on the side of the stance leg. If the patient compensates for this weakness by tilting their trunk/thorax to the affected side, then the pelvis will be raised, rather than dropped, on the side opposite to the stance leg. Thus, in the same situation, the patient's hip may be dropped or raised, depending on whether the patient is actively compensating. Compensation shifts the centre of gravity to the affected side and decreases the angle between the hip abductor muscles and the femur, both of which reduce the forces the hip abductor muscles must apply to maintain the relevant posture.
The gluteus medius is very important during the stance phase of the gait cycle to maintain both hips at the same level. Moreover, one leg stance accounts for about 60% of the gait cycle. Furthermore, during the stance phase of the gait cycle, there is approximately three times the body weight transmitted to the hip joint. The hip abductors' action accounts for two thirds of that body weight. A Trendelenburg sign can occur when there is presence of a muscular dysfunction (weakness of the gluteus medius or minimus) or when someone is experiencing pain.
See also
Gait abnormality
Superior gluteal nerve
Trendelenburg gait |
https://en.wikipedia.org/wiki/Non-covalent%20interaction | In chemistry, a non-covalent interaction differs from a covalent bond in that it does not involve the sharing of electrons, but rather involves more dispersed variations of electromagnetic interactions between molecules or within a molecule. The chemical energy released in the formation of non-covalent interactions is typically on the order of 1–5 kcal/mol (1000–5000 calories per 6.02×10²³ molecules). Non-covalent interactions can be classified into different categories, such as electrostatic, π-effects, van der Waals forces, and hydrophobic effects.
Non-covalent interactions are critical in maintaining the three-dimensional structure of large molecules, such as proteins and nucleic acids. They are also involved in many biological processes in which large molecules bind specifically but transiently to one another (see the properties section of the DNA page). These interactions also heavily influence drug design, crystallinity and design of materials, particularly for self-assembly, and, in general, the synthesis of many organic molecules.
The non-covalent interactions may occur between different parts of the same molecule (e.g. during protein folding) or between different molecules and therefore are discussed also as intermolecular forces.
Electrostatic interactions
Ionic
Ionic interactions involve the attraction of ions or molecules with full permanent charges of opposite signs. For example, sodium fluoride involves the attraction of the positive charge on sodium (Na+) with the negative charge on fluoride (F−). However, this particular interaction is easily broken upon addition to water, or other highly polar solvents. In water, ion pairing is mostly entropy driven; a single salt bridge usually amounts to an attraction value of about ΔG = 5 kJ/mol at an intermediate ionic strength I; at I close to zero the value increases to about 8 kJ/mol. The ΔG values are usually additive and largely independent of the nature of the participating ions, except for transition metal io
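As an illustrative sketch (not from the source), the bare Coulomb attraction U = qe²/(4πε0 r) between two unit point charges gives a feel for the gas-phase scale of an ionic interaction; the Na+...F− separation used is an assumed, roughly bond-length value. The result is far larger than the ~5–8 kJ/mol quoted above for salt bridges in water, illustrating how strongly polar solvents screen the interaction:

```python
import math

def coulomb_energy_kj_per_mol(r_m):
    """Gas-phase Coulomb attraction between +1 and -1 point charges, kJ/mol."""
    qe = 1.602176634e-19      # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    NA = 6.02214076e23        # Avogadro constant, 1/mol
    U = qe**2 / (4 * math.pi * eps0 * r_m)  # joules per ion pair
    return U * NA / 1000

# Assumed Na+...F- separation of ~0.235 nm (illustrative bond-length value).
E = coulomb_energy_kj_per_mol(2.35e-10)
print(E)  # hundreds of kJ/mol in the gas phase, vs ~5-8 kJ/mol in water
```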
https://en.wikipedia.org/wiki/Java%20Card%20OpenPlatform | Java Card OpenPlatform (JCOP) is a smart card operating system for the Java Card platform developed by IBM Zürich Research Laboratory.
On 31 January 2006 the development and support responsibilities transferred to the IBM Smart Card Technology team in Böblingen, Germany.
Since July 2007 support and development activities for the JCOP operating system on NXP / Philips silicon are serviced by NXP Semiconductors.
The name originates from the standards it complies with:
Java Card specifications
GlobalPlatform (formerly known as Visa Inc OpenPlatform) specifications
JCOP provides a Java Card Virtual Machine (JCVM), which allows it to run applications written in the Java programming language.
History
First JC/OP Masks
Mask 0 : 1998 (spring)
First prototype on Atmel 8-bit uC – Flash memory, slow
Mask 1 : 1998
Siemens/Infineon SLE66 IC – Public key cryptography
Mask 2 and 3 : 1999
Gemplus International (now Gemalto) licensed JC/OP
Base mask for GemXpresso product line
Public key generation
Visa OpenPlatform
Mask 4 : 1999
Contactless JC/OP on Philips MifarePro chip
256 bytes RAM, 20 KB ROM and 8 KB EEPROM
Dual interface
JCOP01 and Cooperation with Philips
Mask 5 : 2000
Philips P8WE smartcard microcontroller
‘JCOP01’ is the foundation for all later versions
JCOP licensed by IBM
JCOP Tools for development
Visa breakthrough program
To counter MasterCard’s MULTOS
Cooperation between IBM (OS), Visa (OpenPlatform) and Philips (IC)
JCOP v1 owned by Visa
JCOP v2
Owned by IBM, sold by Philips
Philips SmartMX controller (SMX)
JCOP v2.2
GlobalPlatform 2.1.1
Java Card 2.2.1
Elliptic Curve Cryptography (ECC) F2M support
JCOP Tools Eclipse based
JCOP Transfer
JCOP v2.2.1 – JCOP v2.3.1
Owned by IBM, sold by Philips/NXP
Development transferred to IBM in Böblingen, Germany
USB interface
JCOP v2.3.2
JCOP technology owned by IBM
Policy change at IBM
Source code license acquired by NXP Semiconductors
To serve customer requests and projects
JCOP |
https://en.wikipedia.org/wiki/Nils%20Olav | Major General Sir Nils Olav III, Baron of the Bouvet Islands is a king penguin who resides in Edinburgh Zoo, Scotland. He is the mascot and colonel-in-chief of the Norwegian King's Guard. The name 'Nils Olav' and associated ranks have been passed down through three king penguins since 1972 – the current holder being Nils Olav III.
History
The family of Norwegian shipping magnate Christian Salvesen gave a king penguin to Edinburgh Zoo when the zoo opened in 1913.
When the Norwegian King's Guard visited the Edinburgh Military Tattoo of 1961 for a drill display, a lieutenant named Nils Egelien became interested in the zoo's penguin colony. When the King's Guard returned to Edinburgh in 1972, Egelien arranged for the regiment to adopt a penguin. This penguin was named Nils Olav in honour of Nils Egelien, commander of the drill platoon, and Olav Siggerud, contingent commander of HMKG in 1972.
Nils Olav was initially given the rank of visekorporal (lance corporal) in the regiment. He has been promoted each time the King's Guard has returned to the zoo. In 1982 he was made a corporal, and promoted to sergeant in 1987. Nils Olav I died shortly after his promotion to sergeant in 1987, and his place was taken by Nils Olav II, a two-year-old near-double. He was promoted in 1993 to the rank of regimental sergeant major and in 2001 promoted to 'honourable regimental sergeant major'. On 18 August 2005, he was appointed as colonel-in-chief of the same regiment, outranking his namesake, Nils Egelien. During the 2005 visit, a bronze statue of Nils Olav was presented to Edinburgh Zoo. The statue's inscription includes references to both the King's Guard and to the Military Tattoo. A statue also stands at the King's Guard compound at Huseby, Oslo.
The next honour was a knighthood, awarded during a visit by soldiers from the Norwegian King's Guard on 15 August 2008. The knighthood was approved by King Harald V and Nils was the first penguin to receive such an honour in the Nor |
https://en.wikipedia.org/wiki/List%20of%20baryons | Baryons are composite particles made of three quarks, as opposed to mesons, which are composite particles made of one quark and one antiquark. Baryons and mesons are both hadrons, which are particles composed solely of quarks or both quarks and antiquarks. The term baryon is derived from the Greek "βαρύς" (barys), meaning "heavy", because, at the time of their naming, it was believed that baryons were characterized by having greater masses than other particles that were classed as matter.
Until a few years ago, it was believed that some experiments showed the existence of pentaquarks – baryons made of four quarks and one antiquark. Prior to 2006 the particle physics community as a whole did not view the existence of pentaquarks as likely. On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom lambda baryons (Λ).
Since baryons are composed of quarks, they participate in the strong interaction. Leptons, on the other hand, are not composed of quarks and as such do not participate in the strong interaction. The best known baryons are protons and neutrons, which make up most of the mass of the visible matter in the universe, whereas electrons, the other major component of atoms, are leptons. Each baryon has a corresponding antiparticle, known as an antibaryon, in which quarks are replaced by their corresponding antiquarks. For example, a proton is made of two up quarks and one down quark, while its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark.
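As an illustrative aside (not from the source), the quark-charge bookkeeping behind the proton and antiproton example can be checked mechanically, using the standard charges of +2/3 e for up and −1/3 e for down quarks; the uppercase-for-antiquark convention below is an arbitrary choice:

```python
from fractions import Fraction

# Electric charges of the light quarks, in units of the elementary charge e.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
}
# Antiquarks (written uppercase here, an arbitrary convention) carry the
# opposite charge of the corresponding quark.
for q, c in list(QUARK_CHARGE.items()):
    QUARK_CHARGE[q.upper()] = -c

def baryon_charge(quarks):
    """Total electric charge of a three-quark combination, in units of e."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(baryon_charge("uud"))  # proton: 2/3 + 2/3 - 1/3 = 1
print(baryon_charge("UUD"))  # antiproton: -1
print(baryon_charge("udd"))  # neutron: 0
```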
Baryon properties
These lists detail all known and predicted baryons in total angular momentum J = 1/2 and J = 3/2 configurations with positive parity.
Baryons composed of one type of quark (uuu, ddd, ...) can exist in the J = 3/2 configuration, but J = 1/2 is forbidden by the Pauli exclusion principle.
Baryons composed of two types of quarks (uud, uus, ...) can exist in both J = 1/2 and J = 3/2 configurations.
Baryons composed o |
https://en.wikipedia.org/wiki/Gallery%20of%20flags%20of%20dependent%20territories | This overview contains the flags of dependent territories and other areas of special sovereignty.
Australia
Chile
China
Denmark
Finland
France
Overseas collectivities and territory
Overseas departments and regions
Netherlands
Constituent countries
Special municipalities
New Zealand
Dependent territories
Special territorial authority
Portugal
Spain
United Kingdom
British Overseas Territories
Saint Helena, Ascension and Tristan da Cunha
flag of Scotland
Crown Dependencies
United States
See also
Armorial of dependent territories
Armorial of sovereign states
Flags of micronations
Gallery of sovereign state flags
List of country subdivision flags
List of former sovereign states
Lists of city flags
Lists and galleries of flags |
https://en.wikipedia.org/wiki/Systems%20Management%20Architecture%20for%20Server%20Hardware | The Systems Management Architecture for Server Hardware (SMASH) is a suite of specifications that deliver industry standard protocols to increase productivity of the management of a data center.
The SMASH standard, developed by the Distributed Management Task Force (DMTF) and including the Server Management Command Line Protocol specification, is a suite of specifications that deliver architectural semantics, industry standard protocols and profiles to unify the management of the data center. Through the development of conformance testing programs, the SMASH Forum will extend these capabilities by helping deliver additional compatibility in cross-platform servers.
External links
DMTF SMASH initiative
DMTF standards
Networking standards
System administration
Out-of-band management |
https://en.wikipedia.org/wiki/Stochastic%20drift | In probability theory, stochastic drift is the change of the average value of a stochastic (random) process. A related concept is the drift rate, which is the rate at which the average changes. For example, a process that counts the number of heads in a series of fair coin tosses has a drift rate of 1/2 per toss. This is in contrast to the random fluctuations about this average value. The stochastic mean of that coin-toss process is 1/2 and the drift rate of the stochastic mean is 0, assuming 1 = heads and 0 = tails.
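The coin-toss example can be checked numerically; this simulation is an illustrative sketch, not from the source, and the sample size is arbitrary:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 100_000
tosses = [random.randint(0, 1) for _ in range(n)]  # 1 = heads, 0 = tails

# The counting process S_k = number of heads in the first k tosses drifts
# upward at an average rate of 1/2 per toss, so S_n / n estimates that rate.
heads_count = sum(tosses)
drift_rate = heads_count / n
print(drift_rate)  # close to 0.5

# The per-toss mean itself stays near 1/2, so the drift of that mean is 0,
# matching the distinction drawn in the text.
```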
Stochastic drifts in population studies
Longitudinal studies of secular events are frequently conceptualized as consisting of a trend component fitted by a polynomial, a cyclical component often fitted by an analysis based on autocorrelations or on a Fourier series, and a random component (stochastic drift) to be removed.
In the course of the time series analysis, identification of cyclical and stochastic drift components is often attempted by alternating autocorrelation analysis and differencing of the trend. Autocorrelation analysis helps to identify the correct phase of the fitted model while the successive differencing transforms the stochastic drift component into white noise.
Stochastic drift can also occur in population genetics where it is known as genetic drift. A finite population of randomly reproducing organisms would experience changes from generation to generation in the frequencies of the different genotypes. This may lead to the fixation of one of the genotypes, and even the emergence of a new species. In sufficiently small populations, drift can also neutralize the effect of deterministic natural selection on the population.
Stochastic drift in economics and finance
Time series variables in economics and finance — for example, stock prices, gross domestic product, etc. — generally evolve stochastically and frequently are non-stationary. They are typically modelled as either trend-stationary or difference stationary. A trend s |
https://en.wikipedia.org/wiki/Inferior%20cerebellar%20peduncle | The upper part of the posterior district of the medulla oblongata is occupied by the inferior cerebellar peduncle, a thick rope-like strand situated between the lower part of the fourth ventricle and the roots of the glossopharyngeal and vagus nerves.
Each inferior cerebellar peduncle connects the spinal cord and medulla oblongata with the cerebellum, and comprises the juxtarestiform body and restiform body.
Important fibers running through the inferior cerebellar peduncle include the dorsal spinocerebellar tract and axons from the inferior olivary nucleus, among others.
Function
The inferior cerebellar peduncle carries many types of input and output fibers that are mainly concerned with integrating proprioceptive sensory input with motor vestibular functions such as balance and posture maintenance. It consists of the following fiber tracts entering cerebellum:
Posterior spinocerebellar tract: unconscious proprioceptive information from the lower part of trunk and lower limb. This tract originates at the ipsilateral Clarke's nucleus (T1-L1) and travels upward to reach the inferior cerebellar peduncle and synapses within the spinocerebellum (also known as the paleocerebellum).
Cuneocerebellar tract: unconscious proprioceptive information from the upper limb and neck. This tract originates at the ipsilateral accessory cuneate nucleus and travels through the inferior cerebellar peduncle to reach the spinocerebellum part of the cerebellum.
Trigeminocerebellar tract: unconscious proprioceptive information from the face.
Olivocerebellar tract: "error signal" in movement originates from the cerebral cortex and spinal cord. This tract originates at contralateral inferior olivary nucleus and enters the cerebellum as a climbing fiber.
Vestibulocerebellar tract: vestibular information projects onto the vestibulocerebellum (also known as the archicerebellum).
This peduncle also carries information leaving cerebellum: from the Purkinje cells to the vestibular nucl |
https://en.wikipedia.org/wiki/Globose%20nucleus | The globose nucleus is one of the deep cerebellar nuclei. It is located medial to the emboliform nucleus, and lateral to the fastigial nucleus. The globose nucleus and emboliform nucleus are known collectively as the interposed nuclei.
The globose nucleus is part of a neural circuit giving rise to descending motor tracts involved in motor control of distal musculature of the upper and lower limbs.
Anatomy
Afferents
Purkinje cells of (the paravermal cortex of) the spinocerebellum
Anterior spinocerebellar tract (via restiform body of inferior cerebellar peduncle)
Efferents
Contralateral (magnocellular division of) red nucleus (via the superior cerebellar peduncle). This is the major efferent projection of the globose nucleus. The red nucleus in turn gives rise to the rubrospinal tract.
Ipsilateral ventral lateral nucleus of thalamus. The VL nucleus in turn projects to the primary motor cortex and premotor cortex (which then give rise to the lateral corticospinal tract).
Cell biology
This nucleus contains primarily large and small multipolar neurons. |
https://en.wikipedia.org/wiki/Fastigial%20nucleus | The fastigial nucleus is located in the cerebellum. It is one of the four deep cerebellar nuclei (the others being the nucleus dentatus, nucleus emboliformis and nucleus globosus), and is grey matter embedded in the white matter of the cerebellum.
It refers specifically to the concentration of gray matter nearest to the middle line at the anterior end of the superior vermis, and immediately over the roof of the fourth ventricle (the peak of which is called the fastigium), from which it is separated by a thin layer of white matter. It is smaller than the nucleus dentatus, but somewhat larger than the nucleus emboliformis and nucleus globosus.
Although it is one dense mass, it is made up of two sections: the rostral fastigial nucleus and the caudal fastigial nucleus.
Structure
The Purkinje cells of the cerebellar cortex project into the deep cerebellar nuclei and inhibit the excitatory output system via GABAergic synapses. The fastigial nucleus receives its input from Purkinje cells in the vermis. Most of its efferent connections travel via the inferior cerebellar peduncle to the vestibular nuclei, which are located at the junction of the pons and the medulla oblongata.
The fastigial nucleus sends excitatory projections beyond the cerebellum. The likely neurotransmitters of fastigial nucleus axons are glutamate and aspartate.
Rostral fastigial nucleus
The rostral fastigial nucleus (rFN) is related to the vestibular system. It receives input from the vestibular nuclei and contributes to vestibular neuronal activity. The rFN interprets body motion and places it on spatial planes to estimate the movement of the body through space. It deals with antigravity muscle groups and other synergies involved with standing and walking.
Caudal fastigial nucleus
The caudal fastigial nucleus (cFN) is related to saccadic eye movements. The Purkinje cell output from the oculomotor vermis relays through the cFN, where neurons directly related to saccadic eye movements are locate |
https://en.wikipedia.org/wiki/MELCOR | MELCOR is a fully integrated, engineering-level computer code developed by Sandia National Laboratories for the U.S. Nuclear Regulatory Commission to model the progression of severe accidents in nuclear power plants. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. MELCOR applications include estimation of severe accident source terms, and their sensitivities and uncertainties in a variety of applications.
See also
Nuclear engineering
Monte Carlo method
Nuclear reactor
MCNP
External links
SNL MELCOR website
NRC "Obtaining MELCOR" site
Wikiversity: Nuclear Engineering
Nuclear safety and security
Physics software |
https://en.wikipedia.org/wiki/Foramen%20of%20Panizza | The foramen of Panizza (named for anatomist Bartolomeo Panizza) is a hole that connects the left and right aorta as they leave the heart of all animals of the order Crocodilia.
Crocodilians have a completely separated ventricle with deoxygenated blood from the body, or systemic circulation, in the right ventricle and oxygenated blood from the lungs, or pulmonary circulation, in the left ventricle, as in birds and mammals.
Two vessels, the left aorta and the pulmonary artery, exit the right ventricle. Blood from the right ventricle goes to the lungs through the pulmonary artery, as in mammals and birds. However, when a unique active valve leading to the pulmonary artery contracts, pressure in the right ventricle can increase, and blood can leave the right ventricle, enter the left aortic arch, and therefore bypass the pulmonary circulation.
The foramen of Panizza connects the left and right aorta. Deoxygenated blood from the right ventricle, sitting in the left aorta, can flow into the right aorta through the foramen of Panizza. When the heart is relaxed, some oxygenated blood from the left ventricle, sitting in the right aorta, can flow into the left aorta across the foramen of Panizza. However, some species of crocodilians have regulatory sphincters that prevent unwanted flow of blood through the foramen of Panizza when the animal is not diving.
Footnotes |
https://en.wikipedia.org/wiki/Desktop%20Management%20Interface | The Desktop Management Interface (DMI) generates a standard framework for managing and tracking components in a desktop, notebook or server computer, by abstracting these components from the software that manages them. The development of DMI (version 2.0 released June 24, 1998) marked the first move by the Distributed Management Task Force (DMTF) into desktop-management standards.
Before the introduction of DMI, no standardized source of information could provide details about components in a personal computer.
Due to the rapid development of DMTF technologies, such as Common Information Model (CIM), the DMTF defined an "End of Life" process for DMI, which ended on March 31, 2005.
From 1999, Microsoft required OEMs and BIOS vendors to support the DMI interface/data-set in order to have Microsoft certification.
DMI and SMBIOS
DMI exposes system data (including the System Management BIOS (SMBIOS) data) to management software, but the two specifications function independently.
DMI is commonly confused with SMBIOS, which was actually called DMIBIOS in its first revisions.
Optional additional services: MIF data and MIF routines
When software queries a memory-resident agent that resides in the background, it responds by sending data in MIFs (Management Information Format) or activating MIF routines. Static data in a MIF would contain items such as model ID, serial number, memory- and port-addresses. A MIF routine could read memory and report its contents.
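As a purely illustrative sketch, the static portion of such data can be pictured as a table of component attributes that an agent answers queries from; the attribute names, values, and Python structure below are assumptions for illustration, not the real DMTF MIF format:

```python
# Hypothetical, simplified stand-in for static MIF data; real MIFs use a
# structured text format defined by the DMTF, not a Python dict.
MIF_STATIC = {
    "baseboard": {
        "model_id": "X99-EX1",       # assumed example value
        "serial_number": "SN-0042",  # assumed example value
    },
    "memory": {"base_address": "0x000A0000"},
}

def query_mif(component, attribute):
    """Return a static MIF-style attribute, or None if it is not recorded."""
    return MIF_STATIC.get(component, {}).get(attribute)

print(query_mif("baseboard", "serial_number"))  # SN-0042
print(query_mif("port", "io_address"))          # None: not in the table
```

A MIF routine, by contrast, would execute code on demand (for example, reading memory contents) rather than returning a stored value.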
DMI and SNMP
DMI can co-exist with SNMP and other management protocols. For example, when an SNMP query arrives, DMI can fill out the SNMP MIB with data from its MIF. A single workstation or server can serve as a proxy agent that would contain the SNMP module and service an entire LAN segment of DMI-capable machines.
See also
dmidecode
Desktop management
lspci
System Management BIOS
Web-Based Enterprise Management (WBEM)
WS-Management |
https://en.wikipedia.org/wiki/Ankle%E2%80%93brachial%20pressure%20index | The ankle-brachial pressure index (ABPI) or ankle-brachial index (ABI) is the ratio of the blood pressure at the ankle to the blood pressure in the upper arm (brachium). Compared to the arm, lower blood pressure in the leg suggests blocked arteries due to peripheral artery disease (PAD). The ABPI is calculated by dividing the systolic blood pressure at the ankle by the systolic blood pressure in the arm.
Method
The patient must be placed supine, without the head or any extremities dangling over the edge of the table. Measurement of ankle blood pressures in a seated position will grossly overestimate the ABI (by approximately 0.3).
A Doppler ultrasound blood flow detector, commonly called a Doppler wand or Doppler probe, and a sphygmomanometer (blood pressure cuff) are usually needed. The blood pressure cuff is inflated proximal to the artery in question until the pulse in the artery, as detected by the Doppler probe, ceases. The cuff is then slowly deflated; the pressure in the cuff at the moment the pulse is re-detected indicates the systolic pressure of that artery.
The higher systolic reading of the left and right arm brachial artery is generally used in the assessment. The pressures in each foot's posterior tibial artery and dorsalis pedis artery are measured with the higher of the two values used as the ABI for that leg.
ABPI_Leg = P_Leg / P_Arm, where P_Leg is the systolic blood pressure of the dorsalis pedis or posterior tibial arteries, and P_Arm is the higher of the left and right arm brachial systolic blood pressures.
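As an illustrative sketch (not from the source), the selection rules above, the higher of the two foot-artery pressures per leg divided by the higher of the two brachial pressures, translate directly into code; the pressure values are arbitrary illustrative numbers in mmHg:

```python
def abpi(dorsalis_pedis, posterior_tibial, left_brachial, right_brachial):
    """Ankle-brachial index for one leg: the higher foot-artery systolic
    pressure over the higher of the two brachial systolic pressures."""
    p_leg = max(dorsalis_pedis, posterior_tibial)
    p_arm = max(left_brachial, right_brachial)
    return p_leg / p_arm

# Illustrative values (mmHg): a result near 1.0, in the typical normal range.
print(round(abpi(dorsalis_pedis=130, posterior_tibial=126,
                 left_brachial=120, right_brachial=124), 2))  # 1.05
```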
The ABPI test is a popular tool for the non-invasive assessment of peripheral vascular disease (PVD). Studies have shown that ABPI has a sensitivity of 90% and a corresponding specificity of 98% for detecting hemodynamically significant stenosis (more than 50% narrowing) in major leg arteries, as defined by angiogram.
However, ABPI has known issues:
ABPI is known to be unreliable on patients with arterial calci |
https://en.wikipedia.org/wiki/Test%20theory | In experimental physics, a test theory tells experimenters how to perform particular comparisons between specific theories or classes of theory.
Without a good reference test theory, these experiments can be difficult to construct. Different theories often define relationships and parameters in different, often incompatible, ways. Sometimes, physical theories and models that nominally produce significantly diverging predictions can be found to produce very similar, even identical, predictions, once definitional differences are taken into account.
A good test theory should identify potential sources of definitional bias in the way that experiments are constructed. It should also be able to deal with a wide range of possible objections to experimental tests based upon it. Discovery that a test theory has serious omissions can undermine the validity of experimental work that is designed according to that theory.
Examples
Parameterized post-Newtonian formalism is used to compare theories of gravity.
Test theories of special relativity are useful when designing experiments to look for possible violations of Poincare symmetry.
Physics experiments |
https://en.wikipedia.org/wiki/E-Amusement | e-Amusement, stylized as e-amusement, is an online service operated by Konami, used primarily for online functionality on its arcade video games. The system is used primarily to save progress and unlockable content between games, participate in internet high score lists, access other exclusive features depending on the game, and access the Paseli digital currency service.
The system uses online user accounts tied to a contactless smart card system called the "e-Amusement Pass". Users log into an e-Amusement enabled game by holding their pass up to the card reader and using a PIN.
The system is similar to parts of the functionality of the rival Taito NESYS and SEGA ALL.Net systems.
Cards
Magnetic cards
Prior to 2006, e-Amusement used magnetic stripe cards called Entry Passes that were sold separately for each game using the platform, either from an arcade desk or through a vending machine. Each card held data for one player, and typically came in 5 designs specific to the game (usually featuring character artwork). "Special" cards were also distributed from time to time, often alongside the console versions of certain games; these cards could sometimes be used to unlock special content in their respective game.
e-Amusement Pass
In 2006, Konami began to phase out the original magnetic card system in favor of the e-Amusement Pass, an IC contactless smart card that works across all games upgraded to use the new system. The new cards also use a 4-digit PIN for security. In the event the pass is lost, its existing data can be transferred to a new pass through Konami's website.
The pass can also be linked to a mobile phone "Konami NetDX" account, allowing players to access their scores and other data on their mobile phone. On some games, customization of the game can also be done through the NetDX system. However, only smartphones sold in Japan with FeliCa RFID support can use this function.
Beatmania IIDX
Beatmania IIDX is one of the prominent gam |
https://en.wikipedia.org/wiki/Twin-scaling | Twin-scaling is a method of propagating plant bulbs that have a basal plate, such as:
Hippeastrum, Narcissus, Galanthus and other members of the Amaryllidaceae;
some members of the lily family Liliaceae;
Lachenalia, Veltheimia and other members of the Hyacinthaceae.
Purpose
Twin-scaling is practiced by professional growers and skilled amateurs to increase bulbs that would naturally propagate very slowly, or to speed up the production of desirable cultivars. Using twin-scaling, it is possible to multiply one bulb into 16 to 32 (or more) viable bulbs in a couple of years, whereas natural propagation might only lead to a doubling every two years or so. It is one of a number of propagation techniques (such as "scooping", "scoring" and "chipping") based on the fact that an accidentally damaged bulb will often regenerate by forming small bulblets or bulbils on the damaged surface. Commercial growers have obtained as many as 100 twin-scales from a single bulb.
Method
The dormant bulb which is to be twin-scaled has its surface sterilized by removing its dry tunic and carefully trimming off its roots and any dead tissue, while leaving a layer of sound basal plate intact, then dipping the clean bulb in dilute bleach (or another suitable disinfectant). The bulb is then sliced cleanly from top to bottom several times, creating 8 or 16 segments, depending on the size of the bulb. At this stage, the segments are called "chips" (many growers are content with simply chipping a bulb into 4 or 8 and do not divide the bulb further).
True twin-scaling involves further subdivision of the chips to create pairs of scales, joined by a small part of the basal plate. The twin-scales are then treated with fungicide before being mixed with moist, sterile Vermiculite, sealed in plastic bags and left in a fairly warm, dark location until new bulblets form. Some species may require alternate periods of warm and cool storage to initiate bulblet growth.
The tiny bulbs are planted into pots o |
https://en.wikipedia.org/wiki/Snow%20algae | Snow algae are a group of freshwater micro-algae which grow in the alpine and polar regions of the earth. These algae have been observed to come in a variety of colors associated with the individual species, stage of life, and topography or geography; this variation is associated with both albedo differences of the snowy habitat and the presence of micro-invertebrates. A typical snow alga of the Alps and polar regions is Chlamydomonas nivalis. Snow algae play a critical role in trophic organization as primary producers, consumed primarily by tardigrades and rotifers. Snow algae have also been found to travel great distances, carried by winds.
https://en.wikipedia.org/wiki/Trophic%20egg | A trophic egg is an egg whose function is not reproduction but nutrition; in essence, the trophic egg serves as food for offspring hatched from viable eggs. In most species that produce them, a trophic egg is usually an unfertilised egg. The production of trophic eggs has been observed in a highly diverse range of species, including fish, amphibians, spiders and insects. The function is not limited to any particular level of parental care, but occurs in some sub-social species of insects, the spider A. ferox, and a few other species like the frogs Leptodactylus fallax and Oophaga, and the catfish Bagrus meridionalis.
Parents of some species deliver trophic eggs directly to their offspring, whereas some other species simply produce the trophic eggs after laying the viable eggs; they then leave the trophic eggs where the viable offspring are likely to find them.
The mackerel sharks present the most extreme example of proximity between reproductive eggs and trophic eggs; their viable offspring feed on trophic eggs in utero.
Despite the diversity of species and life strategies in which trophic eggs occur, all trophic egg functions appear to derive from similar ancestral functions, which once amounted to sacrificing potential future offspring in order to provide food for the survival of rival (usually earlier) offspring. In more derived examples the trophic eggs are not viable, being neither fertilised nor, in some cases, even fully formed; they therefore do not represent actual potential offspring, although they still represent a parental investment corresponding to the amount of food it took to produce them.
Morphology
Trophic eggs are not always morphologically distinct from normal reproductive eggs; however, if there is no physical distinction, there tends to be some kind of specialised behaviour in the way the parents deliver the trophic eggs.
In some beetles, trophic eggs are paler in colour and softer in texture than reproductive eggs, with a smooth |
https://en.wikipedia.org/wiki/Polyhedron%20model | A polyhedron model is a physical construction of a polyhedron, constructed from cardboard, plastic board, wood board or other panel material, or, less commonly, solid material.
Since there are 75 uniform polyhedra, including the five regular convex polyhedra (the Platonic solids), the thirteen Archimedean solids and the four Kepler-Poinsot polyhedra, and since there are also five regular polyhedral compounds, constructing or collecting polyhedron models has become a common mathematical recreation. Polyhedron models are found in mathematics classrooms much as globes are in geography classrooms.
Polyhedron models are notable as three-dimensional proofs-of-concept of geometric theories. Some polyhedra also make good centerpieces, tree toppers, holiday decorations, or symbols. The Merkaba religious symbol, for example, is a stellated octahedron. Constructing large models offers challenges in structural engineering design.
Construction
Construction begins by choosing a size of the model, either the length of its edges or the height of the model. The size will dictate the material, the adhesive for edges, the construction time and the method of construction.
The second decision involves colours. A single-colour cardboard model is easiest to construct — and some models can be made by folding a pattern, called a net, from a single sheet of cardboard. Choosing colours requires geometric understanding of the polyhedron. One way is to colour each face differently. A second way is to colour all square faces the same, all pentagonal faces the same, and so forth. A third way is to colour opposite faces the same. Many polyhedra are also coloured such that no same-coloured faces touch each other along an edge or at a vertex.
For example, a 20-face icosahedron can use twenty colours, one colour, ten colours, or five colours, respectively.
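The colour counts quoted above can be tied back to the four schemes with a tiny sketch (the scheme names are paraphrases of the list above, and the counts are specific to the icosahedron's 20 triangular faces):

```python
# Colours needed for a regular icosahedron (20 triangular faces) under the
# four colouring schemes described above. A lookup sketch, not a general
# colouring algorithm.

FACES = 20

SCHEMES = {
    "each face a different colour": FACES,   # 20 colours
    "faces of the same shape alike": 1,      # all faces are triangles
    "opposite faces alike": FACES // 2,      # 10 colours
    "no adjacent faces alike": 5,            # a proper 5-colouring exists
}

for scheme, n in SCHEMES.items():
    print(f"{scheme}: {n} colour(s)")
```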
An alternative way for polyhedral compound models is to use a different colour for each polyhedron component.
Net templates are then made. One way is to copy templates from a polyhedron-making bo |
https://en.wikipedia.org/wiki/MESM | MESM (Ukrainian: МЕОМ, Мала Електронна Обчислювальна Машина; Russian: МЭСМ, Малая Электронно-Счетная Машина; 'Small Electronic Calculating Machine') was the first universally programmable electronic computer in the Soviet Union. Some authors have also described it as the first in continental Europe, even though the electromechanical computers Zuse Z4 and the Swedish BARK preceded it.
Overview
MESM was created by a team of scientists under the direction of Sergei Alekseyevich Lebedev from the Kiev Institute of Electrotechnology in the Ukrainian SSR, at Feofaniya (near Kyiv).
Initially, MESM was conceived as a layout, or model, of a Large Electronic Calculating Machine; the letter "M" in the name stood for "model" (prototype).
Work on the machine was exploratory in nature, intended to experimentally test the principles of constructing universal digital computers. After the first successes, and in order to meet the government's extensive need for computer technology, it was decided to develop the prototype into a full-fledged machine capable of "solving real problems". MESM became operational in 1950. It had about 6,000 vacuum tubes, consumed 25 kW of power, and could perform approximately 3,000 operations per minute.
Creation and operation history
The principal computer architecture scheme was ready by the end of 1949, along with schematic diagrams of individual blocks.
In 1950 the computer was assembled in a two-story building of a former convent hostel in Feofania, which had housed a psychiatric hospital before the Second World War.
November 6, 1950: the team performed the first test run. The test task was:
January 4, 1951: the first useful calculations were performed: computing the factorial of a number and raising a number to a power. The computer was shown to a special commission of the USSR Academy of Sciences led by Mstislav Keldysh.
December 25, 1951: official government testing was passed successfully. The USSR Academy of Sciences and Mstislav Keldysh began regul
https://en.wikipedia.org/wiki/DNA%20mismatch%20repair | DNA mismatch repair (MMR) is a system for recognizing and repairing erroneous insertion, deletion, and mis-incorporation of bases that can arise during DNA replication and recombination, as well as repairing some forms of DNA damage.
Mismatch repair is strand-specific. During DNA synthesis the newly synthesised (daughter) strand will commonly include errors. In order to begin repair, the mismatch repair machinery distinguishes the newly synthesised strand from the template (parental). In Gram-negative bacteria, transient hemimethylation distinguishes the strands (the parental strand is methylated and the daughter is not). However, in other prokaryotes and in eukaryotes, the exact mechanism is not clear. It is suspected that, in eukaryotes, newly synthesized lagging-strand DNA transiently contains nicks (before being sealed by DNA ligase) and provides a signal that directs mismatch proofreading systems to the appropriate strand. This implies that these nicks must also be present in the leading strand, and evidence for this has recently been found.
Recent work has shown that nicks are sites for RFC-dependent loading of the replication sliding clamp, proliferating cell nuclear antigen (PCNA), in an orientation-specific manner, such that one face of the donut-shaped protein is juxtaposed toward the 3'-OH end at the nick. Loaded PCNA then directs the action of the MutLalpha endonuclease to the daughter strand in the presence of a mismatch and MutSalpha or MutSbeta.
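The strand-directed logic described above, in which mismatches are detected and then corrected on the newly synthesised strand using the parental strand as the trusted template, can be sketched as a toy model (real MMR excises and resynthesises a whole tract of the daughter strand, not single bases):

```python
# Toy model of strand-directed mismatch repair: mismatched positions are
# corrected on the daughter (newly synthesised) strand, using the parental
# strand as template. Real MMR removes and resynthesises a whole tract.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def find_mismatches(parental, daughter):
    """Positions where the daughter base does not pair with the parental base."""
    return [i for i, (p, d) in enumerate(zip(parental, daughter))
            if COMPLEMENT[p] != d]

def repair(parental, daughter):
    """Rewrite mismatched daughter bases from the parental template."""
    fixed = list(daughter)
    for i in find_mismatches(parental, daughter):
        fixed[i] = COMPLEMENT[parental[i]]
    return "".join(fixed)

parental = "GATTACA"
daughter = "CTGATGT"   # position 2 has a G opposite parental T (a G/T mismatch)

print(find_mismatches(parental, daughter))  # [2]
print(repair(parental, daughter))           # CTAATGT
```

The example mismatch is the classic G/T pairing mentioned below; the repair trusts the parental strand, just as hemimethylation or nicks direct the real machinery.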
Any mutational event that disrupts the superhelical structure of DNA carries with it the potential to compromise the genetic stability of a cell. The fact that the damage detection and repair systems are as complex as the replication machinery itself highlights the importance evolution has attached to DNA fidelity.
Examples of mismatched bases include a G/T or A/C pairing (see DNA repair). Mismatches are commonly due to tautomerization of bases during DNA replication. The damage is repaired by recognition of the deformity |
https://en.wikipedia.org/wiki/Base%20excision%20repair | Base excision repair (BER) is a cellular mechanism, studied in the fields of biochemistry and genetics, that repairs damaged DNA throughout the cell cycle. It is responsible primarily for removing small, non-helix-distorting base lesions from the genome. The related nucleotide excision repair pathway repairs bulky helix-distorting lesions. BER is important for removing damaged bases that could otherwise cause mutations by mispairing or lead to breaks in DNA during replication. BER is initiated by DNA glycosylases, which recognize and remove specific damaged or inappropriate bases, forming AP sites. These are then cleaved by an AP endonuclease. The resulting single-strand break can then be processed by either short-patch (where a single nucleotide is replaced) or long-patch BER (where 2–10 new nucleotides are synthesized).
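The pathway just outlined (glycosylase, then AP endonuclease, then repair synthesis) can be reduced to a toy short-patch sketch using uracil, from deaminated cytosine, as the lesion; the abasic site is marked with `_`, and the aligned complementary strand serves as template (an illustrative simplification, not a model of the real enzymes):

```python
# Toy short-patch BER: a uracil-DNA-glycosylase-like step turns U into an
# abasic (AP) site, and the repair-synthesis step fills the site with the
# base complementary to the aligned template strand. Illustrative only.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def glycosylase(strand, damaged_base="U"):
    """Excise the damaged base, leaving abasic sites marked '_'."""
    return strand.replace(damaged_base, "_")

def short_patch_repair(strand, template):
    """Replace each abasic site with the base dictated by the template."""
    return "".join(COMPLEMENT[t] if s == "_" else s
                   for s, t in zip(strand, template))

damaged = "GAUTC"    # cytosine at index 2 was deaminated to uracil
template = "CTGAG"   # complementary strand, aligned position by position

ap_strand = glycosylase(damaged)                # "GA_TC"
print(short_patch_repair(ap_strand, template))  # "GACTC" -- the C is restored
```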
Lesions processed by BER
Single bases in DNA can be chemically damaged by a variety of mechanisms, the most common being deamination, oxidation, and alkylation. These modifications can affect the ability of the base to hydrogen-bond, resulting in incorrect base-pairing and, as a consequence, mutations in the DNA. For example, incorporation of adenine across from 8-oxoguanine during DNA replication causes a G:C base pair to be mutated to T:A. Other examples of base lesions repaired by BER include:
Oxidized bases: 8-oxoguanine, 2,6-diamino-4-hydroxy-5-formamidopyrimidine (FapyG, FapyA)
Alkylated bases: 3-methyladenine, 7-methylguanine
Deaminated bases: hypoxanthine formed from deamination of adenine; xanthine formed from deamination of guanine. (Thymine produced by deamination of 5-methylcytosine is more difficult to recognize, because it is a normal DNA base, but it can be repaired by mismatch-specific glycosylases)
Uracil inappropriately incorporated in DNA or formed by deamination of cytosine
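The deaminated-base lesions listed above amount to a small mapping (a sketch; the comment notes why the 5-methylcytosine product is the awkward case):

```python
# Deamination products of DNA bases, as listed above. The product of
# 5-methylcytosine, thymine, is a normal DNA base, which is why that
# lesion is harder for glycosylases to recognize.

DEAMINATION = {
    "adenine": "hypoxanthine",
    "guanine": "xanthine",
    "cytosine": "uracil",
    "5-methylcytosine": "thymine",
}

for base, product in sorted(DEAMINATION.items()):
    print(f"{base} -> {product}")
```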
In addition to base lesions, the downstream steps of BER are also utilized to repair single-strand breaks.
The choice between long-patch and short-patch r |
https://en.wikipedia.org/wiki/Shore%20power | Shore power or shore supply is the provision of shoreside electrical power to a ship at berth while its main and auxiliary engines are shut down. While the term denotes shore as opposed to off-shore, it is sometimes applied to aircraft or land-based vehicles (such as campers, heavy trucks with sleeping compartments and tour buses), which may plug into grid power when parked for idle reduction.
The source for land-based power may be grid power from an electric utility company, or an external remote generator. Such generators may be powered by diesel or by renewable energy sources such as wind or solar.
Shore power saves consumption of fuel that would otherwise be used to power vessels while in port, and eliminates the air pollution associated with consumption of that fuel. A port city may have anti-idling laws that require ships to use shore power. Use of shore power may facilitate maintenance of the ship's engines and generators, and reduces noise.
Oceangoing ships
"Cold ironing" is specifically a shipping industry term that came into use when all ships had coal-fired engines. When a ship tied up at port, there was no need to continue to feed the fire and the iron engines would cool down, eventually going completely cold – hence the term "cold ironing". Commercial ships can use shore-supplied power for services such as cargo handling, pumping, ventilation and lighting while in port, they need not run their own diesel engines, reducing air pollution emissions. Examples are ferries and cruise ships for hotel electric power, and a salmon feeder ship uses shore power while at the salmon farm.
Small craft
On small private boats, electrical power supply on board is usually provided by 12 or 24 volt DC batteries whilst at sea unless the vessel has a generator. When the vessel is berthed in a marina or harbourside, mains electricity is often offered via a shore power connection. This allows the vessel to use a battery charger to recharge batteries and also |