source | text |
|---|---|
https://en.wikipedia.org/wiki/Tim%20Paterson | Tim Paterson (born 1 June 1956) is an American computer programmer, best known for creating 86-DOS, an operating system for the Intel 8086. This system emulated the application programming interface (API) of CP/M, which was created by Gary Kildall. 86-DOS later formed the basis of MS-DOS, the most widely used personal computer operating system in the 1980s.
Biography
Paterson was educated in the Seattle Public Schools, graduating from Ingraham High School in 1974. He attended the University of Washington, working as a repair technician for The Retail Computer Store in the Green Lake area of Seattle, Washington, and graduated magna cum laude with a degree in Computer Science in June 1978. He went to work for Seattle Computer Products as a designer and engineer. He designed the hardware of Microsoft's Z-80 SoftCard which had a Z80 CPU and ran the CP/M operating system on an Apple II.
A month later, Intel released the 8086 CPU, and Paterson went to work designing an S-100 8086 board, which went to market in November 1979. The only commercial software that existed for the board was Microsoft's Standalone Disk BASIC-86. The standard CP/M operating system at the time was not available for this CPU and without a true operating system, sales were slow. Paterson began work on QDOS (Quick and Dirty Operating System) in April 1980 to fill that void, copying the APIs of CP/M from references, including the published CP/M manual, so that it would be highly compatible. QDOS was soon renamed as 86-DOS. Version 0.10 was complete by July 1980. By version 1.14, 86-DOS had grown to lines of assembly code. In December 1980, Microsoft secured the rights to market 86-DOS to other hardware manufacturers.
While acknowledging that he made 86-DOS compatible with CP/M, Paterson has maintained that the 86-DOS program was his original work and has denied allegations that he referred to CP/M code while writing it. When a book appeared in 2004 claiming that 86-DOS was an unoriginal "rip-off" o |
https://en.wikipedia.org/wiki/Mackey%20topology | In functional analysis and related areas of mathematics, the Mackey topology, named after George Mackey, is the finest topology for a topological vector space which still preserves the continuous dual. In other words the Mackey topology does not make linear functions continuous which were discontinuous in the default topology. A topological vector space (TVS) is called a Mackey space if its topology is the same as the Mackey topology.
The Mackey topology is the opposite of the weak topology, which is the coarsest topology on a topological vector space which preserves the continuity of all linear functions in the continuous dual.
The Mackey–Arens theorem states that all possible dual topologies are finer than the weak topology and coarser than the Mackey topology.
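The Mackey–Arens ordering can be written out explicitly; a sketch in standard dual-pair notation (using σ for the weak and τ for the Mackey topology):

```latex
% For a dual pair (X, X'), a locally convex topology \mathcal{T} on X
% is a dual topology, i.e. (X, \mathcal{T})' = X', if and only if
\sigma(X, X') \;\subseteq\; \mathcal{T} \;\subseteq\; \tau(X, X')
```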
Definition
Definition for a pairing
Given a pairing (X, Y, b), the Mackey topology on X induced by (X, Y, b), denoted by τ(X, Y, b), is the polar topology defined on X by using the set of all σ(Y, X, b)-compact disks in Y.
When X is endowed with the Mackey topology then it will be denoted by X_τ(X, Y, b) or simply X_τ(X, Y) or X_τ if no ambiguity can arise.
A linear map F : X → W is said to be Mackey continuous (with respect to pairings (X, Y, b) and (W, Z, c)) if F : (X, τ(X, Y, b)) → (W, τ(W, Z, c)) is continuous.
Definition for a topological vector space
The definition of the Mackey topology for a topological vector space (TVS) is a specialization of the above definition of the Mackey topology of a pairing.
If X is a TVS with continuous dual space X′, then the evaluation map (x, x′) ↦ x′(x) on X × X′ is called the canonical pairing.
The Mackey topology on a TVS X, denoted by τ(X, X′), is the Mackey topology on X induced by the canonical pairing (X, X′).
That is, the Mackey topology is the polar topology on X obtained by using the set of all weak*-compact disks in X′.
When X is endowed with the Mackey topology then it will be denoted by X_τ(X, X′) or simply X_τ if no ambiguity can arise.
A linear map F : X → Y between TVSs is Mackey continuous if F : (X, τ(X, X′)) → (Y, τ(Y, Y′)) is continuous.
Examples
Every metrizable locally convex space with continuous dual carries the Mackey topology, that is τ(X, X′), or to put it more succin |
https://en.wikipedia.org/wiki/Thermogenic%20plant | Thermogenic plants have the ability to raise their temperature above that of the surrounding air. Heat is generated in the mitochondria, as a secondary process of cellular respiration called thermogenesis. Alternative oxidase and uncoupling proteins similar to those found in mammals enable the process, which is still poorly understood.
The role of thermogenesis
Botanists are not completely sure why thermogenic plants generate large amounts of excess heat, but most agree that it has something to do with increasing pollination rates. The most widely accepted theory states that the endogenous heat helps in spreading chemicals that attract pollinators to the plant. For example, the Voodoo lily uses heat to help spread its smell of rotting meat. This smell draws in flies which begin to search for the source of the smell. As they search the entire plant for the dead carcass, they pollinate the plant.
Other theories state that the heat may provide a heat reward for the pollinator: pollinators are drawn to the flower for its warmth. This theory has less support because most thermogenic plants are found in tropical climates.
Yet another theory is that the heat helps protect against frost damage, allowing the plant to germinate and sprout earlier than otherwise. For example, the skunk cabbage generates heat, which allows it to melt its way through a layer of snow in early spring. The heat, however, is mostly used to help spread its pungent odor and attract pollinators.
Characteristics of thermogenic plants
Most thermogenic plants tend to be rather large. This is because the smaller plants do not have enough volume to create a considerable amount of heat. Large plants, on the other hand, have a lot of mass to create and retain heat.
Thermogenic plants are also protogynous, meaning that the female part of the plant matures before the male part of the same plant. This reduces inbreeding considerably, as such a plant can be fertilized only by pollen from a different plant. |
https://en.wikipedia.org/wiki/Knotted%20protein | Knotted proteins are proteins whose backbones entangle themselves in a knot. One can imagine pulling a protein chain from both termini, as though pulling a string from both ends. When a knotted protein is “pulled” from both termini, it does not get disentangled. Knotted proteins are very rare, making up only about one percent of the proteins in the Protein Data Bank, and their folding mechanisms and function are not well understood. Although there are experimental and theoretical studies that hint at some answers, systematic answers to these questions have not yet been found.
Although a number of computational methods have been developed to detect protein knots, there are still no completely automatic methods that can detect protein knots without manual intervention, owing to missing residues or chain breaks in X-ray structures and to nonstandard PDB formats.
Most of the knots discovered in proteins are deep trefoil (3₁) knots. Figure-eight knots (4₁), three-twist knots (5₂), and Stevedore knots (6₁) have also been discovered. Recently, the use of machine learning techniques for protein structure prediction resulted in the highly accurate prediction of a 6₃ knot. Furthermore, using the same techniques, composite knots (namely 3₁#3₁) were found.
Mathematical interpretation
Mathematically, a knot is defined as a subset of three-dimensional space that is homeomorphic to a circle. According to this definition, a knot must exist in a closed loop, while knotted proteins instead exist within open, unclosed chains. In order to apply mathematical knot theory to knotted proteins, various strategies can be used to create an artificial closed loop. One such strategy is to choose a point in space at infinite distance to be connected to the protein's N- and C-termini through a virtual bond, thus the protein can be treated as a closed loop. Another such strategy is to use stochastic methods that create random closures.
Depth of the knot
The depth of a protein knot relates to the abi |
https://en.wikipedia.org/wiki/Applied%20general%20equilibrium | In mathematical economics, applied general equilibrium (AGE) models were pioneered by Herbert Scarf at Yale University in 1967, in two papers, and a follow-up book with Terje Hansen in 1973, with the aim of estimating the Arrow–Debreu model of general equilibrium theory with empirical data, to provide “a general method for the explicit numerical solution of the neoclassical model” (Scarf with Hansen 1973: 1).
Scarf's method iterated a sequence of simplicial subdivisions which would generate a decreasing sequence of simplices around any solution of the general equilibrium problem. With sufficiently many steps, the sequence would produce a price vector that clears the market.
Brouwer's Fixed Point theorem states that a continuous mapping of a simplex into itself has at least one fixed point. This paper describes a numerical algorithm for approximating, in a sense to be explained below, a fixed point of such a mapping (Scarf 1967a: 1326).
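Scarf's simplicial algorithm is beyond a short example, but the one-dimensional case conveys the idea of numerically approximating a Brouwer fixed point: any continuous map of [0, 1] into itself crosses the diagonal, so bisection on g(x) = f(x) − x brackets a fixed point to any tolerance. This is a sketch of the general idea, not Scarf's method, and the map f used below is an arbitrary illustration:

```python
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
    """Bisect g(x) = f(x) - x on [lo, hi] for a continuous f: [0,1] -> [0,1].

    For such a map, g(lo) >= 0 and g(hi) <= 0 always hold, so a sign
    change (hence a fixed point) is guaranteed.
    """
    g = lambda x: f(x) - x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative map: f(x) = cos(x), which sends [0, 1] into itself.
import math
x_star = fixed_point(math.cos)
```

The same bracketing idea, generalized to subdivided simplices in higher dimensions, is what lets Scarf's method home in on a market-clearing price vector.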
Scarf never built an AGE model, but hinted that “these novel numerical techniques might be useful in assessing consequences for the economy of a change in the economic environment” (Kehoe et al. 2005, citing Scarf 1967b). His students elaborated the Scarf algorithm into a tool box, where the price vector could be solved for any changes in policies (or exogenous shocks), giving the equilibrium ‘adjustments’ needed for the prices. This method was first used by Shoven and Whalley (1972 and 1973), and then was developed through the 1970s by Scarf’s students and others.
Most contemporary applied general equilibrium models are numerical analogs of traditional two-sector general equilibrium models popularized by James Meade, Harry Johnson, Arnold Harberger, and others in the 1950s and 1960s. Earlier analytic work with these models has examined the distortionary effects of taxes, tariffs, and other policies, along with functional incidence questions. More recent applied models, including those discussed here, provide numerical |
https://en.wikipedia.org/wiki/GeneSweep | GeneSweep or Gene Sweepstake was a sweepstake and scientific wager for scientists to bet on the total number of genes in the human genome. The sweepstake was started at a Cold Spring Harbor Laboratory conference in 2000. Initially, bets could be placed for $1, which was raised to $5 in 2001 and to $20 in 2002. The cost of placing a bet increased significantly because later participants were expected to have much more accurate information available to inform their guesses. By May 23, 2000, 228 bets had been placed, with the average number of predicted genes among them being 62,598.
Winning bets in 2003
On May 30, 2003, Ewan Birney of the European Bioinformatics Institute, who had organized the pool, announced the winner: Lee Rowen of the Institute for Systems Biology. Rowen had guessed that the human genome would contain 25,947 genes, which was the closest to the estimated number of 24,847 given by the Ensembl genome database project. In addition to being the winning guess, this was also the lowest of the more than 460 bets that were placed. Rowen split the $1,200 prize pool with Paul Dear of the Medical Research Council (MRC) and Olivier Jaillon of Genoscope. Rowen credited Jean Weissenbach of Genoscope with convincing her that the true number of human genes would be relatively low. All three winners shared the prize because they were the only bettors who guessed under 30,000, and Birney was certain that the total number of genes was less than that. The sweepstakes had always been planned to end in 2003, because Birney had expected that Ensembl would have completed counting the number of human genes by then. Once it became clear that they would need more time to arrive at an exact number, Birney initially planned on extending the sweepstakes for five more years. However, David Stewart convinced Birney to choose a winner by pointing out that the rules specified that a winner had to be chosen in 2003, with no exceptions. Birney noted that, though the exact number was |
https://en.wikipedia.org/wiki/Truncation | In mathematics and computer science, truncation is limiting the number of digits right of the decimal point.
Truncation and floor function
Truncation of positive real numbers can be done using the floor function. Given a number x to be truncated and n, the number of digits to be kept behind the decimal point, the truncated value of x is

$$\operatorname{trunc}(x, n) = \frac{\lfloor 10^n \cdot x \rfloor}{10^n}.$$

However, for negative numbers truncation does not round in the same direction as the floor function: truncation always rounds toward zero, while the floor function rounds towards negative infinity. For a given negative number x, the ceiling function is used instead:

$$\operatorname{trunc}(x, n) = \frac{\lceil 10^n \cdot x \rceil}{10^n}.$$

In some cases trunc(x, 0) is written as [x]. See Notation of floor and ceiling functions.
Causes of truncation
With computers, truncation can occur when a decimal number is typecast as an integer; it is truncated to zero decimal digits because integers cannot store non-integer real numbers.
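The round-toward-zero behaviour described above follows directly from the floor/ceiling definitions; a minimal sketch (the function name `truncate` is illustrative, not a standard library API):

```python
import math

def truncate(x, n=0):
    """Truncate x to n digits after the decimal point, rounding toward zero."""
    factor = 10 ** n
    if x >= 0:
        return math.floor(x * factor) / factor
    return math.ceil(x * factor) / factor

# Truncation and floor agree for positive numbers but not for negative ones:
# truncate(-2.7) gives -2.0, while math.floor(-2.7) gives -3.
```

This is also exactly what Python's built-in `int()` does when converting a float, which mirrors the typecasting behaviour mentioned above.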
In algebra
An analogue of truncation can be applied to polynomials. In this case, the truncation of a polynomial P to degree n can be defined as the sum of all terms of P of degree n or less. Polynomial truncations arise in the study of Taylor polynomials, for example.
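Truncating a polynomial to degree n amounts to dropping the higher-degree terms; a sketch with coefficients stored lowest-degree first (a representation chosen here purely for illustration):

```python
def truncate_poly(coeffs, n):
    """Keep terms of degree <= n; coeffs[i] is the coefficient of x**i."""
    return coeffs[:n + 1]

# P(x) = 1 + x + x^2/2 + x^3/6 (the degree-3 Taylor polynomial of e^x)
# truncated to degree 2 keeps [1, 1, 0.5].
taylor_e = [1, 1, 0.5, 1 / 6]
```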
See also
Arithmetic precision
Quantization (signal processing)
Precision (computer science)
Truncation (statistics) |
https://en.wikipedia.org/wiki/FCGR1A | High affinity immunoglobulin gamma Fc receptor I is a protein that in humans is encoded by the FCGR1A gene.
Interactions
The FcgRI binds the Fc portion of IgG and causes activation of the host cell via an intracellular ITAM (immunoreceptor tyrosine-based activation motif). |
https://en.wikipedia.org/wiki/Internet%20Key%20Exchange | In computing, Internet Key Exchange (IKE, versioned as IKEv1 and IKEv2) is the protocol used to set up a security association (SA) in the IPsec protocol suite. IKE builds upon the Oakley protocol and ISAKMP. IKE uses X.509 certificates for authentication ‒ either pre-shared or distributed using DNS (preferably with DNSSEC) ‒ and a Diffie–Hellman key exchange to set up a shared session secret from which cryptographic keys are derived. In addition, a security policy for every peer which will connect must be manually maintained.
History
The Internet Engineering Task Force (IETF) originally defined IKE in November 1998 in a series of publications (Request for Comments) known as RFC 2407, RFC 2408 and RFC 2409:
RFC 2407 defined the Internet IP Security Domain of Interpretation for ISAKMP.
RFC 2408 defined the Internet Security Association and Key Management Protocol (ISAKMP).
RFC 2409 defined the Internet Key Exchange (IKE).
RFC 4306 updated IKE to version two (IKEv2) in December 2005. RFC 4718 clarified some open details in October 2006. RFC 5996 combined these two documents plus additional clarifications into the updated IKEv2, published in September 2010. A later update upgraded the document from Proposed Standard to Internet Standard, published as RFC 7296 in October 2014.
The parent organization of the IETF, the Internet Society (ISOC), has maintained the copyrights of these standards as freely available to the Internet community.
Architecture
Most IPsec implementations consist of an IKE daemon that runs in user space and an IPsec stack in the kernel that processes the actual IP packets.
User-space daemons have easy access to mass storage containing configuration information, such as the IPsec endpoint addresses, keys and certificates, as required. Kernel modules, on the other hand, can process packets efficiently and with minimum overhead—which is important for performance reasons.
The IKE protocol uses UDP packets, usually on port 500, and generally requires 4–6 packets with 2–3 round trips to create an I |
https://en.wikipedia.org/wiki/Discourse%20representation%20theory | In formal linguistics, discourse representation theory (DRT) is a framework for exploring meaning under a formal semantics approach. One of the main differences between DRT-style approaches and traditional Montagovian approaches is that DRT includes a level of abstract mental representations (discourse representation structures, DRS) within its formalism, which gives it an intrinsic ability to handle meaning across sentence boundaries. DRT was created by Hans Kamp in 1981. A very similar theory was developed independently by Irene Heim in 1982, under the name of File Change Semantics (FCS). Discourse representation theories have been used to implement semantic parsers and natural language understanding systems.
Discourse representation structures
DRT uses discourse representation structures (DRS) to represent a hearer's mental representation of a discourse as it unfolds over time. There are two critical components to a DRS:
A set of discourse referents representing entities that are under discussion.
A set of DRS conditions representing information that has been given about discourse referents.
Consider Sentence (1) below:
(1) A farmer owns a donkey.
The DRS of (1) can be notated as (2) below:
(2) [x,y: farmer(x), donkey(y), owns(x,y)]
What (2) says is that there are two discourse referents, x and y, and three discourse conditions farmer, donkey, and owns, such that the condition farmer holds of x, donkey holds of y, and owns holds of the pair x and y.
Informally, the DRS in (2) is true in a given model of evaluation if and only if there are entities in that model that satisfy the conditions. So, if a model contains two individuals, and one is a farmer, the other is a donkey, and the first owns the second, the DRS in (2) is true in that model.
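The truth conditions just described can be mirrored in code: a DRS is true in a model if some assignment of model entities to discourse referents satisfies every condition. A toy encoding, with all names and data structures being illustrative rather than any standard DRT implementation:

```python
from itertools import product

def drs_true(referents, conditions, entities, facts):
    """A DRS [referents : conditions] is true in a model iff some assignment
    of entities to the referents satisfies every condition.

    conditions: list of (predicate, referent-tuple) pairs;
    facts: set of (predicate, entity-tuple) pairs holding in the model.
    """
    for values in product(entities, repeat=len(referents)):
        g = dict(zip(referents, values))  # candidate assignment
        if all((pred, tuple(g[r] for r in args)) in facts
               for pred, args in conditions):
            return True
    return False

# Model: Pedro is a farmer, Chiquita is a donkey, Pedro owns Chiquita.
facts = {("farmer", ("pedro",)), ("donkey", ("chiquita",)),
         ("owns", ("pedro", "chiquita"))}
# DRS for "A farmer owns a donkey": [x, y : farmer(x), donkey(y), owns(x, y)]
result = drs_true(["x", "y"],
                  [("farmer", ("x",)), ("donkey", ("y",)), ("owns", ("x", "y"))],
                  ["pedro", "chiquita"], facts)
```

The existential force of the discourse referents comes out as the search over assignments, matching the informal truth definition above.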
Uttering subsequent sentences results in the existing DRS being updated.
(3) He beats it.
Uttering (3) after (1) results in the DRS in (2) being updated as follows, in (4) (assuming a way to disambiguate which pr |
https://en.wikipedia.org/wiki/Control%20of%20chaos | In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time.
Control of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage. The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics.
Several techniques have been devised for chaos control, but most are developments of two basic approaches: the Ott–Grebogi–Yorke (OGY) method and Pyragas continuous control. Both methods require a previous determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed.
OGY method
Edward Ott, Celso Grebogi and James A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the Ott–Grebogi–Yorke (OGY) method of achieving stabilization of a chosen unstable periodic orbit. In the OGY method, small, wisely chosen kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit.
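As an illustration of such once-per-cycle kicks, consider the logistic map x → r·x·(1 − x) in its chaotic regime. Linearizing around the unstable fixed point x* = 1 − 1/r gives a parameter kick δr that cancels the deviation at the next step. This is a simplified sketch of the idea behind OGY control, not the general algorithm, and the parameter values are illustrative:

```python
r0 = 3.8                      # logistic map parameter in the chaotic regime
x_star = 1 - 1 / r0           # unstable fixed point of x -> r x (1 - x)
lam = 2 - r0                  # local multiplier df/dx at x*  (|lam| > 1: unstable)
dfdr = x_star * (1 - x_star)  # sensitivity df/dr at the fixed point

x = 0.345
for n in range(5000):
    dx = x - x_star
    dr = 0.0
    if abs(dx) < 0.01:                 # kick only when the orbit wanders close
        dr = -lam * dx / dfdr          # linearized kick sending the next iterate to x*
        dr = max(-0.1, min(0.1, dr))   # keep the perturbation tiny
    x = (r0 + dr) * x * (1 - x)

# After a chaotic transient, the orbit is captured and pinned near x_star.
```

The wait for the chaotic orbit to wander into the small control window, followed by capture, is exactly the behaviour the OGY method exploits.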
To start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor |
https://en.wikipedia.org/wiki/John%20Rushby | John Rushby (born 1949) is a British computer scientist now based in the United States and working for SRI International. He previously taught and did research for Manchester University and later Newcastle University.
Early life and education
John Rushby was born and brought up in London, where he attended Dartford Grammar School. He studied at Newcastle University in the United Kingdom, gaining his computer science BSc there in 1971 and his PhD in 1977.
Career
From 1974 to 1975, he was a lecturer in the Computer Science Department at Manchester University. From 1979 to 1982, he was a research associate in the Department of Computing Science at Newcastle University.
Rushby joined SRI International in Menlo Park, California in 1983. Currently he is Program Director for Formal Methods and Dependable Systems in the Computer Science Laboratory at SRI. He developed the Prototype Verification System, which is a theorem prover.
Awards and memberships
Rushby was the recipient of the 2011 Harlan D. Mills Award from the IEEE Computer Society. |
https://en.wikipedia.org/wiki/Deutschsprachige%20Mykologische%20Gesellschaft | Deutschsprachige Mykologische Gesellschaft (DMykG) e.V. (German-Speaking Mycological Society) is a recognised non-profit organisation. The society was founded in 1961 as a platform for all scientists of the German-speaking area who are interested in mycology from either a medical or a veterinary standpoint, i.e. medical mycology or veterinary mycology. Promoting science and research is a prime concern. The society is based in the city of Essen.
The society currently has about 500 members and organises yearly meetings, which are held over several days and dubbed Myk. Working parties for “clinical mycology” and “mycological laboratory diagnostics” make major contributions to the work of the society. The society's scientific organ is the internationally renowned journal Mycoses. Moreover, the society publishes a magazine, Mykologie Forum, which is distributed four times a year in a circulation of about 5,000 copies to members as well as further interested groups of physicians.
Mycological quality management forms a major part of the work of the society. In this context there is a focus on the preparation and updating of guidelines. Currently, the society provides, under the umbrella of the Arbeitsgemeinschaft der Medizinisch-Wissenschaftlichen Fachgesellschaften (AWMF), 6 guidelines (for the electronic version see www.awmf-leitlinien.de), namely “tinea of glabrous skin”, “onychomycosis”, “vulvovaginal candidosis”, “cutaneous candidosis”, “oral candidosis”, and “tinea capitis”. Just recently, English versions of the German-language guidelines have started to be published in Mycoses.
On international grounds Deutschsprachige Mykologische Gesellschaft cooperates with the International Society for Human and Animal Mycology (ISHAM) as well as the European Confederation of Medical Mycology (ECMM).
Promotion of the career of younger mycologists is a major concern. In this context a prize for the promotion of mycol |
https://en.wikipedia.org/wiki/Squared%20triangular%20number | In number theory, the sum of the first n cubes is the square of the nth triangular number. That is,

$$1^3 + 2^3 + 3^3 + \cdots + n^3 = \left(1 + 2 + 3 + \cdots + n\right)^2.$$

The same equation may be written more compactly using the mathematical notation for summation:

$$\sum_{k=1}^{n} k^3 = \left( \sum_{k=1}^{n} k \right)^2.$$
This identity is sometimes called Nicomachus's theorem, after Nicomachus of Gerasa (c. 60 – c. 120 CE).
History
Nicomachus, at the end of Chapter 20 of his Introduction to Arithmetic, pointed out that if one writes a list of the odd numbers, the first is the cube of 1, the sum of the next two is the cube of 2, the sum of the next three is the cube of 3, and so on. He does not go further than this, but from this it follows that the sum of the first n cubes equals the sum of the first n(n + 1)/2 odd numbers, that is, the odd numbers from 1 to n(n + 1) − 1. The average of these numbers is obviously n(n + 1)/2, and there are n(n + 1)/2 of them, so their sum is (n(n + 1)/2)².
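Nicomachus's observation is easy to check mechanically: partitioning the odd numbers into runs of length 1, 2, 3, … makes each run sum to a cube, so the running total is simultaneously a sum of cubes and a square of a triangular number. A quick numeric check, not a proof:

```python
odds = list(range(1, 200, 2))  # enough odd numbers for the first 9 cubes
pos = 0
total = 0
for n in range(1, 10):
    run = odds[pos:pos + n]        # the next n odd numbers
    assert sum(run) == n ** 3      # Nicomachus: each run sums to a cube
    total += sum(run)
    pos += n
    tri = n * (n + 1) // 2         # nth triangular number
    assert total == tri ** 2       # running total is a squared triangular number
```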
Many early mathematicians have studied and provided proofs of Nicomachus's theorem. claims that "every student of number theory surely must have marveled at this miraculous fact". finds references to the identity not only in the works of Nicomachus in what is now Jordan in the first century CE, but also in those of Aryabhata in India in the fifth century, and in those of Al-Karaji circa 1000 in Persia. mentions several additional early mathematical works on this formula, by Al-Qabisi (tenth century Arabia), Gersonides (circa 1300 France), and Nilakantha Somayaji (circa 1500 India); he reproduces Nilakantha's visual proof.
Numeric values; geometric and probabilistic interpretation
The sequence of squared triangular numbers is
These numbers can be viewed as figurate numbers, a four-dimensional hyperpyramidal generalization of the triangular numbers and square pyramidal numbers.
These numbers also count the number of rectangles with horizontal and vertical sides formed in an n × n grid. For instance, the points of a 4 × 4 grid (or a square made up of three smaller squares on a side) can form 36 different rectangles. The number of squares in a square gri |
https://en.wikipedia.org/wiki/Hologenomics | Hologenomics is the omics study of hologenomes. A hologenome is the whole set of genomes of a holobiont, an organism together with all co-habitating microbes, other life forms, and viruses. While the term hologenome originated from the hologenome theory of evolution, which postulates that natural selection occurs on the holobiont level, hologenomics uses an integrative framework to investigate interactions between the host and its associated species. Examples include gut microbe or viral genomes linked to human or animal genomes for host-microbe interaction research. Hologenomics approaches have also been used to explain genetic diversity in the microbial communities of marine sponges.
History
The origins of hologenomics revolve around the hologenome theory of evolution, which describes individual multicellular organisms, microbes, and viruses establishing symbiotic relationships and undergoing coevolution together. Richard Jefferson introduced the term 'hologenome' to describe the host-symbiont genome as an evolutionary unit. Prior to this, Lynn Margulis used the term 'holobiont' to describe hosts and their associated species as an ecological unit.
Eukaryotes-prokaryotes coevolution
The earliest evidence of multicellular-unicellular interactions is seen in sponges, which are a well-studied hologenomic system. Porifera are often described as holobionts because they harbor a wide range of bacteria, archaea, and algae. The microbial communities present have been observed to facilitate metabolic functions and immune responses. Offspring inherit these microbial colonies via vertical and/or horizontal transmission: symbiont colonies are transferred through parental gametes in vertical transmission, whereas offspring acquire the same colonies from their environment in horizontal transmission. Vertical transmission is also seen in terrestrial organisms like C. ocellatus, where gammaproteobacteria in the parental gut are vertically transferred through egg contamination.
Critici |
https://en.wikipedia.org/wiki/Somos%27%20quadratic%20recurrence%20constant | In mathematics, Somos' quadratic recurrence constant, named after Michael Somos, is the number

$$\sigma = \sqrt{1 \sqrt{2 \sqrt{3 \sqrt{4 \cdots}}}}.$$

This can be easily re-written into the far more quickly converging product representation

$$\sigma = 1^{1/2} \cdot 2^{1/4} \cdot 3^{1/8} \cdot 4^{1/16} \cdots,$$

which can then be compactly represented in infinite product form by:

$$\sigma = \prod_{k=1}^{\infty} k^{1/2^k} \approx 1.661687\ldots$$
The constant σ arises when studying the asymptotic behaviour of the sequence

$$g_0 = 1, \qquad g_n = n\, g_{n-1}^2, \qquad n \ge 1,$$

with first few terms 1, 1, 2, 12, 576, 1658880, ... . This sequence can be shown to have asymptotic behaviour as follows:

$$g_n \sim \frac{\sigma^{2^n}}{n}.$$
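Both the recurrence g₀ = 1, gₙ = n·gₙ₋₁² and the infinite product converge fast enough to check numerically; sixty factors of the product already pin σ to double precision (the reference decimal value in the test is the commonly quoted one):

```python
from math import prod  # Python 3.8+

# First terms of the recurrence g_0 = 1, g_n = n * g_{n-1}^2:
g = [1]
for n in range(1, 6):
    g.append(n * g[-1] ** 2)
# g is now [1, 1, 2, 12, 576, 1658880]

# sigma = product of k^(1/2^k); the tail beyond k ~ 60 is below double precision
sigma = prod(k ** (1.0 / 2 ** k) for k in range(1, 60))
```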
Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent:
where ln is the natural logarithm and Φ(z, s, q) is the Lerch transcendent.
Finally,
.
Notes |
https://en.wikipedia.org/wiki/Telephone%20call%20recording%20laws | Telephone call recording laws are legislation enacted in many jurisdictions, such as countries, states, and provinces, that regulate the practice of telephone call recording. Call recording or monitoring is permitted or restricted with various levels of privacy protection, law enforcement requirements, anti-fraud measures, or individual party consent.
Australia
The federal Telecommunications (Interception and Access) Act 1979 and State and Territory listening devices laws may both apply to monitoring or recording of telephone conversations. The general rule is that the call may not be recorded. Section 7 of the Telecommunications (Interception and Access) Act 1979 prohibits intercepting a telephone call. "Interception" is defined in section 6, of which one element is that it is made "without the knowledge of the person making the communication". There are exceptions to these rules in very limited circumstances, including where a warrant applies.
If a call is to be recorded or monitored, an organization must tell the other party at the beginning of the conversation so that it has the chance either to end the call, or to ask to be transferred to another line where monitoring or recording does not take place.
Reasons organizations may monitor or record conversations may include:
to protect a person's intent in dealings with the organization
to provide a record in the event of a dispute about a transaction
to improve customer service.
In the state of Queensland it is not illegal to record a telephone conversation by a party to the conversation.
Canada
Organizations
In Canada, organizations subject to the Personal Information Protection and Electronic Documents Act (PIPEDA) must comply with PIPEDA when recording calls.
In order to comply with the PIPEDA, organizations should take the following steps when recording conversations:
The individual must be informed that the conversation is being recorded at the beginning of the call. This can be done by an automated reco |
https://en.wikipedia.org/wiki/Hybrizyme | Hybrizyme is a term coined to indicate novel or normally rare gene variants (or alleles) that are associated with hybrid zones, geographic areas where two related taxa (e.g. species or subspecies) meet, mate, and produce hybrid offspring. The hybrizyme phenomenon is widespread, and these alleles occur commonly in most, if not all, hybrid zones. Initially considered to be caused by elevated rates of mutation in hybrids, the most probable hypothesis infers that they are the result of negative (purifying) selection. Namely, in the center of the hybrid zone, negative selection purges alleles against hybrid disadvantage (e.g. hybrid inviability or infertility). Stated differently, any allele that will decrease reproductive isolation is favored and any linked alleles (genetic markers) also increase their frequency by genetic hitchhiking. If the linked alleles used to be rare variants in the parental taxa, they will become more common in the area where the hybrids are formed.
Etymology
Originally hybrizymes were defined as "unexpected allelic electromorphs associated with hybrid zones", a formal term proposed by renowned conservation geneticist and biogeographer David S. Woodruff in 1988. By suggesting a new definition for a phenomenon that had been previously widely observed, Woodruff's interpretation bypasses the etiological connotation of alternative terms and avoids inappropriate context. Namely, previous studies referred to allozymes that were observed at high frequency in hybrid zones, but are absent or rare in parental taxa, as "the rare allele phenomenon". These alleles can have increased frequencies up to the point of the allele becoming the most common one in the hybrid zone, rendering the term "the rare allele phenomenon" deceptive. Despite this, these two terms have been used interchangeably in the literature.
Widespread phenomenon
Hybrid populations display the hybrizyme phenomenon by having increased frequencies of certain alleles that are rare or non-existent outside o |
https://en.wikipedia.org/wiki/Multinomial%20theorem | In mathematics, the multinomial theorem describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem from binomials to multinomials.
Theorem
For any positive integer $m$ and any non-negative integer $n$, the multinomial formula describes how a sum with $m$ terms expands when raised to the power $n$:
$$(x_1 + x_2 + \cdots + x_m)^n = \sum_{k_1 + k_2 + \cdots + k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} \prod_{t=1}^{m} x_t^{k_t},$$
where
$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}$$
is a multinomial coefficient. The sum is taken over all combinations of nonnegative integer indices $k_1$ through $k_m$ such that the sum of all $k_i$ is $n$. That is, for each term in the expansion, the exponents of the $x_i$ must add up to $n$. Also, as with the binomial theorem, quantities of the form $x^0$ that appear are taken to equal 1 (even when $x$ equals zero).
In the case $m = 2$, this statement reduces to that of the binomial theorem.
Example
The third power of the trinomial $a + b + c$ is given by
$$(a+b+c)^3 = a^3 + b^3 + c^3 + 3a^2b + 3a^2c + 3b^2a + 3b^2c + 3c^2a + 3c^2b + 6abc.$$
This can be computed by hand using the distributive property of multiplication over addition, but it can also be done (perhaps more easily) with the multinomial theorem. It is possible to "read off" the multinomial coefficients from the terms by using the multinomial coefficient formula. For example:
the term $a^2 b^0 c^1$ has the coefficient $\binom{3}{2,0,1} = \frac{3!}{2!\,0!\,1!} = 3$;
the term $a^1 b^1 c^1$ has the coefficient $\binom{3}{1,1,1} = \frac{3!}{1!\,1!\,1!} = 6$.
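The coefficients above can be computed mechanically. A minimal Python sketch (the helper name `multinomial` is mine, not part of the theorem's statement):

```python
from math import factorial
from itertools import product

def multinomial(*ks):
    """Multinomial coefficient n!/(k1! * k2! * ... * km!), where n = k1 + ... + km."""
    n = sum(ks)
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# Coefficients in the expansion of (a + b + c)^3:
print(multinomial(3, 0, 0))  # a^3 term   -> 1
print(multinomial(2, 0, 1))  # a^2 c term -> 3
print(multinomial(1, 1, 1))  # a b c term -> 6

# Sanity check: summing the coefficients over all exponent triples adding
# up to 3 is the same as setting a = b = c = 1, so the total must be 3^3 = 27.
total = sum(multinomial(i, j, k)
            for i, j, k in product(range(4), repeat=3) if i + j + k == 3)
print(total)  # -> 27
```

The same helper works for any number of variables, since the formula only depends on the list of exponents.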
Alternate expression
The statement of the theorem can be written concisely using multi-indices:
$$(x_1 + \cdots + x_m)^n = \sum_{|\alpha| = n} \binom{n}{\alpha} x^\alpha,$$
where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_m)$ is a multi-index with $|\alpha| = \alpha_1 + \cdots + \alpha_m$,
and
$$x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_m^{\alpha_m}.$$
Proof
This proof of the multinomial theorem uses the binomial theorem and induction on $m$.
First, for $m = 1$, both sides equal $x_1^n$, since there is only one term ($k_1 = n$) in the sum. For the induction step, suppose the multinomial theorem holds for $m$. Then
$$(x_1 + \cdots + x_m + x_{m+1})^n = \sum_{k_1 + \cdots + k_{m-1} + K = n} \binom{n}{k_1, \ldots, k_{m-1}, K}\, x_1^{k_1} \cdots x_{m-1}^{k_{m-1}} (x_m + x_{m+1})^K$$
by the induction hypothesis. Applying the binomial theorem to the last factor,
$$(x_m + x_{m+1})^K = \sum_{k_m + k_{m+1} = K} \binom{K}{k_m, k_{m+1}}\, x_m^{k_m} x_{m+1}^{k_{m+1}},$$
which completes the induction. The last step follows because
$$\binom{n}{k_1, \ldots, k_{m-1}, K} \binom{K}{k_m, k_{m+1}} = \binom{n}{k_1, \ldots, k_{m-1}, k_m, k_{m+1}},$$
as can easily be seen by writing the three coefficients using factorials as follows:
$$\frac{n!}{k_1! \cdots k_{m-1}!\, K!} \cdot \frac{K!}{k_m!\, k_{m+1}!} = \frac{n!}{k_1! \cdots k_{m-1}!\, k_m!\, k_{m+1}!}.$$
Multinomial coefficients
The numbers
$$\binom{n}{k_1, k_2, \ldots, k_m}$$
appearing in the theorem are the multinomial coefficients. They can be expressed in numerous ways, including as a product of binomial coefficients or of factorials:
$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!} = \binom{k_1}{k_1} \binom{k_1 + k_2}{k_2} \cdots \binom{k_1 + k_2 + \cdots + k_m}{k_m}.$$
Sum of all multino |
https://en.wikipedia.org/wiki/Hendecagon | In geometry, a hendecagon (also undecagon or endecagon) or 11-gon is an eleven-sided polygon. (The name hendecagon, from Greek hendeka "eleven" and –gon "corner", is often preferred to the hybrid undecagon, whose first part is formed from Latin undecim "eleven".)
Regular hendecagon
A regular hendecagon is represented by Schläfli symbol {11}.
A regular hendecagon has internal angles of approximately 147.27 degrees (exactly 147 3/11 degrees). The area of a regular hendecagon with side length $a$ is given by
$$A = \frac{11}{4} a^2 \cot\frac{\pi}{11} \approx 9.36564\, a^2.$$
As 11 is not a Fermat prime, the regular hendecagon is not constructible with compass and straightedge. Because 11 is not a Pierpont prime, construction of a regular hendecagon remains impossible even with the aid of an angle trisector.
Close approximations to the regular hendecagon can be constructed. For instance, the ancient Greek mathematicians approximated the side length of a hendecagon inscribed in a unit circle as being 14/25 units long.
The hendecagon can be constructed exactly via neusis construction and also via two-fold origami.
Approximate construction
The following construction description is given by T. Drummond from 1800:
"Draw the radius A B, bisect it in C—with an opening of the compasses equal to half the radius, upon A and C as centres describe the arcs C D I and A D—with the distance I D upon I describe the arc D O and draw the line C O, which will be the extent of one side of a hendecagon sufficiently exact for practice."
On a unit circle:
Constructed hendecagon side length: approximately 0.563692
Theoretical hendecagon side length: $2 \sin\frac{\pi}{11} \approx 0.563465$
Absolute error: approximately $2.27 \times 10^{-4}$; if the radius is 10 m, this error is approximately 2.3 mm.
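These error figures can be checked numerically. A small Python sketch; the value 0.563692 for the constructed side is assumed to be the length produced by Drummond's construction on the unit circle, and the ancient Greek 14/25 approximation is included for comparison:

```python
import math

theoretical = 2 * math.sin(math.pi / 11)  # exact side length on the unit circle
constructed = 0.563692                    # assumed result of Drummond's construction
greek = 14 / 25                           # ancient Greek approximation

radius_m = 10.0  # scale the unit-circle error up to a 10 m radius
drummond_error_mm = abs(constructed - theoretical) * radius_m * 1000
greek_error_mm = abs(greek - theoretical) * radius_m * 1000

print(round(theoretical, 6))        # -> 0.563465
print(round(drummond_error_mm, 1))  # -> 2.3 (mm)
print(round(greek_error_mm, 1))     # -> 34.7 (mm): far cruder than Drummond's method
```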
Symmetry
The regular hendecagon has Dih11 symmetry, order 22. Since 11 is a prime number there is one subgroup with dihedral symmetry: Dih1, and 2 cyclic group symmetries: Z11, and Z1.
These 4 symmetries can be seen in 4 distinct symmetries on the hendecagon. John Conway labels these by a letter and group order. Full symmetry of the regular form is r22 and no symmetry is labe |
https://en.wikipedia.org/wiki/Kurt%20Becker%20KG | Kurt Becker KG was a German manufacturer of die-cast miniature cars. The factory was located in Berlin, Germany.
History
Little is known about the history of Kurt Becker KG, other than that it was founded by the Berlin entrepreneur Kurt Becker, presumably shortly after World War II. As far as is known, Kurt Becker KG produced only one series of miniature cars, around 1947.
Racing cars
Around 1947 Kurt Becker KG produced a series of die-cast miniatures of the pre-war Auto Union racing car in 1:43 scale. The miniature was of the first racing car of the Auto Union series, the Type A, more specifically of the very rare 'long-tail' version. This car was raced on the AVUS in Berlin in 1934 by the German racing driver August Momberger. The choice of model was unusual, since the Auto Union racing cars had their heyday in the 1930s as the flagships of Nazi Germany, and never returned to the race tracks after the war.
This series of Auto Union racing cars by Kurt Becker KG was known as the B1300 series. The racing cars were available in six different color combinations:
Off-white/khaki color with red seat (matte paint)
Green with red seat (matte paint)
Dark red with off-white/khaki seat
Light red with blue seat
Dark blue with red seat (matte paint)
Light blue with red seat (matte paint)
Unusually for miniature cars, four of the color variations were finished in matte paint. The reason for this is believed to be that the miniatures were painted with left-over German military paint, which was practically the only paint available in early post-war Germany.
External links
Story about a box of unused old stock models by Kurt Becker KG discovered in Frankfurt around 1990
Information about and images of the original Auto Union Type A 'long-tail' racing car
Die-cast toys
Toy cars and trucks
Defunct manufacturing companies of Germany
Model manufacturers of Germany
Toy companies of Germany
Toy companies established in 1947
Ger |
https://en.wikipedia.org/wiki/Debugger | A debugger or debugging tool is a computer program used to test and debug other programs (the "target" program). The main use of a debugger is to run the target program under controlled conditions that permit the programmer to track its execution and monitor changes in computer resources that may indicate malfunctioning code. Typical debugging facilities include the ability to run or halt the target program at specific points, display the contents of memory, CPU registers or storage devices (such as disk drives), and modify memory or register contents in order to enter selected test data that might be a cause of faulty program execution.
The code to be examined might alternatively be running on an instruction set simulator (ISS), a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate (or the same) processor. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact.
A "trap" occurs when the program cannot normally continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger typically shows the location in the original code if it is a source-level debugger or symbolic debugger, commonly now seen in integrated development environments. If it is a low-level debugger or a machine-language debugger it shows the line in the disassembly (unless it also has online access to the original source code and can display the appropriate section of code from the assembly or compilation).
Features
Typically, debuggers offer a query processor, a symbol resolver, an expression interpreter, and a debug support interface at their top level. Debuggers also offer more sophisticated func
https://en.wikipedia.org/wiki/Snarl | A snarl is a sound, often a growl or vicious utterance, typically accompanied by a facial expression in which the upper lip is raised and the nostrils widen, generally indicating hate, anger or pain. In addition to humans, other mammals including monkeys, rabbits and dogs snarl, often to warn others of their potential bite. In humans, snarling uses the levator labii superioris alaeque nasi muscle. The threatening vocalizations of snarling are often accompanied by or used synonymously with threatening facial expressions.
The word "snarl" is also used as an onomatopoeia for the threatening noise to which it refers, as in the 'snarl' of a chainsaw. This usage may derive from the common expression describing a dog as "growling and snarling". One literary use of "snarl" to mean a noise is in The Lord of the Rings in the encounter with the barrow-wight: "In the dark there was a snarling noise". |
https://en.wikipedia.org/wiki/Australian%20Continuous%20Plankton%20Recorder%20Survey | The Australian Continuous Plankton Recorder (AusCPR) survey is a joint project of the CSIRO and the Australian Antarctic Division, DEWHA, to monitor plankton communities as a guide to the health of Australia's oceans.
Plankton respond rapidly to changes in the ocean environment compared to other marine animals such as fish, birds and mammals, which makes them ideal biological indicators of ecosystem change.
AusCPR was initiated in 2007 and funding has been secured initially for four years, although it is envisaged that the survey will continue well into the future.
The aims of the AusCPR survey are to:
map plankton biodiversity and distribution
develop the first long-term phytoplankton and zooplankton baseline for Australian waters
document plankton changes in response to climate change
provide indices for fisheries management
detect harmful algal blooms
validate satellite remote sensing
initialise and test ecosystem models
The AusCPR survey uses the Continuous Plankton Recorder (CPR), a device developed by pioneering British marine biologist Sir Alister Hardy. In 1931, this device formed the basis of the ongoing CPR survey of the North Atlantic. This survey is one of the longest running marine biological surveys in the world, and many climate related changes in the plankton have been observed over the past 70 years.
The design of the CPR has remained fundamentally unchanged over time; its simple yet robust construction is key to its success as an effective plankton sampler. Its success as a frequent, basin-scale sampler rests on the device being towed behind ships of opportunity (SOOPs) without accompanying scientists or research staff, making it a cost-effective sampling platform. The CPR is towed at about 10 metres below the surface for about 450 nautical miles (830 km) per 'tow'. The plankton enters a small opening in the device and is trapped and preserved between two layers of silk mesh. In the laboratory the silk is unrolled and phytoplank
https://en.wikipedia.org/wiki/Electroreception%20and%20electrogenesis | Electroreception and electrogenesis are the closely related biological abilities to perceive electrical stimuli and to generate electric fields. Both are used to locate prey; stronger electric discharges are used in a few groups of fishes (most famously the electric eel, which is not actually an eel but a knifefish) to stun prey. The capabilities are found almost exclusively in aquatic or amphibious animals, since water is a much better conductor of electricity than air. In passive electrolocation, objects such as prey are detected by sensing the electric fields they create. In active electrolocation, fish generate a weak electric field and sense the different distortions of that field created by objects that conduct or resist electricity. Active electrolocation is practised by two groups of weakly electric fish, the Gymnotiformes (knifefishes) and the Mormyridae (elephantfishes), and by Gymnarchus niloticus, the African knifefish. An electric fish generates an electric field using an electric organ, modified from muscles in its tail. The field is called weak if it is only enough to detect prey, and strong if it is powerful enough to stun or kill. The field may be in brief pulses, as in the elephantfishes, or a continuous wave, as in the knifefishes. Some strongly electric fish, such as the electric eel, locate prey by generating a weak electric field, and then discharge their electric organs strongly to stun the prey; other strongly electric fish, such as the electric ray, electrolocate passively. The stargazers are unique in being strongly electric but not using electrolocation.
The electroreceptive ampullae of Lorenzini evolved early in the history of the vertebrates; they are found in both cartilaginous fishes such as sharks, and in bony fishes such as coelacanths and sturgeons, and must therefore be ancient. Most bony fishes have secondarily lost their ampullae of Lorenzini, but other non-homologous electroreceptors have repeatedly evolved, including in two gr |
https://en.wikipedia.org/wiki/Sklyanin%20algebra | In mathematics, specifically the field of algebra, Sklyanin algebras are a class of noncommutative algebras named after Evgeny Sklyanin. This class of algebras was first studied in the classification of Artin–Schelter regular algebras of global dimension 3 in the 1980s. Sklyanin algebras can be grouped into two different types, the non-degenerate Sklyanin algebras and the degenerate Sklyanin algebras, which have very different properties. The need to understand the non-degenerate Sklyanin algebras better has led to the development of the study of point modules in noncommutative geometry.
Formal definition
Let $k$ be a field containing a primitive cube root of unity. Let $\mathfrak{D}$ be the following subset of the projective plane $\mathbb{P}^2_k$:
$$\mathfrak{D} = \{[1:0:0],\, [0:1:0],\, [0:0:1]\} \cup \{[a:b:c] : a^3 = b^3 = c^3\}.$$
Each point $[a:b:c] \in \mathbb{P}^2_k$ gives rise to a (quadratic 3-dimensional) Sklyanin algebra,
$$S_{a,b,c} = k\langle x, y, z \rangle / (f_1, f_2, f_3),$$
where
$$f_1 = ayz + bzy + cx^2, \quad f_2 = azx + bxz + cy^2, \quad f_3 = axy + byx + cz^2.$$
Whenever $[a:b:c] \in \mathfrak{D}$ we call $S_{a,b,c}$ a degenerate Sklyanin algebra, and whenever $[a:b:c] \in \mathbb{P}^2 \setminus \mathfrak{D}$ we say the algebra is non-degenerate.
Properties
The non-degenerate case shares many properties with the commutative polynomial ring $k[x,y,z]$, whereas the degenerate case enjoys almost none of these properties. Generally the non-degenerate Sklyanin algebras are more challenging to understand than their degenerate counterparts.
Properties of degenerate Sklyanin algebras
Let $S_{\mathrm{deg}}$ be a degenerate Sklyanin algebra.
$S_{\mathrm{deg}}$ contains non-zero zero divisors.
The Hilbert series of $S_{\mathrm{deg}}$ is $\frac{1+t}{1-2t}$.
Degenerate Sklyanin algebras have infinite Gelfand–Kirillov dimension.
$S_{\mathrm{deg}}$ is neither left nor right Noetherian.
$S_{\mathrm{deg}}$ is a Koszul algebra.
Degenerate Sklyanin algebras have infinite global dimension.
Properties of non-degenerate Sklyanin algebras
Let $S$ be a non-degenerate Sklyanin algebra.
$S$ contains no non-zero zero divisors.
The Hilbert series of $S$ is $\frac{1}{(1-t)^3}$.
Non-degenerate Sklyanin algebras are Noetherian.
$S$ is Koszul.
Non-degenerate Sklyanin algebras are Artin–Schelter regular. Therefore, they have global dimension 3 and Gelfand–Kirillov dimension 3.
There exists a normal central element in every non-degenerate Sklyanin algebra.
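The two Hilbert series can be compared coefficient by coefficient. A small Python sketch (helper names are mine): both types of algebra have 3 generators and a 6-dimensional degree-2 component, but the degenerate series grows exponentially (consistent with infinite Gelfand–Kirillov dimension), while the non-degenerate series matches the polynomial ring $k[x,y,z]$:

```python
from math import comb

def degenerate_dim(n):
    # Coefficients of (1 + t)/(1 - 2t): 1, 3, 6, 12, 24, ...
    return 1 if n == 0 else 3 * 2 ** (n - 1)

def nondegenerate_dim(n):
    # Coefficients of 1/(1 - t)^3, i.e. the dimension of the
    # degree-n part of the polynomial ring k[x, y, z].
    return comb(n + 2, 2)

print([degenerate_dim(n) for n in range(5)])     # -> [1, 3, 6, 12, 24]
print([nondegenerate_dim(n) for n in range(5)])  # -> [1, 3, 6, 10, 15]
```

The two sequences first differ in degree 3, which is where the quadratic relations begin to impose different amounts of collapsing.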
Examples
Degenerate Skly |
https://en.wikipedia.org/wiki/Azoxystrobin | Azoxystrobin is a broad spectrum systemic fungicide widely used in agriculture to protect crops from fungal diseases. It was first marketed in 1996 using the brand name Amistar and by 1999 it had been registered in 48 countries on more than 50 crops. In the year 2000 it was announced that it had been granted UK Millennium product status.
History
In 1977, academic research groups in Germany published details of two new antifungal antibiotics they had isolated from the basidiomycete fungus Strobilurus tenacellus. They named these strobilurin A and B but did not provide detailed structures, only data based on their high-resolution mass spectra, which showed that the simpler of the two had molecular formula C16H18O3. In the following year, further details including structures were published and a related fungicide, oudemansin A from the fungus Oudemansiella mucida, whose identity had been determined by X-ray crystallography, was disclosed.
When the fungicidal effects were shown to stem from what was then a novel mode of action, chemists at the Imperial Chemical Industries (ICI) research site at Jealott's Hill became interested in using them as leads to develop new fungicides suitable for use in agriculture. The first task was to synthesize a sample of strobilurin A for testing. In doing so, it was discovered that the published structure was incorrect in the stereochemistry of one of the double bonds: the strobilurins, in fact, have the E,Z,E not E,E,E configuration. Once this was realised and the correct material was made and tested, it was shown, as expected, to be active in vitro but insufficiently stable to light to be active in the glasshouse. A large programme of chemistry to make analogues began after it was discovered that a new stilbene structure containing the β-methoxyacrylate portion (shown in blue and believed to be the toxophore) had good activity in glasshouse tests but still lacked sufficient photostability. After more than 1400 analogu
https://en.wikipedia.org/wiki/Carry%20flag | In computer processors the carry flag (usually indicated as the C flag) is a single bit in a system status register/flag register used to indicate when an arithmetic carry or borrow has been generated out of the most significant arithmetic logic unit (ALU) bit position. The carry flag enables numbers larger than a single ALU width to be added/subtracted by carrying (adding) a binary digit from a partial addition/subtraction to the least significant bit position of a more significant word. This is typically programmed by the user of the processor on the assembly or machine code level, but can also happen internally in certain processors, via digital logic or microcode, where some processors have wider registers and arithmetic instructions than the (combinatorial, or "physical") ALU. It is also used to extend bit shifts and rotates in a similar manner on many processors (sometimes done via a dedicated flag). For subtractive operations, two (opposite) conventions are employed: most machines set the carry flag on borrow, while some machines (such as the 6502 and the PIC) instead reset the carry flag on borrow (and vice versa).
Uses
The carry flag is affected by the result of most arithmetic (and typically several bit wise) instructions and is also used as an input to many of them. Several of these instructions have two forms which either read or ignore the carry. In assembly languages these instructions are represented by mnemonics such as ADD/SUB, ADC/SBC (ADD/SUB including carry), SHL/SHR (bit shifts), ROL/ROR (bit rotates), RCR/RCL (rotate through carry), and so on. The use of the carry flag in this manner enables multi-word add, subtract, shift, and rotate operations.
An example is what happens if one were to add 255 and 255 using 8-bit registers. The result should be 510 which is the 9-bit value 111111110 in binary. The 8 least significant bits always stored in the register would be 11111110 binary (254 decimal) but since there is carry out of bit 7 (the eighth bit)
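The multi-word technique described above can be sketched in software. A minimal Python model of an 8-bit ALU with a carry flag (the function names `add8` and `add16` are mine):

```python
def add8(a, b, carry_in=0):
    """8-bit ADD/ADC: returns (result, carry_flag)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

# 255 + 255 = 510: the register keeps 254 and the carry flag is set.
lo, carry = add8(255, 255)
print(lo, carry)  # -> 254 1

def add16(x, y):
    """16-bit addition built from two 8-bit adds, carrying between words."""
    lo, c = add8(x & 0xFF, y & 0xFF)          # ADD on the low bytes
    hi, c = add8(x >> 8, y >> 8, carry_in=c)  # ADC on the high bytes
    return (hi << 8) | lo, c

print(add16(0x01FF, 0x0001))  # -> (512, 0), i.e. 0x0200 with no final carry
```

This mirrors the ADD/ADC instruction pairing: the first addition produces the carry, and the second consumes it.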
https://en.wikipedia.org/wiki/Broadwell%20%28microarchitecture%29 | Broadwell is the fifth generation of the Intel Core processor. It is Intel's codename for the 14 nanometer die shrink of its Haswell microarchitecture. It is a "tick" in Intel's tick–tock principle as the next step in semiconductor fabrication. Like some of the previous tick-tock iterations, Broadwell did not completely replace the full range of CPUs from the previous microarchitecture (Haswell), as there were no low-end desktop CPUs based on Broadwell.
Some of the processors based on the Broadwell microarchitecture are marketed as "5th-generation Core" i3, i5 and i7 processors. This moniker is however not used for marketing of the Broadwell-based Celeron, Pentium or Xeon chips. This microarchitecture also introduced the Core M processor branding.
Broadwell is the last Intel platform on which Windows 7 is supported by either Intel or Microsoft; however, third-party hardware vendors have offered limited Windows 7 support on more recent platforms.
Broadwell's H and C variants are used in conjunction with Intel 9 Series chipsets (Z97, H97 and HM97), in addition to retaining backward compatibility with some of the Intel 8 Series chipsets.
Design and variants
Broadwell has been launched in three major variants:
BGA package:
Broadwell-Y: system on a chip (SoC); 4.5 W and 3.5 W thermal design power (TDP) classes, for tablets and certain ultrabook-class implementations. A GT2 GPU was used, while the maximum supported memory is 8 GB of LPDDR3-1600. These were the first chips to roll out, in Q3/Q4 2014. At Computex 2014, Intel announced that these chips would be branded as Core M. TSX instructions are disabled in this series of processors because of a bug that cannot be fixed with a microcode update.
Broadwell-U: SoC; two TDP classes 15 W for 2+2 and 2+3 configurations (two cores with a GT2 or GT3 GPU) as well as 28 W for 2+3 configurations. Designed to be used on motherboards with the PCH-LP chipset for Intel's ultrabook and NUC platforms. Maximum supported is up to 16 |
https://en.wikipedia.org/wiki/Patterned%20media | Patterned media (also known as bit-patterned media or BPM) is a potential future hard disk drive technology to record data in magnetic islands (one bit per island), as opposed to current hard disk drive technology where each bit is stored in within a continuous magnetic film. The islands would be patterned from a precursor magnetic film using nanolithography. It is one of the proposed technologies to succeed perpendicular recording due to the greater storage densities it would enable. BPM was introduced by Toshiba in 2010.
Comparison with existing HDD technology
In existing hard disk drives, data is stored in a thin magnetic film. This film is deposited so that it consists of isolated (weakly exchange coupled) grains of material of nanometre-scale diameter. One bit of data consists of a group of such grains that are magnetized in the same direction (either "up" or "down", with respect to the plane of the disk). One method of increasing storage density has been to reduce the average grain volume. However, the energy barrier for thermal switching is proportional to the grain volume. With existing materials, further reductions in the grain volume would result in data loss occurring spontaneously due to superparamagnetism.
In patterned media, the thin magnetic film is first deposited so there is strong exchange coupling between the grains. Using nanolithography, it is then patterned into magnetic islands. The strong exchange coupling means that the energy barrier is now proportional to the island volume, rather than the volume of individual grains within the island. Therefore, storage density increases can be achieved by patterning islands of increasingly small diameter, whilst maintaining thermal stability. Patterned media is predicted to enable areal densities well beyond the limit that exists with current HDD technology.
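The thermal-stability argument can be made quantitative with the ratio of anisotropy energy to thermal energy, Δ = KuV/kBT; a common rule of thumb asks for Δ of roughly 60 or more for ten-year data retention. A hedged Python sketch with illustrative, not sourced, material values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stability_factor(k_u, diameter_m, temperature_k=300.0):
    """Delta = K_u * V / (k_B * T) for a spherical grain (illustrative model)."""
    volume = (math.pi / 6.0) * diameter_m ** 3  # sphere of the given diameter
    return k_u * volume / (K_B * temperature_k)

K_U = 3e5  # J/m^3, an assumed anisotropy constant for illustration only

# Shrinking the grain diameter collapses the barrier: all of these are
# well below the ~60 rule of thumb, illustrating the superparamagnetic limit.
for d_nm in (6, 8, 10):
    print(d_nm, "nm ->", round(stability_factor(K_U, d_nm * 1e-9), 1))
```

Because Δ scales with the cube of the diameter, patterning larger strongly coupled islands (whose whole volume contributes to the barrier) restores stability that isolated small grains cannot provide.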
Differences in read/write head control strategies
In existing HDDs data bits are ideally written on concentric circular tracks. This process is different in |
https://en.wikipedia.org/wiki/List%20of%20genetic%20hybrids | This is a list of genetic hybrids which is limited to well documented cases of animals of differing species able to create hybrid offspring which may or may not be infertile.
Hybrids should not be confused with genetic chimeras, such as that between sheep and goat known as the geep. Wider interspecific hybrids can be made via in vitro fertilization or somatic hybridization, however the resulting cells are not able to develop into a full organism.
Nomenclature
The naming of hybrid animals depends on the sex and species of the parents: the father gives the first half of his species' name and the mother the second half of hers. (For example, a pizzly bear has a polar bear father and a grizzly bear mother, whereas a grolar bear's parents would be reversed.)
Animals
Phylum Chordata
Chordate
Class Chondrichthyes
Order Carcharhiniformes
Family Carcharhinidae
Genus Carcharhinus
A group of about 50 hybrids between the Australian blacktip shark and the larger common blacktip shark was found off Australia's east coast in 2012. This is the only known case of hybridization in sharks.
Class Actinopterygii
Order Acipenseriformes
In 2020 hybrids were announced from different families of fish, American paddlefish (Polyodon spathula) and Russian sturgeon (Acipenser gueldenstaedtii). Accidentally created by Hungarian scientists, they are dubbed "sturddlefish."
Order Cichliformes
Family Cichlidae
Blood parrot cichlid, which is probably created by breeding a redhead cichlid and a Midas cichlid (Amphilophus citrinellus) or red devil cichlid (Amphilophus labiatus). It was bred in 1986 in Taiwan.
Order Perciformes
Family Centrarchidae
Subfamily Lepominae
Genus Lepomis
Greengill sunfish, a hybrid between a bluegill (Lepomis macrochirus) and green sunfish (Lepomis cyanellus).
Pumpkinseed x bluegill sunfish, a hybrid between a pumpkinseed (Lepomis gibbosus) and a bluegill (Lepomis macrochirus).
Class Amphibia
Order Urodela
Family Ambystomatidae
Genus Ambystoma
In 2007 hybrids of a C |
https://en.wikipedia.org/wiki/Stream%20thrust%20averaging | In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It assumes that the flow mixes adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average, as a simple average would violate the second law of thermodynamics.
Equations for a perfect gas
Stream thrust: $F = \int \left( \rho \mathbf{V} \cdot d\mathbf{A} \right) V + \int p \, dA$
Mass flow: $\dot{m} = \int \rho \mathbf{V} \cdot d\mathbf{A}$
Stagnation enthalpy: $H = \frac{1}{\dot{m}} \int \left( \rho \mathbf{V} \cdot d\mathbf{A} \right) \left( h + \frac{V^2}{2} \right)$
Solutions
Solving for the stream thrust averaged velocity yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied.
Second law of thermodynamics: the selected solution must not decrease the entropy of the system.
The reference values in the entropy expression are unknown and may be dropped from the formulation; the absolute value of the entropy is not needed, only the sign of the entropy change.
One candidate solution for the stream thrust averaged velocity yields a negative entropy change and is therefore unphysical. Another method of determining the proper solution is to take a simple average of the velocity and determine which of the two roots is closer to it.
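Under the one-dimensional perfect-gas model, with $F = \dot{m}\bar{V} + \bar{p}A$, $\dot{m} = \bar{\rho}\bar{V}A$, $H = c_p\bar{T} + \bar{V}^2/2$ and $p = \rho R T$, eliminating $\bar{p}$, $\bar{\rho}$ and $\bar{T}$ gives a quadratic in the averaged velocity: $\left(1 - \frac{R}{2c_p}\right)\bar{V}^2 - \frac{F}{\dot{m}}\bar{V} + \frac{R}{c_p}H = 0$. A hedged Python sketch of this elimination (my derivation and naming, not the article's notation):

```python
import math

def averaged_velocity(F, mdot, H, R=287.0, cp=1004.5):
    """Both roots of the stream-thrust-averaging quadratic for a perfect gas.
    Returns (smaller, larger); one root is subsonic, the other supersonic."""
    a = 1.0 - R / (2.0 * cp)
    b = -F / mdot
    c = (R / cp) * H
    disc = math.sqrt(b * b - 4.0 * a * c)
    return (-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)

# Consistency check: a uniform flow must be recovered exactly.
# Take air (R = 287, cp = 1004.5) at T = 300 K moving at V = 100 m/s.
V, T, cp, R = 100.0, 300.0, 1004.5, 287.0
F_per_mdot = V + R * T / V      # F/mdot for a uniform stream (p*A = mdot*R*T/V)
H = cp * T + V * V / 2.0        # stagnation enthalpy of the uniform stream
v_sub, v_super = averaged_velocity(F_per_mdot, 1.0, H)
print(round(v_sub, 6))  # -> 100.0 (the physical, subsonic root)
```

The physical root would then be selected by requiring a non-negative entropy change, or by comparison with a simple velocity average, as described above.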
https://en.wikipedia.org/wiki/Metabolic%20imprinting | Metabolic imprinting refers to the long-term physiological and metabolic effects of an offspring's prenatal and early postnatal environment. Perinatal nutrition has been identified as a significant factor in determining an offspring's likelihood of being predisposed to developing cardiovascular disease, obesity, and type 2 diabetes, among other conditions.
During pregnancy, maternal glucose can cross the blood-placental barrier, meaning that maternal hyperglycaemia is associated with foetal hyperglycaemia. Maternal insulin, however, cannot cross the barrier, so the foetus has to make its own. As a result, if a mother is hyperglycaemic, the foetus is likely to be hyperinsulinaemic, which leads to increased growth and adiposity.
Maternal undernutrition
Maternal undernutrition has been linked with low birth weight and also a number of diseases, including Cardiovascular disease, stroke, hypertension and diabetes. When a foetus is in the womb and is not receiving sufficient nutrition, it can adapt to prioritize organ growth and increased metabolic efficiency to prepare itself for life in an energy deficient environment. Postnatally, when given the correct nutrition, babies exhibit ‘catch up growth’, potentially leading to obesity and other related complications. Studies based around restricting animals food intake throughout gestation have discovered that a reduction of just 30% of normal intake can cause low birth weight and increase sensitivity to high-fat-diet induced obesity.
In animal models, intrauterine undernutrition has been shown to be associated with hypertension later in life. This is because the formation of the kidneys is inhibited, which decreases filtration and flow rate through the nephrons, leading to increased blood pressure.
More extreme prenatal conditions such as famine have been shown to have effects on the neurodevelopment of a foetus. After the Dutch Famine of the |
https://en.wikipedia.org/wiki/Low%20%28computability%29 | In computability theory, a Turing degree [X] is low if the Turing jump [X′] is 0′. A set is low if it has low degree. Since every set is computable from its jump, any low set is computable in 0′, but the jump of sets computable in 0′ can bound any degree recursively enumerable in 0′ (Shoenfield jump inversion). X being low says that its jump X′ has the least possible degree in terms of Turing reducibility for the jump of a set.
There are various related properties to low degrees:
A degree is low_n if its nth jump is the nth jump of 0.
A set X is generalized low if it satisfies X′ ≡T X + 0′, that is: if its jump has the lowest degree possible.
A degree d is generalized low_n if its nth jump is the (n-1)st jump of the join of d with 0′.
More generally, properties of sets which describe their being computationally weak (when used as a Turing oracle) are referred to under the umbrella term lowness properties.
By the low basis theorem of Jockusch and Soare, any nonempty $\Pi^0_1$ class in $2^\omega$ contains a set of low degree. This implies that, although low sets are computationally weak, they can still accomplish such feats as computing a completion of Peano Arithmetic. In practice, this allows a restriction on the computational power of objects needed for recursion-theoretic constructions: for example, those used in analyzing the proof-theoretic strength of Ramsey's theorem.
See also
High (computability)
Low Basis Theorem |
https://en.wikipedia.org/wiki/Computer%20tower | In personal computing, a tower is a form factor of desktop computer case whose height is much greater than its width, thus having the appearance of an upstanding tower block, as opposed to a traditional "pizza box" computer case whose width is greater than its height and appears lying flat.
Compared to a pizza box case, the tower tends to be larger and offers more potential for internal volume for the same desk area occupied, and therefore allows more hardware installation and theoretically better airflow for cooling. Multiple size subclasses of the tower form factor have been established to differentiate their varying heights, including full-tower, mid-tower, midi-tower and mini-tower; these classifications are however nebulously defined and inconsistently applied by different manufacturers.
Although the traditional layout for a tower system is to have the case placed on top of the desk alongside the monitor and other peripherals, a far more common configuration is to place the case on the floor below the desk or in an under-desk compartment, in order to free up desktop space for other items. Computer systems housed in the horizontal "pizza box" form factor — once popularized by the IBM PC in the 1980s but fallen out of mass use since the late 1990s — have been given the term desktops to contrast them with the often underdesk-situated towers.
Subclasses
Tower cases are often categorized as mini-tower, midi-tower, mid-tower, or full-tower. The terms are subjective and inconsistently defined by different manufacturers.
Full-tower
Full-tower cases, the tallest of the subclasses, are designed for maximum scalability. For case-modding enthusiasts and gamers wanting to play the most technically demanding video games, the full-tower case also makes an ideal gaming PC case because of its ability to accommodate extensive water-cooling setups and larger case fans. Traditionally, full-tower systems had between four and six externally accessible half-height 5.25-
https://en.wikipedia.org/wiki/JMesh | JMesh is a JSON-based portable and extensible file format for the storage and interchange of unstructured geometric data, including discretized geometries such as triangular and tetrahedral meshes, parametric geometries such as NURBS curves and surfaces, and constructive geometries such as constructive solid geometry (CSG) of shape primitives and meshes. Built upon the JData specification, a JMesh file utilizes JSON and Universal Binary JSON (UBJSON) constructs to serialize and encode geometric data structures, so it can be directly processed by most existing JSON and UBJSON parsers. The JMesh specification defines a list of JSON-compatible constructs to encode geometric data, including N-dimensional (ND) vertices, curves, surfaces, solid elements, shape primitives, their interactions (such as CSG operations) and spatial relations, together with their associated properties, such as numerical values, colors, normals, materials, textures and other properties related to graphics data manipulation, 3-D fabrication, computer graphics rendering and animations.
JMesh File Example
The following example is a tetrahedral mesh of a unit cube, containing 8 3-D vertices, 12 triangular faces and 6 tetrahedral elements.
This mesh can be stored in the JMesh/JSON format as
{
"_DataInfo_":{
"JMeshVersion":0.5,
"CreationTime":"19-Dec-2021 11:53:43",
"Comment":"Created by iso2mesh 1.9.5-Rev(http:\/\/iso2mesh.sf.net)"
},
"MeshVertex3":[
[0,0,0],
[1,0,0],
[0,1,0],
[1,1,0],
[0,0,1],
[1,0,1],
[0,1,1],
[1,1,1]
],
"MeshTri3":[
[1,2,4],
[1,2,6],
[1,3,4],
[1,3,7],
[1,5,6],
[1,5,7],
[2,8,4],
[2,8,6],
[3,8,4],
[3,8,7],
[5,8,6],
[5,8,7]
],
"MeshTet4":[
[1,2,4,8],
[1,3,4,8],
[1,2,6,8],
[1,5,6,8],
[1,3,7,8],
[1,5,7,8]
]
}
The optional "_DataInfo_" record can contain additional metadata according to JData specification.
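Since a JMesh file is plain JSON, the example above can be loaded with any standard JSON parser. The following Python sketch (standard library only; the abbreviated metadata string is illustrative) reads the cube mesh and resolves the first tetrahedron's 1-based node indices to coordinates:

```python
import json

# The JMesh document from the example above (unit-cube tetrahedral mesh),
# pasted as a plain JSON string with an abbreviated "_DataInfo_" record.
jmesh_text = """
{
  "_DataInfo_": {"JMeshVersion": 0.5, "Comment": "unit cube example"},
  "MeshVertex3": [
    [0,0,0],[1,0,0],[0,1,0],[1,1,0],
    [0,0,1],[1,0,1],[0,1,1],[1,1,1]
  ],
  "MeshTri3": [
    [1,2,4],[1,2,6],[1,3,4],[1,3,7],[1,5,6],[1,5,7],
    [2,8,4],[2,8,6],[3,8,4],[3,8,7],[5,8,6],[5,8,7]
  ],
  "MeshTet4": [
    [1,2,4,8],[1,3,4,8],[1,2,6,8],[1,5,6,8],[1,3,7,8],[1,5,7,8]
  ]
}
"""

mesh = json.loads(jmesh_text)

# JMesh node indices are 1-based, so subtract 1 when looking up
# vertex coordinates for an element.
verts = mesh["MeshVertex3"]
first_tet = [verts[i - 1] for i in mesh["MeshTet4"][0]]

print(len(verts), len(mesh["MeshTri3"]), len(mesh["MeshTet4"]))  # 8 12 6
print(first_tet)
```

Because the constructs are ordinary JSON arrays, the same structure can be serialized to UBJSON without changing the data model.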
Instead of using dimension-specific mesh data constructs, i.e. MeshVertex3, MeshTri3, and MeshTet4, one can also r |
https://en.wikipedia.org/wiki/Ideographic%20Research%20Group | The Ideographic Research Group (IRG), formerly called the Ideographic Rapporteur Group, is a subgroup of the ISO/IEC Joint Technical Committee, responsible for developing aspects of The Unicode Standard pertaining to CJK unified ideographs. The IRG is composed of representatives from the Unicode Consortium, as well as experts from China, Japan, South Korea, Vietnam, and other regions that have historically used Chinese characters. The group holds two meetings every year, each lasting 4-5 days, and subsequently reports its activities to its parent ISO/IEC JTC 1/SC 2 (WG2) committee.
History
The precursor to the IRG was the CJK Joint Research Group (CJK-JRG), established in 1990. In October 1993, this group was re-established with its present initials as a subgroup of WG2. In June 2019, the subgroup acquired its current name.
The IRG rapporteur from 1993 to 2004 was Zhang Zhoucai, who had been convenor and chief editor of CJK-JRG from 1990 to 1993. Since 2004, the IRG rapporteur has been Hong Kong Polytechnic University professor Lu Qin. In June 2018, the title of "rapporteur" was changed to "convenor".
Overview
The IRG is responsible for reviewing proposals to add new CJK unified ideographs to the Universal Multiple-Octet Coded Character Set (ISO/IEC 10646), and equivalently the Unicode Standard, and submitting consolidated proposals for sets of unified ideographs to WG2, which are then processed for encoding in the respective standards by SC2 and the Unicode Technical Committee. National and liaison bodies represented in IRG include China, Hong Kong and Macau, Japan, North and South Korea, Singapore, the Taipei Computer Association as representatives on Taiwan, the United Kingdom, Vietnam, and the Unicode Consortium.
As of Unicode version 15.1, the IRG has been responsible for submitting the following blocks of CJK unified and compatibility ideographs for encoding:
CJK Unified Ideographs and CJK Compatibility Ideographs (version 1.0)
CJ |
https://en.wikipedia.org/wiki/Epigenetics%20%26%20Chromatin | Epigenetics & Chromatin is a peer-reviewed open access scientific journal published by BioMed Central that covers the biology of epigenetics and chromatin.
Scope
Epigenetics & Chromatin is a peer-reviewed open access scientific journal that publishes research related to epigenetic inheritance and chromatin-based interactions. First published in 2008 by BioMed Central, its overall aim is to understand the regulation of gene and chromosomal elements during cell division and cell differentiation, and in response to alterations in the environment. To date, 13 volumes have been published.
Usage
As of October 2020, there have been over 340,000 downloads and over 750 Altmetric mentions.
Metrics
Impact factor
According to Journal Citation Reports, it received an impact factor of 4.237 in 2019. Its current SCImago Journal Rank is 2.449.
Citation impact
Its 2-year and 5-year citation impact factors are 4.237 and 4.763, respectively. Its Source Normalized Impact per Paper (SNIP) is 0.896.
Its current Journal Authority Factor (JAF) is 111.5.
Editors
Editors-in-Chief
Editorial Board
Submission guidelines
Current submission guidelines are as follows:
Prior to submitting
Submitters must ensure that Epigenetics & Chromatin is the most suitable journal for the proposed article, in addition to understanding the costs, funding options, and copyright agreement associated with submission. Accuracy and readability of the manuscript must also be considered.
During the submission process
The manuscript must follow all formatting rules which authors must read, understand and accept.
After successful submission
Authors should review the peer-review policy. Authors should also be familiar with the process of manuscript transfers to a different journal, as well as how to promote the publication.
Speed of the submission process
On average, it takes 53 days to reach a decision for reviewed manuscripts and 35 days for all manuscripts. The process of acceptance takes |
https://en.wikipedia.org/wiki/1900%20English%20beer%20poisoning | In 1900, more than 6,000 people in England were poisoned by arsenic-tainted beer, with more than 70 of the affected dying as a result. The food safety crisis was caused by arsenic entering the supply chain through impure sugar which had been made with contaminated sulphuric acid. The illness was prevalent across the Midlands and North West England, with Manchester being the most heavily affected.
Originally misdiagnosed as alcoholic neuropathy, the main epidemic was only recognised after several months. Additionally, investigation into the outbreak found other sources of arsenic in beer, which had been unknowingly poisoning thousands in decades preceding the outbreak.
Misdiagnosis and investigation
This mass poisoning is unusual in that it was not noticed for four months. The doctors, seeing patients who were usually heavy drinkers and who showed muscle weakness and numbness of the hands or feet, initially thought that the patients had "alcoholic neuritis". Nevertheless, a marked increase in the number of cases was noted, with 41 people succumbing to peripheral neuritis, multiple neuritis or alcoholic neuritis and 66 people perishing from alcoholism in the four months of the outbreak, while the previous seven months had seen only 22.
These cases of neuritis were eventually connected to cases of skin discolouration previously thought to be unrelated. Ernest Reynolds, the doctor responsible for making the connection, also noted that only one substance would cause these symptoms: arsenic. He also noted that heavy drinkers who drank mainly spirits seemed less affected than beer drinkers. He gathered samples for analysis from the taverns frequented by his patients, which confirmed the presence of arsenic in the beer they consumed.
Source of the poisoning
Once the affected breweries were identified, an investigation into where the arsenic came from was instituted. It was found that the arsenic was present in invert sugar provided to the breweries by Bostock & Co. of Ga
https://en.wikipedia.org/wiki/Theresa%20M.%20Reineke | Theresa M. Reineke (born January 1, 1972) is an American chemist and Distinguished McKnight University Professor at the University of Minnesota. She designs sustainable, environmentally friendly polymer-based delivery systems for targeted therapeutics. She is the associate editor of ACS Macro Letters.
Early life and education
Reineke earned her bachelor's degree at the University of Wisconsin–Eau Claire. She moved to Arizona State University for her graduate studies and earned a master's degree in 1998. Reineke was a PhD student at the University of Michigan, where she was supervised by Michael O'Keeffe and Omar M. Yaghi. She was awarded the Wirt and Mary Cornell Prize for Outstanding Graduate Research. Reineke joined the California Institute of Technology as a National Institutes of Health postdoctoral fellow in 2000.
Research and career
Reineke joined the University of Minnesota in 2011. Her research group focuses on the design, characterisation and functionalisation of macromolecular systems. These macromolecules include biocompatible polymers that can deliver DNA for regenerative medicine as well as targeted therapeutic treatments. She was made a Lloyd H. Reyerson Professor with tenure at the University of Minnesota in 2011. Reineke has published over 140 papers.
Nucleic acids can have an unparalleled specificity for targets inside a cell, but need to be compacted into nanostructures (polyplexes) that can enter cells. Reineke designs polymer-based transportation systems for nucleic acids. These polymer vehicles can improve the solubility and bioavailability of drugs. These often incorporate carbohydrates, which have an affinity for polyplexes and are non-toxic. She is a member of the University of Minnesota Centre for Sustainable Polymers, synthesising polymers from sustainable ingredients. The carbohydrate units within her polymer drug delivery systems are a widely available, renewable resource. The sustainable polymers designed by Reineke include poly(ester-th |
https://en.wikipedia.org/wiki/Paxo | Paxo is a brand of stuffing in the United Kingdom, currently owned by Premier Foods.
Paxo was devised in 1901 by John Crampton, a butcher from Eccles near Manchester, who wanted to have something extra to sell to his customers shopping for their Sunday lunch menus.
In the beginning sales growth of Paxo was slow because stuffing is mainly served with chickens and poultry was then traditionally regarded as a luxury. As the price of chickens dropped and that of red meats rose in the 1950s and 1960s, Paxo's popularity grew.
At Christmas, the product is advertised with the slogan "Christmas wouldn't be Christmas without the Paxo" (a play on the phrase "Christmas wouldn't be Christmas without the turkey").
Paxo was manufactured from the early 1950s in Sharston, Manchester, until 2009 when the factory was closed and production moved to the re-opened Bachelor's factory in Ashford, Kent. |
https://en.wikipedia.org/wiki/Wayne%20Velicer | Wayne Velicer (March 4, 1944 – October 15, 2017) was an American psychologist known for his research in quantitative and health psychology. He taught at the University of Rhode Island from 1973 until his death in 2017. He worked with James O. Prochaska to help to found the University of Rhode Island's Cancer Prevention Research Center, of which he subsequently served as co-director.
Honors and awards
In 2004, Velicer was one of six University of Rhode Island faculty to be named an ISI Highly Cited Researcher. In 2013, he received the Samuel J. Messick Distinguished Scientific Contributions Award from Division 5 of the American Psychological Association. In 2018, he was posthumously inducted into the University of Rhode Island's Lifetime Service Society. |
https://en.wikipedia.org/wiki/Joubert%27s%20theorem | In polynomial algebra and field theory, Joubert's theorem states that if K and F are fields, K is a separable field extension of F of degree 6, and the characteristic of F is not equal to 2, then K is generated over F by some element λ in K such that the minimal polynomial of λ has the form λ^6 + c4·λ^4 + c2·λ^2 + c1·λ + c0 = 0, for some constants c4, c2, c1, c0 in F. The theorem is named in honor of Charles Joubert, a French mathematician, lycée professor, and Jesuit priest.
In 1867 Joubert published his theorem in his paper Sur l'équation du sixième degré in tome 64 of Comptes rendus hebdomadaires des séances de l'Académie des sciences. He seems to have made the assumption that the fields involved in the theorem are subfields of the complex field.
Using arithmetic properties of hypersurfaces, Daniel F. Coray gave, in 1987, a proof of Joubert's theorem (with the assumption that the characteristic is neither 2 nor 3). In 2006 a new proof of Joubert's theorem was given, "based on an enhanced version of Joubert's argument". In 2014 Zinovy Reichstein proved that the condition that the characteristic not equal 2 is necessary in general, but the theorem's conclusion can still be proved in the characteristic-2 case under some additional assumptions on the fields.
https://en.wikipedia.org/wiki/Blunt%20trauma | Blunt trauma, also known as blunt force trauma or non-penetrating trauma, describes a physical trauma due to a forceful impact without penetration of the body's surface. Blunt trauma stands in contrast with penetrating trauma, which occurs when an object pierces the skin, enters body tissue, and creates an open wound. Blunt trauma occurs due to direct physical trauma or impactful force to a body part. Such incidents often occur with road traffic collisions, assaults, sports-related injuries, and are notably common among the elderly who experience falls.
Blunt trauma can lead to a wide range of injuries including contusions, concussions, abrasions, lacerations, internal or external hemorrhages, and bone fractures. The severity of these injuries depends on factors such as the force of the impact, the area of the body affected, and underlying comorbidities of the affected individual. In some cases, blunt force trauma can be life-threatening and may require immediate medical attention. Blunt trauma to the head and/or severe blood loss are the most likely causes of death due to blunt force traumatic injury.
Classification
Blunt abdominal trauma
Blunt abdominal trauma (BAT) represents 75% of all blunt trauma and is the most common example of this injury. 75% of BAT occurs in motor vehicle crashes, in which rapid deceleration may propel the driver into the steering wheel, dashboard, or seatbelt, causing contusions in less serious cases, or rupture of internal organs from briefly increased intraluminal pressure in the more serious, depending on the force applied. Initially, there may be few indications that serious internal abdominal injury has occurred, making assessment more challenging and requiring a high degree of clinical suspicion.
There are two basic physical mechanisms at play with the potential of injury to intra-abdominal organs: compression and deceleration. The former occurs from a direct blow, such as a punch, or compression against a non-yielding object |
https://en.wikipedia.org/wiki/Weather%20Stress%20Index | The Weather Stress Index, or WSI, is a relative measure of weather conditions, often used as a comfort indicator. The index, a number between 0 and 100, represents the percentage of time in the past with temperatures below the current temperature, for a given location, day and time. This makes the index a local measure based on past weather conditions. For example, if for a given location, on the 25th of July at 13:00 UTC, the WSI is 85 for a temperature of 42 °C, this means that the temperature was below 42 °C in 85% of past observations at the same place, on the 25th of July at 13:00 UTC (and above 42 °C 15% of the time at the same place, day and hour). In other words, the WSI gives the probability of finding a lower temperature in the local weather history, at a given day and time, than the present measurement. Therefore, high values of the WSI predict relative discomfort from excessive heat for local inhabitants.
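A minimal sketch of this computation, assuming the historical record is simply a list of past temperatures taken at the same place, day, and hour (the function name and the sample data below are invented for illustration):

```python
from bisect import bisect_left

def weather_stress_index(history, current_temp):
    """Empirical WSI: percentage of past observations (for the same
    location, day and hour) whose temperature was below current_temp."""
    if not history:
        raise ValueError("need at least one historical observation")
    ordered = sorted(history)
    below = bisect_left(ordered, current_temp)  # count of temps strictly below
    return 100.0 * below / len(ordered)

# Hypothetical past temperatures (°C) for one location, all recorded
# on 25 July at 13:00 UTC in different years.
past = [31, 33, 34, 35, 36, 36, 37, 38, 39, 40,
        41, 43, 44, 45, 46, 47, 48, 49, 50, 51]

print(weather_stress_index(past, 42))  # 55.0: 11 of 20 past readings were below 42 °C
```

With a longer record the empirical percentage converges toward the probability described above; a real implementation would also need a convention for ties with past readings equal to the current temperature.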
Using the index
The same WSI at different geographic points might not refer to the same temperature: a WSI of 99.99 for a given location near the North Pole might refer to a temperature that, at lower latitudes, could be rated with a WSI of 50 (an average temperature) for the same day and hour.
See also
Heat index
Meteorology
Atmospheric thermodynamics
Meteorological indices |
https://en.wikipedia.org/wiki/Anja%20Feldmann | Anja Feldmann (born 8 March 1966 in Bielefeld) is a German computer scientist.
Education and career
Feldmann studied computer science at Universität Paderborn and received her degree in 1990.
She continued her studies at Carnegie Mellon University, where she earned her M.Sc. in 1991 and her Ph.D. in 1995.
Following four years of postdoctoral work at AT&T Labs Research, she held research positions at Saarland University and Technical University Munich.
In 2006 she was appointed as professor of Internet Network Architectures for the Telekom Innovation Laboratories at the Technische Universität Berlin. As Professor her research focused on Internet measurement, Teletraffic engineering, traffic characterization and debugging network performance issues. She has also conducted research into intrusion detection and network architecture. She has served on more than 50 committees and was the co-chair of SIGCOMM. Alex Snoeren said that she "was instrumental in the establishment of a rigorous science of Internet measurement. Among her many contributions, she is perhaps best known for her work in traffic characterization and engineering.”
Between 2009 and 2013 Feldmann was Dean of the Computer Science and Electrical Engineering department at the Technische Universität Berlin, Germany. From 2012 until early 2018 Feldmann sat on the employer side of the supervisory board of SAP. In October 2017 she was appointed as a director of the Max Planck Institute for Informatics, where her focus is on research into the Internet's architecture.
Other activities
Karlsruhe Institute of Technology (KIT), Member of the Supervisory Board
Honors and awards
2009: Member of Leopoldina
2011: Gottfried Wilhelm Leibniz Prize
2011: Berlin Science Award |
https://en.wikipedia.org/wiki/Building%20information%20modeling | Building information modeling (BIM) is a process involving the generation and management of digital representations of the physical and functional characteristics of places. BIM is supported by various tools, technologies and contracts. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels.
The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. The development of standards and the adoption of BIM has progressed at different speeds in different countries. Standards developed in the United Kingdom from 2007 onwards have formed the basis of the international standard ISO 19650, launched in January 2019.
History
The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption.
The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the buildi |
https://en.wikipedia.org/wiki/Black%20bean%20paste | Black bean paste, commonly called dòushā () or hēidòushā (), is a sweet bean paste often used as a filling in cakes such as mooncakes or doushabao in many Chinese and Taiwanese cuisines.
Black bean paste is made from pulverized mung beans, combined with potassium chlorate, ferrous sulfate heptahydrate (FeSO4·7H2O) crystal (which in Indonesia is known as tawas hijau, or "green crystal"), or black food colouring.
Black bean paste is similar to the more well-known red bean paste. The recorded history of black bean paste goes as far back as the Ming Dynasty. |
https://en.wikipedia.org/wiki/Sleep%20in%20animals | Sleep in animals refers to a behavioral and physiological state characterized by altered consciousness, reduced responsiveness to external stimuli, and homeostatic regulation observed in various animals. Sleep has been observed in mammals, birds, reptiles, amphibians, and some fish, and, in some form, in insects and even in simpler animals such as nematodes. The internal circadian clock promotes sleep at night for diurnal organisms (such as humans) and in the day for nocturnal organisms (such as rodents). Sleep patterns vary widely among species. It appears to be a requirement for all mammals and most other animals.
Definition
Sleep can follow a physiological or behavioral definition. In the physiological sense, sleep is a state characterized by reversible unconsciousness, special brainwave patterns, sporadic eye movement, loss of muscle tone (possibly with some exceptions; see below regarding the sleep of birds and of aquatic mammals), and a compensatory increase following deprivation of the state, this last known as sleep homeostasis (i.e., the longer a waking state lasts, the greater the intensity and duration of the sleep state thereafter). In the behavioral sense, sleep is characterized by minimal movement, non-responsiveness to external stimuli (i.e. increased sensory threshold), the adoption of a typical posture, and the occupation of a sheltered site, all of which is usually repeated on a 24-hour basis. The physiological definition applies well to birds and mammals, but in other animals whose brains are not as complex, the behavioral definition is more often used. In very simple animals, behavioral definitions of sleep are the only ones possible, and even then the behavioral repertoire of the animal may not be extensive enough to allow distinction between sleep and wakefulness. Sleep is quickly reversible, as opposed to hibernation or coma, and sleep deprivation is followed by longer or deeper rebound sleep.
Necessity
If sleep were not essential, one wou |
https://en.wikipedia.org/wiki/EnTourage%20eDGe | The enTourage eDGe is a dual-panel personal device, combining a tablet computer on one screen and an e-book reader on the other. Since 2011 it has been developed by Pleiades Publishing, Ltd.
The device runs Google's Android OS. At present Foxconn is engaged in mass manufacturing of the enTourage eDGe v2.5. Production volume is growing in line with demand (especially in education), and the device is being adapted to the high requirements of modern tablets.
Features
The enTourage eDGe is a dual-touchscreen device that when open looks like a book. One screen is based on e-Ink technology, and the other is a 10.1" polychromatic LCD. Both screens respond to touch, and the interactive use of a stylus. The LCD color screen is designed for multimedia display (an important advantage of the modern educational process), whereas the e-Ink screen is designed for reading and, in the corresponding mode, for taking notes, as though on paper.
The e-Ink screen used in modern e-readers works on reflected light, so is virtually harmless to the eyes, which makes it suitable for any amount of textual information. This is important for the educational process since it is possible to use electronic devices without violating health standards. Both screens are interconnected. For example, if an e-book is downloaded from the enTourage store, the book is added to the device's library and can be accessed at both the LCD and e-Ink screens. Or if a document is created on the e-Ink screen, it can be reproduced on the LCD screen.
The enTourage eDGe is equipped with a camera above the LCD screen, as well as two USB ports, which can take two flash memory drives, an external keyboard, and other compatible devices. The enTourage eDGe also comes with a stylus, which can be used for writing or interacting with both screens. Both sides of the device may be folded closed like a book, but they can also be fully folded open so that the screens are back-to-back.
Timeline
March 2010, enTourage eDGe v. 2. |
https://en.wikipedia.org/wiki/Wang%20tile | Wang tiles (or Wang dominoes), first proposed by mathematician, logician, and philosopher Hao Wang in 1961, are a class of formal systems. They are modelled visually by square tiles with a color on each side. A set of such tiles is selected, and copies of the tiles are arranged side by side with matching colors, without rotating or reflecting them.
The basic question about a set of Wang tiles is whether it can tile the plane or not, i.e., whether an entire infinite plane can be filled this way. The next question is whether this can be done in a periodic pattern.
Domino problem
In 1961, Wang conjectured that if a finite set of Wang tiles can tile the plane, then there also exists a periodic tiling, which, mathematically, is a tiling that is invariant under translations by vectors in a 2-dimensional lattice. This can be likened to the periodic tiling in a wallpaper pattern, where the overall pattern is a repetition of some smaller pattern. He also observed that this conjecture would imply the existence of an algorithm to decide whether a given finite set of Wang tiles can tile the plane. The idea of constraining adjacent tiles to match each other occurs in the game of dominoes, so Wang tiles are also known as Wang dominoes. The algorithmic problem of determining whether a tile set can tile the plane became known as the domino problem.
According to Wang's student, Robert Berger,
The Domino Problem deals with the class of all domino sets. It consists of deciding, for each domino set, whether or not it is solvable. We say that the Domino Problem is decidable or undecidable according to whether there exists or does not exist an algorithm which, given the specifications of an arbitrary domino set, will decide whether or not the set is solvable.
In other words, the domino problem asks whether there is an effective procedure that correctly settles the problem for all given domino sets.
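One half of the problem is easy to mechanize: a tiling of an n-by-n torus (adjacent edges matching, with wraparound) repeats by translation to tile the entire plane periodically, and searching for one is a finite check. The sketch below is a brute-force illustration (exponential in n², so only viable for tiny instances; the tuple representation of a tile is an assumption of this example, not standard notation):

```python
from itertools import product

# A Wang tile is represented as (top, right, bottom, left) edge colors.
def tiles_torus(tiles, n):
    """Return True if the tile set can tile an n-by-n torus, i.e. every
    right edge matches the left edge of its neighbor and every bottom
    edge matches the top edge below, wrapping around both directions.
    Such a tiling extends periodically to the whole plane."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    for assignment in product(tiles, repeat=n * n):
        grid = dict(zip(cells, assignment))
        ok = all(
            grid[(r, c)][1] == grid[(r, (c + 1) % n)][3] and  # right meets left
            grid[(r, c)][2] == grid[((r + 1) % n, c)][0]      # bottom meets top
            for r, c in cells
        )
        if ok:
            return True
    return False

# A single tile whose opposite edges match tiles the plane with period 1.
print(tiles_torus([("a", "b", "a", "b")], 1))   # True
# A single tile with mismatched opposite edges cannot.
print(tiles_torus([("a", "b", "c", "b")], 1))   # False
```

Enumerating torus sizes n = 1, 2, 3, … semi-decides the periodic case; Berger's undecidability result means no analogous terminating procedure exists for the general domino problem, since some tile sets tile the plane only aperiodically.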
In 1966, Berger solved the domino problem in the negative. He proved that no algorith |
https://en.wikipedia.org/wiki/List%20of%20transponder%20codes | The following list shows specific aeronautical transponder codes, and ranges of codes, that have been used for specific purposes in various countries. Traditionally each country has allocated transponder codes by their own scheme with little commonality across borders. The list is retained for historic interest.
Pilots are normally required to apply the code, allocated by air traffic control, to that specific flight. Occasionally countries may specify generic codes to be used in the absence of an allocated code. Such generic codes are specified in that country's Aeronautical Information Manual or Aeronautical Information Publication. There also are standard transponder codes for defined situations defined by the International Civil Aviation Organization (marked below as ICAO).
Transponder codes shown in this list in the color RED are for emergency use only such as an aircraft hijacking, radio communication failure or another type of emergency. |
https://en.wikipedia.org/wiki/Content%20Scramble%20System | The Content Scramble System (CSS) is a digital rights management (DRM) and encryption system employed on many commercially produced DVD-Video discs. CSS utilizes a proprietary 40-bit stream cipher algorithm. The system was introduced around 1996 and was first compromised in 1999.
CSS is one of several complementary systems designed to restrict DVD-Video access.
It has been superseded by newer DRM schemes such as Content Protection for Recordable Media (CPRM), and by the Advanced Encryption Standard (AES) in the Advanced Access Content System (AACS) DRM scheme used by HD DVD and Blu-ray Disc; these use 56-bit and 128-bit keys, respectively, providing a much higher level of security than CSS's 40-bit key.
Preliminary note
The Content Scramble System (CSS) is a collection of proprietary protection mechanisms for DVD-Video discs. CSS attempts to restrict access to the content to licensed applications only. According to the DVD Copy Control Association (CCA), the consortium that grants licenses, CSS is supposed to protect the intellectual property rights of the content owner.
The details of CSS are only given to licensees for a fee. The license, which binds the licensee to a non-disclosure agreement, would not permit the development of open-source software for DVD-Video playback. Instead, there is libdvdcss, a reverse engineered implementation of CSS. Libdvdcss is a source for documentation, along with the publicly available DVD-ROM and MMC specifications. There has also been some effort to collect CSS details from various sources.
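As an illustration of what a 40-bit LFSR-based stream cipher of this kind looks like: reverse-engineered descriptions of CSS report a 17-bit and a 25-bit linear feedback shift register whose byte-wise outputs are combined by addition, but the tap positions, key splitting, and carry handling in the sketch below are simplified stand-ins, not the actual CSS specification.

```python
def lfsr_stream(seed, taps, nbits):
    """Fibonacci-style LFSR: yields one output bit per clock. 'taps' are
    the bit positions XORed together to form the feedback bit."""
    state = seed & ((1 << nbits) - 1)
    assert state != 0, "an all-zero LFSR state never changes"
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def keystream_bytes(key40, n):
    """Split a 40-bit key between a 17-bit and a 25-bit register, clock
    each one 8 steps per output byte, and combine the two bytes by
    addition mod 256 (carry discarded in this simplified sketch).
    Tap positions here are illustrative, not the real CSS polynomials."""
    a = lfsr_stream(key40 & 0x1FFFF, taps=(0, 14), nbits=17)
    b = lfsr_stream(key40 >> 17, taps=(0, 3, 4, 12), nbits=25)
    out = []
    for _ in range(n):
        x = sum(next(a) << i for i in range(8))
        y = sum(next(b) << i for i in range(8))
        out.append((x + y) & 0xFF)
    return bytes(out)

print(keystream_bytes(0x123456789, 4).hex())
```

The short register lengths are the crux of CSS's weakness: with only 2^40 possible keys, and far less effective strength once the LFSR structure was known, brute-force and algebraic attacks became practical on 1999-era hardware.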
A DVD-Video can be produced with or without CSS. The publisher may for instance decide to go without CSS protection to save license and production costs.
Introduction
The content scramble system deals with three participants: the disc, the drive and the player. The disc holds the purported copyright information and the encrypted feature. The drive provides the means to read the disc. The player decrypts |
https://en.wikipedia.org/wiki/Jon%20Freeman%20%28game%20designer%29 | Jon Freeman is a game designer and co-founder of software developer Automated Simulations, which was later renamed to Epyx and became a major company during the 8-bit era of home computing. He is married to game programmer Anne Westfall, and they work together as Free Fall Associates. Free Fall is best known for Archon: The Light and the Dark, one of the earliest titles from Electronic Arts.
Career
Automated Simulations and Epyx
Freeman worked as a game designer for video game developer and publisher, Epyx, which he co-founded with Jim Connelley in 1978 as Automated Simulations.
Their first game, Starfleet Orion, was a two-player only game developed mainly so Connelley could write off the cost of his Commodore PET computer. Freeman provided design while Connelley handled the programming in BASIC. Freeman was amazed when they actually had a finished product and they had to create a company to publish it. So, both he and Connelley fell into the computer game industry by accident.
It was while with this company, still known as Automated Simulations in 1980, that Freeman met his future wife, Anne Westfall, at a computer fair.
Starfleet Orion was quickly followed by Invasion Orion. What followed was a slew of very successful titles for various platforms. Freeman designed or co-designed a number of Epyx games, such as Crush, Crumble and Chomp! and Rescue at Rigel. Freeman tired of what he called "office politics" and yearned to get away from the now much larger company.
The Complete Book of Wargames
In 1980, Freeman, in collaboration with the editors of Consumer Guide, wrote The Complete Book of Wargames, which was published by Simon & Schuster under their "Fireside" imprint. In the book, Freeman explained the history of wargames to that point, the notable companies, and the usual components; evaluated most of the major wargames in print at the time; and discussed the role that computer games would play in the field.
Free Fall Associates
In 1981, Freeman and A |
https://en.wikipedia.org/wiki/Boulder%20Dash%20%28video%20game%29 | Boulder Dash is a 2D maze-puzzle video game released in 1984 by First Star Software for Atari 8-bit computers. It was created by Canadian developers Peter Liepa and Chris Gray. The player controls Rockford, who collects treasures while evading hazards.
Boulder Dash was ported to many 8-bit and 16-bit systems and turned into a coin-operated arcade game. It was followed by multiple sequels and re-releases and influenced games such as Repton and direct clones such as Emerald Mine.
As of January 2018, BBG Entertainment GmbH owns the intellectual property rights to Boulder Dash.
Gameplay
Boulder Dash takes place in a series of caves, each of which is laid out as a rectangular grid of blocks. The player guides the player character, Rockford, with a joystick or cursor keys. In each cave, Rockford has to collect as many diamonds as are needed while avoiding dangers such as falling rocks. When enough diamonds have been collected, the exit door opens, and going through this exit door completes the cave.
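The cave-and-diamond loop described above can be sketched as a toy grid model (illustrative only; the layout and names are invented, not taken from the original game):

```python
# Illustrative cave grid: '#' wall, '.' dirt, 'd' diamond, 'R' Rockford.
CAVE = [
    "#######",
    "#R.d..#",
    "#..d..#",
    "#...d.#",
    "#######",
]
DIAMONDS_NEEDED = 3

def collect(cave, needed):
    """Count the diamonds in the cave and report whether collecting them
    all would open the exit door."""
    collected = sum(row.count("d") for row in cave)
    return collected, collected >= needed

collected, exit_open = collect(CAVE, DIAMONDS_NEEDED)
print(collected, exit_open)  # 3 diamonds -> exit opens
```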
Development
As an aspiring game developer, Peter Liepa reached out to a local publisher called Inhome Software. They put him in touch with a young man—Chris Gray—who had submitted a game programmed in BASIC that was not commercial quality, but had potential. The project began with the intention of converting this game to machine language and releasing it through Inhome, but according to Liepa, the game was very primitive. He decided to expand the concept and add more interesting dynamics, and he wrote the new version in Forth in about six months. When it became clear that the game was worth releasing, Liepa rewrote Boulder Dash in 6502 assembly language.
Dissatisfied with the lack of contact from Inhome Software, Liepa searched for a new publisher. He settled on First Star Software, which, according to him, was very happy to publish the game.
Ports
The game was licensed by Exidy for use with their Max-A-Flex arcade cabinet. Released in 1984, it allows buying 30 seconds of ga |
https://en.wikipedia.org/wiki/Leonidas%20Alaoglu | Leonidas (Leon) Alaoglu (; March 19, 1914 – August 1981) was a mathematician, known for his result, called Alaoglu's theorem on the weak-star compactness of the closed unit ball in the dual of a normed space, also known as the Banach–Alaoglu theorem.
Life and work
Alaoglu was born in Red Deer, Alberta to Greek parents. He received his BS in 1936, Master's in 1937, and PhD in 1938 (at the age of 24), all from the University of Chicago. His thesis, written under the direction of Lawrence M. Graves, was entitled Weak topologies of normed linear spaces. His doctoral thesis is the source of Alaoglu's theorem. The Bourbaki–Alaoglu theorem is a generalization of this result by Bourbaki to dual topologies.
After some years teaching at Pennsylvania State College, Harvard University and Purdue University, in 1944 he became an operations analyst for the United States Air Force. In his last position, from 1953 to 1981 he worked as a senior scientist in operations research at the Lockheed Corporation in Burbank, California. In this latter period he wrote numerous research reports, some of them classified.
During the Lockheed years he took an active part in seminars and other mathematical activities at Caltech, UCLA and USC. After his death in 1981 a Leonidas Alaoglu Memorial Lecture Series was established at Caltech. Speakers have included Paul Erdős, Irving Kaplansky, Paul Halmos and Hugh Woodin.
See also
Axiom of Choice – The Banach–Alaoglu theorem is not provable from ZF without use of the Axiom of Choice.
Banach–Alaoglu theorem
Gelfand representation
List of functional analysis topics
Superabundant number – Article explains the 1944 results of Alaoglu and Erdős on this topic
Tychonoff's theorem
Weak topology – Leads to the weak-star topology to which the Banach–Alaoglu theorem applies.
Publications
Alaoglu, Leonidas (M.S. thesis, U. of Chicago, 1937). "The asymptotic Waring problem for fifth and sixth powers" (24 pages). Advisor: Leonard Eugene Dickson
Alao |
https://en.wikipedia.org/wiki/Atkinson%27s%20theorem | In operator theory, Atkinson's theorem (named for Frederick Valentine Atkinson) gives a characterization of Fredholm operators.
The theorem
Let H be a Hilbert space and L(H) the set of bounded operators on H. The following is the classical definition of a Fredholm operator: an operator T ∈ L(H) is said to be a Fredholm operator if the kernel Ker(T) is finite-dimensional, Ker(T*) is finite-dimensional (where T* denotes the adjoint of T), and the range Ran(T) is closed.
Atkinson's theorem states:
An operator T ∈ L(H) is a Fredholm operator if and only if T is invertible modulo compact perturbation, i.e. TS = I + C1 and ST = I + C2 for some bounded operator S and compact operators C1 and C2.
In other words, an operator T ∈ L(H) is Fredholm, in the classical sense, if and only if its projection in the Calkin algebra is invertible.
Sketch of proof
The outline of a proof is as follows. For the ⇒ implication, express H as the orthogonal direct sum H = Ker(T)⊥ ⊕ Ker(T) (and, on the codomain side, H = Ran(T) ⊕ Ker(T*)).
The restriction T : Ker(T)⊥ → Ran(T) is a bijection, and therefore invertible by the open mapping theorem. Extend this inverse by 0 on Ran(T)⊥ = Ker(T*) to an operator S defined on all of H. Then I − TS is the finite-rank projection onto Ker(T*), and I − ST is the projection onto Ker(T). This proves the only if part of the theorem.
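In the notation of this sketch, the operator S and the resulting projections can be written explicitly (our restatement, with P_M denoting the orthogonal projection onto a closed subspace M):

```latex
S = \bigl(T|_{\operatorname{Ker}(T)^{\perp}}\bigr)^{-1} P_{\operatorname{Ran}(T)},
\qquad
I - TS = P_{\operatorname{Ker}(T^{*})},
\qquad
I - ST = P_{\operatorname{Ker}(T)} .
```

Since Ker(T) is finite-dimensional, I − ST is compact (indeed finite-rank), and likewise for I − TS.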
For the converse, suppose now that ST = I + C2 for some compact operator C2. If x ∈ Ker(T), then STx = x + C2x = 0. So Ker(T) is contained in an eigenspace of C2, which is finite-dimensional (see spectral theory of compact operators). Therefore, Ker(T) is also finite-dimensional. The same argument shows that Ker(T*) is also finite-dimensional.
To prove that Ran(T) is closed, we make use of the approximation property: let F be a finite-rank operator such that ||F − C2|| < r. Then for every x in Ker(F),
||S||⋅||Tx|| ≥ ||STx|| = ||x + C2x|| = ||x + Fx + C2x − Fx|| ≥ ||x|| − ||C2 − F||⋅||x|| ≥ (1 − r)||x||.
Thus T is bounded below on Ker(F), which implies that T(Ker(F)) is closed. On the other hand, T(Ker |
https://en.wikipedia.org/wiki/Ordnett | Ordnett is a Norwegian online dictionary service, published and maintained by Kunnskapsforlaget, a privately held publishing house. Ordnett offers access to 50 dictionaries, covering 11 languages. This makes it the largest, commercially available dictionary database in Norway. Thirteen of the dictionaries are either oneway or twoway English (with Norwegian being the opposite language), including 3 publications from Oxford University Press.
Ordnett is available through an ordinary web browser. It is a subscription-based service, offering annual or monthly subscriptions.
Dictionaries
Norwegian Dictionary
Dictionary of Foreign Words
Norwegian Dictionary of Synonyms
Orthographic Dictionary of Nynorsk
Orthographic Dictionary of Bokmål
Norwegian Dictionary of Antonyms and Synonyms
Bokmål-Nynorsk Dictionary
Dictionary of Riksmål
Medical Dictionary
Encyclopedia of Law
Norwegian-English Dictionary
English-Norwegian Dictionary
Norwegian-English Comprehensive Dictionary
English-Norwegian Comprehensive Dictionary
Norwegian-English Technical Dictionary
English-Norwegian Technical Dictionary
Norwegian-English Dictionary of Economics
English-Norwegian Dictionary of Economics
Norwegian-English Medical Dictionary
English-Norwegian Medical Dictionary
The Oxford Dictionary of English
The Oxford Sentence Dictionary
The Oxford Thesaurus of English
The Oxford Dictionary of Quotations
The Oxford Concise Medical Dictionary
The Oxford Dictionary of Economics
The Oxford Dictionary of Finance and Banking
The Oxford Dictionary of Business and Management
The Oxford Dictionary of Science
Norwegian-Swedish Dictionary
Swedish-Norwegian Dictionary
Norwegian-French Dictionary
French-Norwegian Dictionary
Monolingual French Dictionary Le Robert
Norwegian-Spanish Dictionary
Spanish-Norwegian Dictionary
Monolingual Spanish Dictionary Larousse
Norwegian-German Dictionary
German-Norwegian Dictionary
Norwegian-German Technical Dictionary
German-Norwe |
https://en.wikipedia.org/wiki/Proboscis%20%28anomaly%29 | In teratology, a proboscis is a blind-ended, tube-like structure, commonly located in the middle of the face. It is commonly seen in severe forms of holoprosencephaly that include cyclopia and is usually the result of abnormal development of the nose.
Types
Proboscis formation is classified in four general types: holoprosencephalic proboscis, lateral nasal proboscis, supernumerary proboscis, and disruptive proboscis.
Holoprosencephalic proboscis
A holoprosencephalic proboscis is found in holoprosencephaly (a condition in which the forebrain of the embryo fails to develop into two hemispheres as it should). In cyclopia or ethmocephaly, the proboscis is an abnormally formed nose. In cyclopia, a single eye in the middle of the face is associated with arrhinia (absence of the nose) and usually with proboscis formation above the eye. In ethmocephaly, two separate hypoteloric eyes (eyes placed very close together) are associated with arrhinia and proboscis formation above the eyes. In cebocephaly, no proboscis formation occurs, but a single-nostril nose is present.
Lateral proboscis
A lateral proboscis, also known as proboscis lateralis or lateral nasal proboscis, is a tubular proboscis-like structure and represents incomplete formation of one side of the nose; it is found instead of a nostril. The olfactory bulb is usually rudimentary on the side involved in the malformation. The tear duct, nasal bone, nasal cavity, vomer (the small thin bone separating the left and right nasal passages), maxillary sinus, ethmoidal sinuses, and another nasal structure known as the cribriform plate cells are often missing on this side as well. Ocular hypertelorism (eyes set far apart) may be present. The proboscis lateralis is a rare nasal anomaly.
Supernumerary proboscis
A supernumerary proboscis, or accessory proboscis, is found when both nostrils are formed and there is a proboscis in addition to them. An accessory proboscis arises from a supernumerary olfactory placode.
Disruptive probos |
https://en.wikipedia.org/wiki/Apicius%20%282nd%20century%20AD%29 | According to the Deipnosophistae of Athenaeus, Apicius is the name of a cook who found a way of packing fresh oysters to send to the emperor Trajan while he was on campaign in Mesopotamia around 115 AD. The information comes by way of the Epitome or summary of the Deipnosophists, since the full text of this part of Athenaeus's work does not survive. If the information is correct, this is the third known Roman food specialist who was named Apicius, the earliest being the luxury-loving Apicius of the 1st century BC.
The late Roman cookbook Apicius gives a recipe for preserving oysters, among other foods. This is possibly the only detail in which the cookbook has a relationship with historical information about any of the people named Apicius.
Notes
Sources
Epitome of Athenaeus 1.7d
Apicius 1.12
https://en.wikipedia.org/wiki/Pignistic%20probability | In decision theory, a pignistic probability is a probability that a rational person will assign to an option when required to make a decision.
A person may have, at one level, certain beliefs, a lack of knowledge, or uncertainty about the options and their actual likelihoods. However, when it is necessary to make a decision (such as deciding whether to place a bet), the behaviour of a rational person suggests that the person has assigned a set of regular probabilities to the options. These are the pignistic probabilities.
The term was coined by Philippe Smets, and stems from the Latin pignus, a bet. He contrasts the pignistic level, where one might take action, with the credal level, where one interprets the state of the world:
The transferable belief model is based on the assumption that beliefs manifest themselves at two mental levels: the ‘credal’ level where beliefs are entertained and the ‘pignistic’ level where beliefs are used to make decisions (from ‘credo’ I believe and ‘pignus’ a bet, both in Latin). Usually these two levels are not distinguished and probability functions are used to quantify beliefs at both levels. The justification for the use of probability functions is usually linked to “rational” behavior to be held by an ideal agent involved in some decision contexts.
A pignistic probability transform calculates these pignistic probabilities from an underlying belief structure.
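As an illustration, here is a minimal sketch of the pignistic transform for a normalized mass function (assuming no mass on the empty set; the function name and the example data are invented for this sketch):

```python
def pignistic_transform(mass):
    """Split each subset's belief mass equally among its elements (BetP)."""
    betp = {}
    for subset, m in mass.items():
        for element in subset:
            betp[element] = betp.get(element, 0.0) + m / len(subset)
    return betp

# Half the belief committed to "heads", half left undecided between both:
mass = {frozenset({"heads"}): 0.5, frozenset({"heads", "tails"}): 0.5}
print(pignistic_transform(mass))  # {'heads': 0.75, 'tails': 0.25}
```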
Notes
Further reading
P. Smets and R. Kennes, “The Transferable Belief Model", Artificial Intelligence (v.66, 1994) pp. 191–243
https://en.wikipedia.org/wiki/RAFOS%20float | RAFOS floats are submersible devices used to map ocean currents well below the surface. They drift with these deep currents and listen for acoustic "pongs" emitted at designated times from multiple moored sound sources. By analyzing the time required for each pong to reach a float, researchers can pinpoint its position by triangulation. The floats are able to detect the pongs at ranges of hundreds of kilometers because they generally target a range of depths known as the SOFAR (Sound Fixing And Ranging) channel, which acts as a waveguide for sound. The name "RAFOS" derives from the earlier SOFAR floats, which emitted sounds that moored receivers picked up, allowing real-time underwater tracking. When the transmit and receive roles were reversed, so was the name: RAFOS is SOFAR spelled backward. Listening for sound requires far less energy than transmitting it, so RAFOS floats are cheaper and longer lasting than their predecessors, but they do not provide information in real-time: instead they store it on board, and upon completing their mission, drop a weight, rise to the surface, and transmit the data to shore by satellite.
Introduction
Of the importance of measuring ocean currents
The underwater world is still mostly unknown, mainly because of the difficulty of gathering information in situ, of experimenting, and even of reaching certain places. The ocean is nonetheless of crucial importance to scientists, as it covers about 71% of the planet.
Knowledge of ocean currents is of crucial importance. In scientific contexts such as the study of global warming, ocean currents are found to greatly affect the Earth's climate, since they are the main heat-transfer mechanism: they carry heat between hot and cold regions and, in a larger sense, drive almost every understood circulation. These currents also affect marine debris, and vice versa.
Economically, a better understanding can help reduce the costs of shipping, since the curr
https://en.wikipedia.org/wiki/Towed%20array%20sonar | A towed array sonar is a system of hydrophones towed behind a submarine or a surface ship on a cable. Trailing the hydrophones behind the vessel, on a cable that can be kilometers long, keeps the array's sensors away from the ship's own noise sources, greatly improving its signal-to-noise ratio, and hence the effectiveness of detecting and tracking faint contacts, such as quiet, low noise-emitting submarine threats, or seismic signals.
A towed array offers superior resolution and range compared with hull-mounted sonar. It also covers the baffles, the blind spot of hull-mounted sonar. However, effective use of the system limits a vessel's speed and care must be taken to protect the cable from damage.
History
During World War I, a towed sonar array known as the "Electric Eel" was developed by Harvey Hayes, a U.S. Navy physicist. This system is believed to be the first towed sonar array design. It employed two cables, each with a dozen hydrophones attached. The project was discontinued after the war.
The U.S. Navy resumed development of towed array technology during the 1960s in response to the development of nuclear-powered submarines by the Soviet Union.
Current use of towed arrays
On surface ships, towed array cables are normally stored in drums, then spooled out behind the vessel when in use. U.S. Navy submarines typically store towed arrays inside an outboard tube, mounted along the vessel's hull, with an opening on the starboard tail. There is also equipment located in a ballast tank (free flood area) while the cabinet used to operate the system is inside the submarine.
Hydrophones in a towed array system are placed at specific distances along the cable, with the end elements far enough apart to give a basic ability to triangulate on a sound source. Similarly, various elements are angled up or down, giving an ability to estimate a target's vertical depth. Alternatively, three or more arrays are used to aid in depth detection.
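The two-element time-delay geometry underlying such triangulation can be sketched as follows (a simplified far-field, single-path model; the numbers are hypothetical):

```python
import math

SOUND_SPEED = 1500.0  # m/s, a typical value for seawater

def bearing_from_delay(delay_s, spacing_m, c=SOUND_SPEED):
    """Angle of arrival (degrees from broadside) of a plane wave, estimated
    from the time-difference-of-arrival between two hydrophones that are
    spacing_m apart along the array."""
    s = max(-1.0, min(1.0, c * delay_s / spacing_m))  # clamp for noisy data
    return math.degrees(math.asin(s))

# A wavefront arriving 1/300 s earlier at one element of a 10 m pair:
print(bearing_from_delay(1 / 300, 10.0))  # ~30 degrees off broadside
```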
On the first few hun |
https://en.wikipedia.org/wiki/Mathematics%20and%20God | Connections between mathematics and God include the use of mathematics in arguments about the existence of God and about whether belief in God is beneficial.
Mathematical arguments for God's existence
In the 1070s, Anselm of Canterbury, an Italian medieval philosopher and theologian, created an ontological argument which sought to use logic to prove the existence of God. A more elaborate version was given by Gottfried Leibniz in the early eighteenth century. Kurt Gödel created a formalization of Leibniz' version, known as Gödel's ontological proof.
A more recent argument was made by Stephen D. Unwin in 2003, who suggested the use of Bayesian probability to estimate the probability of God's existence.
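A Bayesian estimate of this kind boils down to repeated applications of Bayes' rule. The sketch below (with invented numbers, not Unwin's) uses the odds form, multiplying a prior by a likelihood ratio for each piece of evidence:

```python
def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability via the odds form of Bayes' rule."""
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.5                        # a "maximally uncertain" prior
for lr in [2.0, 0.5, 1.0]:     # hypothetical evidence factors
    p = bayes_update(p, lr)
print(p)  # back to 0.5: these illustrative factors cancel out
```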
Mathematical arguments for belief
A common application of decision theory to the belief in God is Pascal's wager, published in Blaise Pascal's posthumous work Pensées (1670). The application was a defense of Christianity stating that "If God does not exist, the Atheist loses little by believing in him and gains little by not believing. If God does exist, the Atheist gains eternal life by believing and loses an infinite good by not believing". The atheist's wager has been proposed as a counterargument to Pascal's Wager.
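In decision-theoretic terms, the wager compares expected utilities; a minimal sketch follows (the payoff numbers are illustrative assumptions, with eternal life modeled as an infinite utility):

```python
def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

p = 1e-9  # any nonzero probability assigned to God's existence
u_believe = expected_utility([(p, float("inf")), (1 - p, -1.0)])
u_not = expected_utility([(p, float("-inf")), (1 - p, 1.0)])
print(u_believe > u_not)  # True: belief dominates for any p > 0
```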
See also
Existence of God
Further reading
Cohen, Daniel J., Equations from God: Pure Mathematics and Victorian Faith, Johns Hopkins University Press, 2007.
Livio, Mario, Is God a Mathematician?, Simon & Schuster, 2011.
Ransford, H. Chris, God and the Mathematics of Infinity: What Irreducible Mathematics Says about Godhood, Columbia University Press, 2017.
https://en.wikipedia.org/wiki/Neural%20circuit | A neural circuit (also known as a biological neural network BNNs) is a population of neurons interconnected by synapses to carry out a specific function when activated. Multiple neural circuits interconnect with one another to form large scale brain networks.
Neural circuits have inspired the design of artificial neural networks, though there are significant differences.
Early study
Early treatments of neural networks can be found in Herbert Spencer's Principles of Psychology, 3rd edition (1872), Theodor Meynert's Psychiatry (1884), William James' Principles of Psychology (1890), and Sigmund Freud's Project for a Scientific Psychology (composed 1895). The first rule of neuronal learning was described by Hebb in 1949, in the Hebbian theory: Hebbian pairing of pre-synaptic and post-synaptic activity can substantially alter the dynamic characteristics of the synaptic connection and therefore either facilitate or inhibit signal transmission. In 1943, Warren Sturgis McCulloch and Walter Pitts published one of the first works on the processing of neural networks. They showed theoretically that networks of artificial neurons could implement logical, arithmetic, and symbolic functions. Simplified models of biological neurons were set up, now usually called perceptrons or artificial neurons. These simple models accounted for neural summation (i.e., potentials at the post-synaptic membrane will summate in the cell body). Later models also provided for excitatory and inhibitory synaptic transmission.
Connections between neurons
The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks. The basic kinds of connections between neurons are synapses: both chemical and electrical synapses.
The establishment of synapses enables the connection of neurons into millions of overlapping, and interlinking neural circuits. Presynaptic protei |
https://en.wikipedia.org/wiki/Discrete%20q-Hermite%20polynomials | In mathematics, the discrete q-Hermite polynomials are two closely related families hn(x;q) and ĥn(x;q) of basic hypergeometric orthogonal polynomials in the basic Askey scheme, introduced by . give a detailed list of their properties. hn(x;q) is also called discrete q-Hermite I polynomials and ĥn(x;q) is also called discrete q-Hermite II polynomials.
Definition
The discrete q-Hermite polynomials are given in terms of basic hypergeometric functions and the Al-Salam–Carlitz polynomials by
and are related by |
https://en.wikipedia.org/wiki/Bioche%27s%20rules | Bioche's rules, formulated by the French mathematician (1859–1949), are rules to aid in the computation of certain indefinite integrals in which the integrand contains sines and cosines.
In the following, f(t) is a rational expression in sin(t) and cos(t). In order to calculate ∫ f(t) dt, consider the integrand ω(t) = f(t) dt. We consider the behavior of this entire integrand, including the dt, under translations and reflections of the t axis. The translations and reflections are ones that correspond to the symmetries and periodicities of the basic trigonometric functions.
Bioche's rules state that:
If ω(−t) = ω(t), a good change of variables is u = cos(t).
If ω(π − t) = ω(t), a good change of variables is u = sin(t).
If ω(π + t) = ω(t), a good change of variables is u = tan(t).
If two of the preceding relations both hold, a good change of variables is u = cos(2t).
In all other cases, use u = tan(t/2).
Because rules 1 and 2 involve flipping the t axis, they flip the sign of dt, and therefore the behavior of ω under these transformations differs from that of ƒ by a sign. Although the rules could be stated in terms of ƒ, stating them in terms of ω has a mnemonic advantage, which is that we choose the change of variables u(t) that has the same symmetry as ω.
These rules can, in fact, be stated as a theorem: one shows that, if the rule applies and if f is actually a rational expression in sin(t) and cos(t), the proposed change of variable reduces the problem to the integration of a rational function in a new variable, which can be calculated by partial fraction decomposition.
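As a worked illustration of the theorem (our own example, checked with SymPy): for ∫ dt/sin(t), ω(−t) = ω(t), so the first rule suggests u = cos(t), which reduces the integral to −∫ du/(1 − u²):

```python
import sympy as sp

t = sp.symbols('t')
f = 1 / sp.sin(t)  # integrand, a rational expression in sin(t) and cos(t)

# Rule 1 check: omega(-t) = f(-t) * d(-t) = -f(-t) dt equals omega(t)
assert sp.simplify(-f.subs(t, -t) - f) == 0

# So substitute u = cos(t): dt = -du/sin(t) and sin(t)**2 = 1 - u**2 give
# -Integral(1/(1 - u**2), u) = (1/2) * log((1 - u)/(1 + u)) + C
u = sp.cos(t)
F = sp.Rational(1, 2) * sp.log((1 - u) / (1 + u))
assert (sp.diff(F, t) - f).equals(0)  # F is an antiderivative of 1/sin(t)
```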
Case of polynomials
To calculate the integral ∫ sin^p(t) cos^q(t) dt, Bioche's rules apply as well.
If p and q are odd, one uses u = cos(2t);
If p is odd and q is even, one uses u = cos(t);
If p is even and q is odd, one uses u = sin(t);
Otherwise, one is reduced to linearization.
Another version for hyperbolic functions
Suppose one is calculating ∫ f(cosh(t), sinh(t)) dt.
If Bioche's rules suggest calculating ∫ f(cos(t), sin(t)) dt by u = cos(t) (respectively, u = sin(t)), then in the case of hyperbolic sine and cosine, a good change of variable is u = cosh(t) (respectively, u = sinh(t)). In every case, the change of variable u = e^t allows one to reduce to a rational function, this last change of variable being mos
https://en.wikipedia.org/wiki/WCMN-LD | WCMN-LD (channel 13) is a low-power television station licensed to both St. Cloud and Sartell, Minnesota, United States, which primarily broadcasts religious programming. Owned by StarCom, LLC, the station maintains a transmitter on Julep Road (off State Highway 23) in Waite Park, Minnesota.
History
Channel 13 began as K13VS in 1992; it was affiliated with the Main Street TV network and was the first new TV venture in St. Cloud since KXLI channel 41 started in 1982. While it also aired several local shows, it was hindered by a lack of visibility on cable systems. By 1994, the station was purchasing time on a local cable channel to make its weekday evening shows, including a local newscast, available to cable homes.
StarCom sold three radio stations to Regent Broadcasting in 2000 so it could purchase and develop channel 13, which had become WCMN-LP in 1996. It returned to the air on August 20, 2001, airing All News Channel with local inserts. When ANC folded in 2002, the station switched to America One and then The Sportsman Channel.
On January 4, 2022, the station filed a license to cover application for digital facilities, stating that it is broadcasting in the ATSC 3.0 format, making it the first such station in Minnesota. It had operated in analog on VHF channel 13 until the FCC-mandated shutdown of analog LPTV stations on July 13, 2021, and did not construct an ATSC 1.0 facility. The station was licensed for digital operation effective September 21, 2022, changing its call sign to WCMN-LD. |
https://en.wikipedia.org/wiki/Baudhayana%20sutras | The (Sanskrit: बौधायन) are a group of Vedic Sanskrit texts which cover dharma, daily ritual, mathematics and is one of the oldest Dharma-related texts of Hinduism that have survived into the modern age from the 1st-millennium BCE. They belong to the Taittiriya branch of the Krishna Yajurveda school and are among the earliest texts of the genre.
The Baudhayana sūtras consist of six texts:
the , probably in 19 (questions),
the in 20 (chapters),
the in 4 ,
the Grihyasutra in 4 ,
the in 4 and
the in 3 .
The Śulbasūtra is noted for containing several early mathematical results, including an approximation of the square root of 2 and a statement of the Pythagorean theorem.
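The square-root approximation attributed to the Śulbasūtra is commonly stated as √2 ≈ 1 + 1/3 + 1/(3·4) − 1/(3·4·34) = 577/408; a quick numeric check of its accuracy:

```python
import math

# sqrt(2) ≈ 1 + 1/3 + 1/(3*4) - 1/(3*4*34) = 577/408
approx = 1 + 1/3 + 1/(3 * 4) - 1/(3 * 4 * 34)
error = abs(approx - math.sqrt(2))
print(approx, error)  # accurate to about two parts in a million
```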
Baudhāyana Shrautasūtra
His Śrauta sūtras related to performing Vedic sacrifices have followers in some Smārta brāhmaṇas (Iyers) and some Iyengars of Tamil Nadu, Yajurvedis or Namboothiris of Kerala, Gurukkal Brahmins (Aadi Saivas) and Kongu Vellalars. The followers of this sūtra follow a different method and do 24 Tila-tarpaṇa, as Lord Krishna had done tarpaṇa on the day before amāvāsyā; they call themselves Baudhāyana Amavasya.
Baudhāyana Dharmasūtra
The Dharmasūtra of Baudhāyana like that of Apastamba also forms a part of the larger Kalpasutra. Likewise, it is composed of praśnas which literally means 'questions' or books. The structure of this Dharmasūtra is not very clear because it came down in an incomplete manner. Moreover, the text has undergone alterations in the form of additions and explanations over a period of time. The praśnas consist of the Srautasutra and other ritual treatises, the Sulvasutra which deals with vedic geometry, and the Grhyasutra which deals with domestic rituals.
There are no commentaries on this Dharmasūtra with the exception of Govindasvāmin's Vivaraṇa. The date of the commentary is uncertain but according to Olivelle it is not very ancient. Also the commentary is inferior in comparison to that of Haradatta on Āpastamba and Gautama.
This Dharmasūtr |
https://en.wikipedia.org/wiki/Longwood%20Medical%20and%20Academic%20Area | The Longwood Medical and Academic Area (also known as Longwood Medical Area, LMA, or simply Longwood) is a medical campus in Boston, Massachusetts. Flanking Longwood Avenue, LMA is adjacent to the Fenway–Kenmore, Audubon Circle, and Mission Hill neighborhoods, as well as the town of Brookline.
It is most strongly associated with Harvard Medical School, the Harvard T.H. Chan School of Public Health, the Harvard School of Dental Medicine, and other medical facilities such as Harvard's teaching hospitals, but prominent non-Harvard institutions are located there as well. Long known as a global center of research, the institutions in the Longwood Medical Area secured over $1.2 billion in NIH funding alone in FY 2018, which exceeds the funding received by 44 states.
Hospitals and research institutions
Beth Israel Deaconess Medical Center
Boston Children's Hospital
Brigham and Women's Hospital
Dana–Farber Cancer Institute
Joslin Diabetes Center
Massachusetts Mental Health Center
New England Baptist Hospital
Wyss Institute for Biologically Inspired Engineering
Schools and colleges
Boston Latin School
Emmanuel College
Harvard Medical School
Harvard School of Dental Medicine
Harvard T.H. Chan School of Public Health
Massachusetts College of Art and Design
Massachusetts College of Pharmacy and Health Sciences
Simmons University
Wentworth Institute of Technology
Boston University Wheelock College of Education & Human Development
Winsor School
Transportation
LMA is served by two subway stations at opposite ends of Longwood Avenue:
"Longwood" (on the MBTA Green Line's "D" branch) and
"Longwood Medical Area" (on the "E" branch).
Several public bus routes serve the area and commuter rail service is available at nearby Ruggles Station. MASCO offers shuttle buses (generally for affiliated personnel only) around the Longwood Medical Area and between Harvard's Cambridge Campus and the Medical Campus (M2). The M2 shuttle is free for passengers holding a Harvard ID.
Energ |
https://en.wikipedia.org/wiki/Entoloma%20abortivum | Entoloma abortivum, commonly known as the aborted entoloma or shrimp of the woods, is an edible mushroom in the Entolomataceae family of fungi. Caution should be used in identifying the species before eating (similar species such as Entoloma sinuatum being poisonous). First named Clitopilus abortivus by Miles Joseph Berkeley and Moses Ashley Curtis, it was given its current name by the Dutch mycologist Marinus Anton Donk in 1949.
It was believed that the honey mushroom, Armillaria mellea, was parasitizing the entoloma, but research has indicated that the inverse may be true: the entoloma may be parasitizing the honey mushroom.
There is still some disagreement among mushroom collectors about this, since it is common to see both the aborted and unaborted forms of the entoloma on wood and in leaf litter, whereas Armillaria generally fruits only on wood. Both forms of the entoloma have also been observed when no Armillaria are fruiting.
See also
List of Entoloma species |
https://en.wikipedia.org/wiki/Uncinate%20process%20of%20pancreas | The uncinate process is a small part of the pancreas. The uncinate process is the formed prolongation of the angle of junction of the lower and left lateral borders in the head of the pancreas. The word "uncinate" comes from the Latin "uncinatus", meaning "hooked".
Structure
Development
The pancreas arises as two separate bodies, the dorsal pancreas and the ventral pancreas. The dorsal pancreas appears first, at around day 26, opposite the developing hepatic duct, and grows into the dorsal mesentery. The ventral pancreas develops at the junction of the hepatic duct and the rest of the foregut.
During development, differential growth of the wall of the stomach causes it to rotate to the left, and the liver and stomach undergo considerable growth. This makes the two parts of the pancreas rotate around the duodenum. They then fuse; the dorsal pancreatic bud becomes the body, tail, and isthmus of the pancreas. The isthmus (also called the central pancreas) is the region of the gland that runs anterior to the superior mesenteric artery; by convention, it divides the right and left sides of the pancreas.
The ventral pancreatic bud forms the pancreatic head and uncinate process. The glands continue to develop but the duct systems anastomose. The main pancreatic duct is formed by the fusion of the dorsal and ventral pancreas.
The embryology also explains the strange zig-zag course of the main pancreatic duct and the occasional appearance of an accessory pancreatic duct.
The uncinate process, unlike the remainder of the organ, passes posteriorly to the superior mesenteric vein (it can pass posteriorly to the superior mesenteric artery, but this is less common).
Clinical significance
Sometimes the pancreas fails to develop normally and there may be congenital defects associated with the uncinate process. The uncinate process may split and encircle the duodenum, which is known as an annular pancreas. There is also a common condition called pancreas divisum where the dorsal |
https://en.wikipedia.org/wiki/Jeanne%20N.%20Clelland | Jeanne A. Nielsen Clelland (born 1970) is an American mathematician specializing in differential geometry and its applications to differential equations. She is a professor of mathematics at the University of Colorado Boulder, and the author of a textbook on moving frames, From Frenet to Cartan: The Method of Moving Frames (Graduate Studies in Mathematics 178, American Mathematical Society, 2017).
Education
Clelland graduated from Duke University in 1991, and stayed at Duke for her graduate studies, completing her doctorate there in 1996. Her dissertation, Geometry of Conservation Laws for a Class of Parabolic Partial Differential Equations, was supervised by Robert Bryant.
Recognition
Clelland was awarded the Alice T. Schafer Prize from the Association for Women in Mathematics in 1991. She is also the 2018 winner of the Burton W. Jones Distinguished Teaching Award, from the Rocky Mountain Section of the Mathematical Association of America. |
https://en.wikipedia.org/wiki/Multiple%20drug%20resistance | Multiple drug resistance (MDR), multidrug resistance or multiresistance is antimicrobial resistance shown by a species of microorganism to at least one antimicrobial drug in three or more antimicrobial categories. Antimicrobial categories are classifications of antimicrobial agents based on their mode of action and specific to target organisms. The MDR types most threatening to public health are MDR bacteria that resist multiple antibiotics; other types include MDR viruses, fungi, and parasites (resistant to multiple antiviral, antifungal, and antiparasitic drugs of a wide chemical variety).
Recognizing different degrees of MDR in bacteria, the terms extensively drug-resistant (XDR) and pandrug-resistant (PDR) have been introduced. Extensive drug resistance (XDR) is the non-susceptibility of a bacterial species to all antimicrobial agents except those in two or fewer antimicrobial categories. Going beyond XDR, pandrug resistance (PDR) is the non-susceptibility of bacteria to all antimicrobial agents in all antimicrobial categories. These definitions were published in 2011 in the journal Clinical Microbiology and Infection and are openly accessible.
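These threshold definitions can be made concrete in a few lines. The sketch below is purely illustrative (the function name and its input format are our own, and it is not a clinical tool); it classifies an isolate from a map of antimicrobial category to whether the isolate is non-susceptible to at least one agent in that category.

```python
def resistance_class(profile):
    """Classify per the 2011 consensus definitions.
    `profile` maps antimicrobial category -> True if the isolate is
    non-susceptible to at least one agent in that category."""
    total = len(profile)
    resistant = sum(1 for r in profile.values() if r)
    if resistant == total:
        return "PDR"      # non-susceptible in all categories
    if resistant >= total - 2:
        return "XDR"      # remains susceptible in at most two categories
    if resistant >= 3:
        return "MDR"      # non-susceptible in three or more categories
    return "not MDR"
```

Note that in practice the category list is fixed per organism group (for example, roughly 17 categories for Enterobacteriaceae), so the thresholds only make sense against that full list.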
Common multidrug-resistant organisms (MDROs)
Common multidrug-resistant organisms are usually bacteria:
Vancomycin-Resistant Enterococci (VRE)
Methicillin-resistant Staphylococcus aureus (MRSA)
Extended-spectrum β-lactamase (ESBLs) producing Gram-negative bacteria
Klebsiella pneumoniae carbapenemase (KPC) producing Gram-negatives
Multidrug-resistant Gram-negative rods (MDR-GNR, or MDRGN bacteria), such as Enterobacter species, E. coli, Klebsiella pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa
Multi-drug-resistant tuberculosis
Overlapping with MDRGN, a group of Gram-positive and Gram-negative bacteria of particular recent importance has been dubbed the ESKAPE group (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species).
Bacterial resistance |
https://en.wikipedia.org/wiki/Dovecot%20%28software%29 | Dovecot is an open-source IMAP and POP3 server for Unix-like operating systems, written primarily with security in mind. Timo Sirainen originated Dovecot and first released it in July 2002. Dovecot developers primarily aim to produce a lightweight, fast and easy-to-set-up open-source email server.
The primary purpose of Dovecot is to act as a mail storage server. The mail is delivered to the server using some mail delivery agent (MDA) and is stored for later access with an email client (mail user agent, or MUA). Dovecot can also act as a mail proxy server, forwarding connections to another mail server, or as a lightweight MUA that retrieves and manipulates mail on a remote server, e.g. for mail migration.
According to the Open Email Survey, as of 2020, Dovecot has an installed base of at least 2.9 million IMAP servers and a global market share of 76.9% of all IMAP servers. The results of the same survey in 2019 gave figures of 2.6 million and 76.2%, respectively.
Features
Dovecot can work with standard mbox, Maildir, and its own native high-performance dbox formats. It is fully compatible with UW IMAP and Courier IMAP servers’ implementation of them, as well as mail clients accessing the mailboxes directly.
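For instance, the on-disk mailbox format is chosen through Dovecot's mail_location setting. The excerpt below is an illustrative sketch; the paths are examples, not required defaults:

```
# conf.d/10-mail.conf (excerpt)
# Maildir, one file per message under the user's home:
mail_location = maildir:~/Maildir

# or classic mbox, with the inbox in the system spool:
#mail_location = mbox:~/mail:INBOX=/var/mail/%u
```

Switching formats changes only this setting from the administrator's point of view; clients continue to see the same IMAP mailboxes.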
Dovecot also includes a mail delivery agent (called Local delivery agent in Dovecot's documentation) and an LMTP server, with the optional Sieve filtering support.
Dovecot supports a variety of authentication schemes for IMAP, POP and message submission agent (MSA) access, including CRAM-MD5 and the more secure DIGEST-MD5.
With version 2.2, some new features were added to Dovecot, e.g. additional IMAP command extensions, a rewritten and optimized dsync, and support for per-user flags in shared mailboxes.
Version 2.3 adds a message submission agent, Lua scripting for authentication, and some other improvements.
Apple Inc. includes Dovecot for email services since Mac OS X Server 10.6 Snow Leopard.
In 2017, Mozilla, via the Mozilla Open Sourc |
https://en.wikipedia.org/wiki/Russian%20copulation | In cryptography, Russian copulation is a method of rearranging plaintext before encryption so as to conceal stereotyped headers, salutations, introductions, endings, signatures, etc. This obscures clues for a cryptanalyst, and can be used to increase cryptanalytic difficulty in naive cryptographic schemes (however, most modern schemes contain more rigorous defences; see ciphertext indistinguishability). This is of course desirable for those sending messages and wishing them to remain confidential. Padding is another technique for obscuring such clues.
The technique is to break the starting plaintext message into two parts and then to invert the order of the parts (similar to circular shift). This puts all endings and beginnings (presumably the location of most boilerplate phrases) "somewhere in the middle" of the version of the plaintext that is actually encrypted. For some messages, mostly those not in a human language (e.g., images or tabular data), the decrypted version of the plaintext will present problems when reversing the inversion. For messages expressed in ordinary language, there is sufficient redundancy that the inversion can almost always be reversed by a human immediately on inspection.
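The transformation described above can be sketched in a few lines (the function name and the choice of split point are ours; historically the split point would be chosen per message):

```python
def russian_copulate(plaintext, split_at=None):
    """Split the message at a point and swap the two halves, so that
    stereotyped openings and endings land mid-message before encryption."""
    if split_at is None:
        split_at = len(plaintext) // 2
    return plaintext[split_at:] + plaintext[:split_at]
```

Reversal is the same operation with the complementary split point; for natural-language messages a human can usually also find the seam by inspection, which is why the split point need not be transmitted.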
The English phrase suggests that it originally came from an observation about Russian cryptographic practice. However, the technique is generally useful and neither was, nor is, limited to use by Russians. |
https://en.wikipedia.org/wiki/Low-temperature%20polycrystalline%20silicon | Low-temperature polycrystalline silicon (LTPS) is polycrystalline silicon that has been synthesized at relatively low temperatures (~650 °C and lower) compared to in traditional methods (above 900 °C). LTPS is important for display industries, since the use of large glass panels prohibits exposure to deformative high temperatures. More specifically, the use of polycrystalline silicon in thin-film transistors (LTPS-TFT) has high potential for large-scale production of electronic devices like flat panel LCD displays or image sensors.
Development of polycrystalline silicon
Polycrystalline silicon (p-Si) is a pure and conductive form of the element composed of many crystallites, or grains of highly ordered crystal lattice. In 1984, studies showed that amorphous silicon (a-Si) is an excellent precursor for forming p-Si films with stable structures and low surface roughness. Silicon film is synthesized by low-pressure chemical vapor deposition (LPCVD) to minimize surface roughness. First, amorphous silicon is deposited at 560–640 °C. Then it is thermally annealed (recrystallized) at 950–1000 °C. Starting with the amorphous film, rather than directly depositing crystals, produces a product with a superior structure and a desired smoothness. In 1988, researchers discovered that further lowering temperature during annealing, together with advanced plasma-enhanced chemical vapor deposition (PECVD), could facilitate even higher degrees of conductivity. These techniques have profoundly impacted the microelectronics, photovoltaic, and display enhancement industries.
Use in liquid-crystal display
Amorphous silicon TFTs have been widely used in liquid-crystal display (LCD) flat panels because they can be assembled into complex high-current driver circuits. Amorphous Si-TFT electrodes drive the alignment of crystals in LCDs. The evolution to LTPS-TFTs can have many benefits such as higher device resolution, lower synthesis temperature, and reduced price of essential substrates. |
https://en.wikipedia.org/wiki/Vertical%20handover | Vertical handover or vertical handoff refers to a network node changing the type of connectivity it uses to access a supporting infrastructure, usually to support node mobility. For example, a suitably equipped laptop might be able to use both high-speed wireless LAN and cellular technology for Internet access. Wireless LAN connections generally provide higher speeds, while cellular technologies generally provide more ubiquitous coverage. Thus the laptop user might want to use a wireless LAN connection whenever one is available and to revert to a cellular connection when the wireless LAN is unavailable. Vertical handovers refer to the automatic transition from one technology to another in order to maintain communication. This is different from a horizontal handover between different wireless access points that use the same technology.
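The preference logic described above ("use WLAN when available, otherwise cellular") can be sketched as a tiny decision rule. This is a toy policy of our own; real handover decisions also weigh hysteresis, load, and cost, and the signal threshold here is illustrative:

```python
def choose_interface(wlan_available, wlan_signal_dbm=-100, threshold_dbm=-75):
    """Prefer WLAN when present and strong enough; otherwise fall back
    to the (more ubiquitous) cellular connection."""
    if wlan_available and wlan_signal_dbm >= threshold_dbm:
        return "wlan"
    return "cellular"
```

A real implementation would re-evaluate this rule periodically and trigger a vertical handover whenever the chosen interface changes, taking care not to flap between interfaces near the threshold.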
Vertical handoffs between WLAN and UMTS (WCDMA) have attracted a great deal of attention in all the research areas of the 4G wireless network, due to the benefit of utilizing the higher bandwidth and lower cost of WLAN as well as better mobility support and larger coverage of UMTS. Vertical handovers among a range of wired and wireless access technologies including WiMAX can be achieved using Media independent handover which is standardized as IEEE 802.21.
Related issues
Dual mode card
To support vertical handover, a mobile terminal needs to have a dual mode card, for example one that can work under both WLAN and UMTS frequency bands and modulation schemes.
Interworking architecture
For the vertical handover between UMTS and WLAN, there are two main interworking architectures: tight coupling and loose coupling.
The tight coupling scheme, which 3GPP adopted, introduces two more elements: the WAG (Wireless Access Gateway) and the PDG (Packet Data Gateway). Data transferred from a WLAN AP to a correspondent node on the Internet must therefore go through the UMTS core network.
Loose coupling is more used when the WLAN is not operated by cellular o |
https://en.wikipedia.org/wiki/Double%20dispatch | In software engineering, double dispatch is a special form of multiple dispatch, and a mechanism that dispatches a function call to different concrete functions depending on the runtime types of two objects involved in the call. In most object-oriented systems, the concrete function that is called from a function call in the code depends on the dynamic type of a single object and therefore they are known as single dispatch calls, or simply virtual function calls.
Dan Ingalls first described how to use double dispatching in Smalltalk, calling it multiple polymorphism.
Overview
The general problem addressed is how to dispatch a message to different methods depending not only on the receiver but also on the arguments.
To that end, systems like CLOS implement multiple dispatch. Double dispatch is another solution that gradually reduces the polymorphism on systems that do not support multiple dispatch.
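In a language with only single dispatch, double dispatch is simulated with a pair of virtual calls: the first call dispatches on the receiver, which then calls back on the argument through a method whose name encodes the receiver's own type. A minimal sketch (class and method names are illustrative):

```python
class Asteroid:
    def collide_with(self, other):
        # first dispatch resolved on the type of self; forward to a
        # method whose name encodes that type for the second dispatch
        return other.collide_with_asteroid(self)

    def collide_with_asteroid(self, other):
        return "asteroid hits asteroid"

    def collide_with_spaceship(self, other):
        return "spaceship hits asteroid"

class Spaceship:
    def collide_with(self, other):
        return other.collide_with_spaceship(self)

    def collide_with_asteroid(self, other):
        return "asteroid hits spaceship"

    def collide_with_spaceship(self, other):
        return "spaceship hits spaceship"
```

Calling a.collide_with(b) thus selects a concrete handler based on the runtime types of both a and b, without any type tests in the caller.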
Use cases
Double dispatch is useful in situations where the choice of computation depends on the runtime types of its arguments. For example, a programmer could use double dispatch in the following situations:
Sorting a mixed set of objects: algorithms require that a list of objects be sorted into some canonical order. Deciding if one element comes before another element requires knowledge of both types and possibly some subset of the fields.
Adaptive collision algorithms usually require that collisions between different objects be handled in different ways. A typical example is in a game environment where the collision between a spaceship and an asteroid is computed differently from the collision between a spaceship and a spacestation.
Painting algorithms that require the intersection points of overlapping sprites to be rendered in a different manner.
Personnel management systems may dispatch different types of jobs to different personnel. A schedule algorithm that is given a person object typed as an accountant and a job object typed as engineering rejects the sc |
https://en.wikipedia.org/wiki/Digital%20comparator | A digital comparator or magnitude comparator is a hardware electronic device that takes two numbers as input in binary form and determines whether one number is greater than, less than or equal to the other number. Comparators are used in central processing units (CPUs) and microcontrollers (MCUs). Examples of digital comparator include the CMOS 4063 and 4585 and the TTL 7485 and 74682.
An XNOR gate is a basic comparator, because its output is "1" only if its two input bits are equal.
The analog equivalent of the digital comparator is the voltage comparator. Many microcontrollers have analog comparators on some of their inputs that can be read or trigger an interrupt.
Implementation
Consider two 4-bit binary numbers A and B such that A = A3A2A1A0 and B = B3B2B1B0.
Here each subscript represents one of the digits in the numbers.
Equality
The binary numbers A and B will be equal if all the pairs of significant digits of both numbers are equal, i.e.,
A3 = B3, A2 = B2, A1 = B1, and A0 = B0.
Since the numbers are binary, the digits are either 0 or 1, and the boolean function for equality of any two digits Ai and Bi can be expressed as xi = AiBi + Ai′Bi′,
which is exactly the XNOR function, so in digital electronics it can be implemented with an XNOR gate.
xi is 1 only if Ai and Bi are equal.
For the equality of A and B, all variables xi (for i = 0, 1, 2, 3) must be 1.
So the equality condition of A and B can be implemented using the AND operation as (A=B) = x3x2x1x0.
The binary variable (A=B) is 1 only if all pairs of digits of the two numbers are equal.
Inequality
In order to manually determine the greater of two binary numbers, we inspect the relative magnitudes of pairs of significant digits, starting from the most significant bit, gradually proceeding towards lower significant bits until an inequality is found. When an inequality is found, if the corresponding bit of A is 1 and that of B is 0 then we conclude that A>B.
This sequential comparison can be expressed logically as:
(A>B) = A3B3′ + x3A2B2′ + x3x2A1B1′ + x3x2x1A0B0′
(A<B) = A3′B3 + x3A2′B2 + x3x2A1′B1 + x3x2x1A0′B0
(A>B) and (A < B) are output binary variables, which are equal to 1 when A>B or A<B respectively.
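The same equations can be checked in software. The sketch below mirrors the hardware logic bit for bit using Python's bitwise operators (the function name and the tuple input format are our own):

```python
def compare4(a, b):
    """Compare two 4-bit numbers given as tuples of bits, most significant
    bit first. Returns (A>B, A=B, A<B) as 0/1 values."""
    # per-bit equality x_i = XNOR(A_i, B_i)
    x = [1 - (ai ^ bi) for ai, bi in zip(a, b)]
    eq = x[0] & x[1] & x[2] & x[3]
    # A > B when the most significant differing bit has A_i=1, B_i=0
    gt = ((a[0] & ~b[0]) | (x[0] & a[1] & ~b[1])
          | (x[0] & x[1] & a[2] & ~b[2])
          | (x[0] & x[1] & x[2] & a[3] & ~b[3])) & 1
    # A < B is the symmetric case
    lt = ((~a[0] & b[0]) | (x[0] & ~a[1] & b[1])
          | (x[0] & x[1] & ~a[2] & b[2])
          | (x[0] & x[1] & x[2] & ~a[3] & b[3])) & 1
    return gt, eq, lt
```

Exactly one of the three outputs is 1 for any input pair, matching the behaviour of a 4-bit magnitude comparator such as the 7485.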
See also
List of LM-series integrated c |
https://en.wikipedia.org/wiki/Autoimmune%20heart%20disease | Autoimmune heart diseases are the effects of the body's own immune defense system mistaking cardiac antigens as foreign and attacking them leading to inflammation of the heart as a whole, or in parts. The commonest form of autoimmune heart disease is rheumatic heart disease or rheumatic fever.
Cause
Aetiologically, these are most commonly seen in children with a history of sore throat caused by a streptococcal infection, similar to post-streptococcal glomerulonephritis. Here, the antibacterial antibodies cross-react with heart antigens, causing inflammation.
Inflammatory damage leads to the following:
Pericarditis: Here the pericardium gets inflamed. Acutely, it can cause pericardial effusion leading to cardiac tamponade and death. After healing, there may be fibrosis and adhesion of the pericardium with the heart leading to constriction of the heart and reduced cardiac function.
Myocarditis: Here the muscle bulk of the heart gets inflamed. Inflamed muscles have reduced functional capacity. This may be fatal, if left untreated as is in a case of pancarditis. On healing, there will be fibrosis and reduced functional capacity.
Endocarditis: Here the inner lining of the heart is inflamed, including the heart valves. This may cause a valve prolapse, adhesion of the adjacent cusps of these valves and occlusion of the flow tracts of blood through the heart causing diseases called valve stenosis.
Mechanism
These are the typical mechanisms of autoimmunity. Autoantibodies or auto-toxic T-lymphocyte mediated tissue destruction. The process is aided by neutrophils, the complement system, tumor necrosis factor alpha, etc.
Diagnosis
Types
These depend on the amount of inflammation. These are covered in their relevant articles.
Acute: Heart failure; pericardial effusion; etc.
Chronic: Valve diseases as noted above; Reduced cardiac output; Exercise intolerance.
Treatment
Intensive cardiac care and immunosuppressives including corticosteroids are helpful |
https://en.wikipedia.org/wiki/Pronucleus | A pronucleus (plural: pronuclei) denotes the nucleus found in either a sperm or egg cell during the process of fertilization. The sperm cell undergoes a transformation into a pronucleus after entering the egg cell but prior to the fusion of the genetic material of both the sperm and egg. In contrast, the egg cell possesses a pronucleus once it becomes haploid, not upon the arrival of the sperm cell. Haploid cells, such as sperm and egg cells in humans, carry half the number of chromosomes present in somatic cells, with 23 chromosomes compared to the 46 found in somatic cells. It is noteworthy that the male and female pronuclei do not physically merge, although their genetic material does. Instead, their membranes dissolve, eliminating any barriers between the male and female chromosomes, facilitating the combination of their chromosomes into a single diploid nucleus in the resulting embryo, which contains a complete set of 46 chromosomes.
The presence of two pronuclei serves as the initial indication of successful fertilization, often observed around 18 hours after insemination, or intracytoplasmic sperm injection (ICSI) during in vitro fertilization. At this stage, the zygote is termed a two-pronuclear zygote (2PN). Two-pronuclear zygotes transitioning through 1PN or 3PN states tend to yield poorer-quality embryos compared to those maintaining 2PN status throughout development, and this distinction may hold significance in the selection of embryos during in vitro fertilization (IVF) procedures.
History
The pronucleus was discovered in the 1870s, observed microscopically using staining techniques combined with microscopes with improved magnification. It was originally found during the first studies on meiosis. Edouard Van Beneden published a paper in 1875 in which he first mentioned the pronucleus, based on studies of the eggs of rabbits and bats. He stated that the two pronuclei form together in the center of the cell to form the embryonic nucleus. Van Beneden also found t
https://en.wikipedia.org/wiki/Musqu%C3%A9 | Musqué is a French term applied to certain varieties or clones of grapes used for making wine. The term means both perfumed ("musky") and Muscat-like, and indicates that the variety or clone is highly aromatic. The term musqué is usually suffixed to the name of certain grape varieties to indicate a clone with musqué properties, e.g. "Chardonnay musqué" or "Sauvignon blanc musqué". Such clones have arisen through mutation of a regular ("non-musqué") clone of the variety, and such mutations have been recorded for several different grape varieties.
The most well-known musqué grape is Gewürztraminer, which is a musqué mutation of a red-skinned Traminer, which is also known as Savagnin rose in France. Since the musqué Gewürztraminer has largely replaced non-musqué Traminer, it is generally considered a grape variety in its own right rather than a clone of Traminer or Savagnin.
The issue of whether the musqué mutations, with their distinct aromatic properties, should be classified as varieties in their own right and be allowed to be used for varietal wine labelling has created bureaucratic problems for some winemakers.
https://en.wikipedia.org/wiki/Comparison%20of%20DVR%20software%20packages | This is a comparison of digital video recorder (DVR), also known as personal video recorder (PVR), software packages. Note: this may be considered a comparison of DVB software, as not all listed packages have recording capabilities.
General information
Basic general information for popular DVR software packages - not all actually record.
Features
Information about what common and prominent DVR features are implemented natively (without third-party add-ons unless stated otherwise):
Video format support
Information about what video codecs are implemented natively (without third-party add-ons) in the PVRs.
Network support
Each feature is in the context of computer-to-computer interaction.
All features must be available after the default install otherwise the feature needs a footnote.
1 Yes with registry change
2 Yes with retail third-party plugin
3 Yes with free supported third-party plugin
4 Yes with free unsupported third-party plugin
5 Yes with free third-party software Web Guide 4
6 Yes with add-on software called DVBLink Server
7 Yes with using symlinks, or just adding folders in settings
TV tuner hardware
TV gateway network tuner TV servers
DVRs require TV tuner cards to receive signals. Many DVRs, as seen above, can use multiple tuners.
HDHomeRun has CableCARD models (HDHomeRun Prime) and over-the-air models (HDHomeRun Connect) that are networked TV tuners.
See also
List of free television software
Comparison of video player software
Home cinema
Home theater PC (HTPC)
Digital video recorder
Hard disk recorder
DVD recorder
Quiet PC
Media server
Notes
External links
FLOSS Media Centers Comparison Chart
PVR software packages
Television technology
Television time shifting technology |
https://en.wikipedia.org/wiki/Trypsinization | Trypsinization is the process of cell dissociation using trypsin, a proteolytic enzyme which breaks down proteins, to dissociate adherent cells from the vessel in which they are being cultured. When added to cell culture, trypsin breaks down the proteins that enable the cells to adhere to the vessel. Trypsinization is often used to pass cells to a new vessel. When the trypsinization process is complete the cells will be in suspension and appear rounded.
For experimental purposes, cells are often cultivated in containers that take the form of plastic flasks or plates. In such flasks, cells are provided with a growth medium comprising the essential nutrients required for proliferation, and the cells adhere to the container and each other as they grow.
This process of cell culture or tissue culture requires a method to dissociate the cells from the container and each other. Trypsin, an enzyme commonly found in the digestive tract, can be used to "digest" the proteins that facilitate adhesion to the container and between cells.
Once cells have detached from their container it is necessary to deactivate the trypsin, unless the trypsin is synthetic, as cell surface proteins will also be cleaved over time and this will affect cell functioning. Serum can be used to inactivate trypsin, as it contains protease inhibitors. Because of the presence of these inhibitors, the serum must be removed before treatment of a growth vessel with trypsin and must not be added again to the growth vessel until cells have detached from their growth surface - this detachment can be confirmed by visual observation using a microscope.
Trypsinization is often used to permit passage of adherent cells to a new container, observation for experimentation, or reduction of the degree of confluency in a culture flask through the removal of a percentage of the cells. |
https://en.wikipedia.org/wiki/Cyclin%20D/Cdk4 | The Cyclin D/Cdk4 complex is a multi-protein structure consisting of the proteins Cyclin D and cyclin-dependent kinase 4, or Cdk4, a serine-threonine kinase. This complex is one of many cyclin/cyclin-dependent kinase complexes that are the "hearts of the cell-cycle control system" and govern the cell cycle and its progression. As its name would suggest, the cyclin-dependent kinase is only active and able to phosphorylate its substrates when it is bound by the corresponding cyclin. The Cyclin D/Cdk4 complex is integral for the progression of the cell from the Growth 1 phase to the Synthesis phase of the cell cycle, for the Start or G1/S checkpoint.
Basic Mechanism
Under non-dividing conditions (when the cell is in the G0 phase of the cell cycle), Retinoblastoma protein (Rb) is bound with the E2F transcription factor. Once Cdk4 is activated and is bound with Cyclin D, the Cyclin D/Cdk4 complex phosphorylates Retinoblastoma protein (pRb). Once the Retinoblastoma protein has been phosphorylated, E2F is released. The released E2F is then free to act as a transcription factor and it subsequently binds to DNA promoter regions and activates the expression of proteins required in the next stages of the cell cycle and in DNA replication. Specifically, E2F helps to activate Cyclin E and Cyclin A, which are constituents of other Cdk/Cyclin complexes and are involved in the DNA replication process and other downstream mitotic processes.
Regulation
There are multiple regulation points within this signaling pathway. First and foremost, under non-dividing conditions multiple proteins can inhibit the Cyclin D/Cdk4 complex by binding Cdk4 and inhibiting its association with Cyclin D. Primarily, this is accomplished by p27 but it can also be done by p16 and p21. However, this pathway is stimulated by the upstream binding of growth factors (GF), either from within the cell itself or from neighboring cells. Stimulation by growth factors activates any of a number of receptor tyro |
https://en.wikipedia.org/wiki/LiveStation | Livestation was a platform for distributing live television and radio broadcasts over a data network. It was originally developed by Skinkers Ltd. and later became an independent company called Livestation Ltd. The service was originally based on peer-to-peer technology acquired from Microsoft Research. Between mid-June and mid-July 2013, Livestation was unavailable to some subscribers due to technical issues.
In late 2016, the service closed down without notice.
Overview
Livestation aggregated international news channels online and offered them in a number of ways:
Free to watch: a number of channels could be watched for free on the Livestation website or on their desktop player, a freely downloadable video application that presented all the channels through one interface.
Premium service: some of the free channels were also available on a subscription basis, in both higher quality (800 kbit/s) and lower quality (256 kbit/s), delivered via an international content distribution network for higher reliability.
Mobile: Livestation launched BBC World News on the iPhone in 16 European countries and Al Jazeera English globally. The apps were available in the iPhone App Store and streamed the live TV channel 24/7 over both Wi-Fi and 3G connections.
Livestation broadcast streams encoded in the VC-1 format (by this point Livestation was no longer using peer-to-peer distribution). Playback controls were overlaid on top of the video stream. Unlike services such as Joost, which offered video-on-demand channels, Livestation streamed live broadcasts.
Livestation provided a website, mobile website and native applications for iOS, Android, Nokia and Blackberry handsets. Early models of Samsung TV were also supported. They also provided desktop software available for Windows, Mac (including PowerPC) and Linux. The cross-platform compatibility of the desktop software was facilitated by the Qt framework. Social networking features were later added that include the ability to chat with other viewers and also find out what other |
https://en.wikipedia.org/wiki/Perkinsida | Perkinsida is an order of alveolates in the phylum Perkinsozoa. |
https://en.wikipedia.org/wiki/Balking%20pattern | The balking pattern is a software design pattern that only executes an action on an object when the object is in a particular state. For example, if an object reads ZIP files and a calling method invokes a get method on the object when the ZIP file is not open, the object would "balk" at the request. In the Java programming language, for example, an IllegalStateException might be thrown under these circumstances.
There are some specialists in this field who consider balking more of an anti-pattern than a design pattern. If an object cannot support its API, it should either limit the API so that the offending call is not available, or so that the call can be made without limitation. It should:
Be created in a "sane state";
not make itself available until it is in a sane state;
become a facade and answer back an object that is in a sane state.
Usage
Objects that use this pattern are generally only in a state that is prone to balking temporarily but for an unknown amount of time. If objects are to remain in a state which is prone to balking for a known, finite period of time, then the guarded suspension pattern may be preferred.
Implementation
Below is a general, simple example for an implementation of the balking pattern. As demonstrated by the definition above, notice how the "synchronized" line is utilized. If there are multiple calls to the job method, only one will proceed while the other calls will return with nothing. Another thing to note is the jobCompleted() method. The reason it is synchronized is because the only way to guarantee another thread will see a change to a field is to synchronize all access to it. Actually, since it is a boolean variable, it could be left not explicitly synchronized, only declared volatile - to guarantee that the other thread will not read an obsolete cached value.
public class Example {
    private boolean jobInProgress = false;

    public void job() {
        synchronized (this) {
            if (jobInProgress) { return; } // balk: a job is already running
            jobInProgress = true;
        }
        // ... code that actually performs the job goes here ...
    }

    synchronized void jobCompleted() { jobInProgress = false; }
}
https://en.wikipedia.org/wiki/Barrelled%20space | In functional analysis and related areas of mathematics, a barrelled space (also written barreled space) is a topological vector space (TVS) for which every barrelled set in the space is a neighbourhood for the zero vector.
A barrelled set or a barrel in a topological vector space is a set that is convex, balanced, absorbing, and closed.
Barrelled spaces are studied because a form of the Banach–Steinhaus theorem still holds for them.
Barrelled spaces were introduced by Bourbaki.
Barrels
A convex and balanced subset of a real or complex vector space is called a disk and it is said to be disked, absolutely convex, or convex balanced.
A barrel or a barrelled set in a topological vector space (TVS) is a subset that is a closed absorbing disk; that is, a barrel is a convex, balanced, closed, and absorbing subset.
Every barrel must contain the origin. If dim X ≥ 2 and if S is any subset of X, then S is a convex, balanced, and absorbing set of X if and only if this is all true of S ∩ Y for every 2-dimensional vector subspace Y; thus if dim X > 2, then the requirement that a barrel be a closed subset of X is the only defining property that does not depend on 2-(or lower-)dimensional vector subspaces of X.
If is any TVS then every closed convex and balanced neighborhood of the origin is necessarily a barrel in (because every neighborhood of the origin is necessarily an absorbing subset). In fact, every locally convex topological vector space has a neighborhood basis at its origin consisting entirely of barrels. However, in general, there exist barrels that are not neighborhoods of the origin; "barrelled spaces" are exactly those TVSs in which every barrel is necessarily a neighborhood of the origin. Every finite dimensional topological vector space is a barrelled space so examples of barrels that are not neighborhoods of the origin can only be found in infinite dimensional spaces.
Examples of barrels and non-barrels
The closure of any convex, balanced, and absorbing subset is a barrel. This is because the closure of any convex (respectively, any balanced, any |
https://en.wikipedia.org/wiki/A2%20%28operating%20system%29 | A2 (formerly named Active Object System (AOS), and then Bluebottle) is a modular, object-oriented operating system with unconventional features including automatic garbage-collected memory management, and a zooming user interface. It was developed originally at ETH Zurich in 2002. It is free and open-source software under a BSD-like license.
History
A2 is the next generation of Native Oberon, the x86 PC version of Niklaus Wirth's operating system Oberon. It is small, fast, supports multiprocessor computers, and provides soft real-time operation. It is written entirely in an upward-compatible dialect of the programming language Oberon named Active Oberon. Both languages are members of the Pascal family, along with Modula-2.
A2's design allows developing efficient systems based on active objects which run directly on hardware, with no mediating interpreter or virtual machine. An active object combines the traditional object-oriented programming (OOP) model of an object with a thread that executes in the context of that object. In the Active Oberon implementation, an active object may include activity of its own, and of its ancestor objects.
Another difference between A2 and more mainstream operating systems is its very minimalist design, completely implemented in a type-safe language with automatic memory management, combined with a powerful and flexible set of primitives (at the level of the programming language and runtime system) for synchronising access to the internal properties of objects in competing execution contexts.
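The active-object combination described above — private state paired with a thread that executes in the object's context, with access to the state serialized — can be sketched in Python. This is a hedged illustration of the pattern only; A2 itself is written in Active Oberon, and all names here (`ActiveCounter`, etc.) are invented.

```python
# Sketch of the active-object pattern: an object whose state is touched
# only by its own dedicated thread, with requests delivered via a queue.
import threading
import queue

class ActiveCounter:
    """Private state plus a thread executing in this object's context."""
    def __init__(self):
        self._value = 0
        self._requests = queue.Queue()
        self._done = threading.Event()
        # The object's own activity: a thread bound to this instance.
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            op = self._requests.get()
            if op is None:           # shutdown sentinel
                break
            self._value += op        # state mutated only by this thread
        self._done.set()

    def add(self, n):
        self._requests.put(n)        # asynchronous request, no shared locks

    def stop(self):
        self._requests.put(None)
        self._done.wait()
        return self._value

counter = ActiveCounter()
for i in range(10):
    counter.add(i)
total = counter.stop()               # 0 + 1 + ... + 9
```

Because only the object's own thread mutates `_value`, callers never need an explicit lock — the queue serializes all access, which is the essence of what A2's language-level primitives provide.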
Above the kernel layer, A2 provides a flexible set of modules providing unified abstractions for devices and services, such as file systems, user interfaces, computer network connections, media codecs, etc.
User interface
Bluebottle replaced the older Oberon OS's unique text-based user interface (TUI) with a zooming user interface (ZUI), which is significantly more like a conventional graphical user interfac |
https://en.wikipedia.org/wiki/Smut%20%28fungus%29 | The smuts are multicellular fungi characterized by their large numbers of teliospores. The smuts get their name from a Germanic word for dirt because of their dark, thick-walled, and dust-like teliospores. They are mostly Ustilaginomycetes (phylum Basidiomycota) and comprise seven of the 15 orders of the subphylum. Most described smuts belong to two orders, Ustilaginales and Tilletiales. The smuts are normally grouped with the other basidiomycetes because of their commonalities concerning sexual reproduction.
Hosts
They can cause plant disease and can infect a broad range of hosts in several monocot and dicot plant families.
Smuts are cereal and crop pathogens that most notably affect members of the grass family (Poaceae) and sedges (Cyperaceae). Economically important hosts include maize, barley, wheat, oats, sugarcane, and forage grasses. They eventually hijack the plants' reproductive systems, forming galls which darken and burst, releasing fungal teliospores which infect other plants nearby. Before infection can occur, the smuts need to undergo a successful mating to form dikaryotic hyphae (two haploid cells fuse to form a dikaryon).
Wild rice smut
Ustilago esculenta is a species of fungus in the Ustilaginaceae, a family of smut fungi. It is in the same genus as the fungi that cause corn smut, loose smut of barley, false loose smut, covered smut of barley, loose smut of oats, and other grass diseases. This species is pathogenic as well, attacking Manchurian wild rice (Zizania latifolia), also known as Manchurian ricegrass, Asian wild rice, and wateroat. This grass is its only known host.
When the fungus invades the host plant, it causes the plant to hypertrophy, its cells increasing in size and number. The fungus destroys the flowering structures of the plant, so it does not set seed. The crop is propagated asexually, by rhizome. New sprouts are infected by spores in the environment, which is generally a paddy. The fungus can also be transmitted directly in the r |
https://en.wikipedia.org/wiki/Mergelyan%27s%20theorem | Mergelyan's theorem is a result from approximation by polynomials in complex analysis proved by the Armenian mathematician Sergei Mergelyan in 1951.
Statement
Let K be a compact subset of the complex plane C such that C∖K is connected. Then, every continuous function f : K → C, such that the restriction of f to int(K) is holomorphic, can be approximated uniformly on K by polynomials. Here, int(K) denotes the interior of K.
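In quantifier form, the statement above reads:

```latex
% Mergelyan's theorem: K \subset \mathbb{C} compact, \mathbb{C} \setminus K
% connected, f continuous on K and holomorphic on int(K). Then:
\forall \varepsilon > 0 \;\; \exists\, p \in \mathbb{C}[z] \;:\;
    \sup_{z \in K} |f(z) - p(z)| < \varepsilon
```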
Mergelyan's theorem also holds for open Riemann surfaces:
If K is a compact set without holes in an open Riemann surface X, then every function that is continuous on K and holomorphic on the interior of K can be approximated uniformly on K by functions that are holomorphic on all of X.
Mergelyan's theorem does not always hold in higher dimensions (spaces of several complex variables), but it has some consequences.
History
Mergelyan's theorem is a generalization of the Weierstrass approximation theorem and Runge's theorem.
In the case that C∖K is not connected, in the initial approximation problem the polynomials have to be replaced by rational functions. An important step of the solution of this further rational approximation problem was also suggested by Mergelyan in 1952. Further deep results on rational approximation are due to, in particular, A. G. Vitushkin.
Weierstrass and Runge's theorems were put forward in 1885, while Mergelyan's theorem dates from 1951. After Weierstrass and Runge, many mathematicians (in particular Walsh, Keldysh, Lavrentyev, Hartogs, and Rosenthal) had been working on the same problem. The method of the proof suggested by Mergelyan is constructive, and remains the only known constructive proof of the result.
See also
Arakelyan's theorem
Hartogs–Rosenthal theorem
Oka–Weil theorem |
https://en.wikipedia.org/wiki/Karen%20Rudie | Karen Gail Rudie (born 1963) is a Canadian control theorist and electrical engineer known for her work on the decentralized control of discrete event dynamic systems. She is a professor of electrical and computer engineering in Queen's University at Kingston.
Education and career
Rudie majored in mathematics and engineering as an undergraduate at Queen's University, specializing in control and communication; she graduated in 1985. She completed a Ph.D. at the University of Toronto in 1992; her dissertation, Decentralized Control of Discrete-Event Systems, was supervised by Walter Murray Wonham.
She returned to Queen's University as a faculty member in 1993, after postdoctoral research at the Institute for Mathematics and its Applications.
Recognition
In 2018, Rudie was named an IEEE Fellow, as a member of the IEEE Control Systems Society, "for contributions to the supervisory control theory of discrete event systems". |
https://en.wikipedia.org/wiki/Paul%20Davis%20%28programmer%29 | Paul Davis (formerly known as Paul Barton-Davis) is a British-American software developer best known for his work on audio software (JACK) for the Linux operating system, and for his role as one of the first two programmers at Amazon.com.
Davis grew up in the English Midlands and in London. After studying molecular biology and biophysics, he did post-graduate studies in computational biology at the Weizmann Institute of Science in Rehovot and EMBL in Heidelberg.
He immigrated to the U.S. in 1989. He lived in Seattle for seven years, where he worked for the Computer Science and Engineering Department at the University of Washington, and for several smaller software companies in Seattle. While in Seattle, he helped to get Amazon.com off the ground during the period 1994–1996, making critical contributions to Amazon's backend systems alongside Shel Kaphan, before moving to Philadelphia in 1996. In 2019 he moved with his wife to Galisteo, New Mexico.
He went on to fund the development of various audio software for Linux, including Ardour and the JACK Audio Connection Kit. He works full-time on free software.
He is also an ultra-marathon runner and touring cyclist. |
https://en.wikipedia.org/wiki/MStar | MStar Semiconductor, Inc. was a Taiwanese fabless semiconductor company specializing in mixed-mode integrated circuit technologies, based in Hsinchu Hsien. MStar made hardware for multimedia and wireless communications, in the form of display ICs and mixed-mode (i.e. combining analog and digital functions) ASIC/IPs, in addition to chipsets for GSM mobile handsets. MStar employed approximately 1,300 people in more than 10 branches worldwide. The company's revenue was around US$1,067 million in 2010; growth had been substantial, as revenue in 2005 was US$175 million. MStar was listed on the Taiwan Stock Exchange under the code 3697.
MStar was often referred to as "Little-M" or "Morning Star" in the Chinese-speaking community, as a counterpart to the bigger semiconductor company "Big-M", a.k.a. MediaTek.
MStar was a spin-off (2-1 stock split) from System General Technology in May 2002: the power IC product line stayed with System General Technology, while the employees with the display and RFID product lines transferred to the new spin-off. After the spin-off, System General Technology regretted the decision, and a 1-2 stock swap was carried out to return the two companies to their corresponding shareholders. The chairman and CEO of MStar was Wayne Liang (梁公偉), while Dr. Steve Yang (楊偉毅) was the executive vice president and co-founder.
In 2004, in a ruling by the International Trade Commission (ITC), MStar Semiconductor was found to have infringed a patent held by Genesis Microchip for a method to improve images on liquid-crystal-display (LCD) monitors and flat-screen TVs.
On October 14, 2020, MStar came under investigation by the US International Trade Commission for allegedly infringing patents held by DIVX LLC of San Diego, California, USA.
Merger with MediaTek
On 22 June 2012 MediaTek Inc. announced it purchased 212 million to 254 million shares of MStar (40% to 48% of its outstanding shares) for 0.794 MediaTek shares a |
https://en.wikipedia.org/wiki/Needham%E2%80%93Schroeder%20protocol | The Needham–Schroeder protocol is one of the two key transport protocols intended for use over an insecure network, both proposed by Roger Needham and Michael Schroeder. These are:
The Needham–Schroeder Symmetric Key Protocol, based on a symmetric encryption algorithm. It forms the basis for the Kerberos protocol. This protocol aims to establish a session key between two parties on a network, typically to protect further communication.
The Needham–Schroeder Public-Key Protocol, based on public-key cryptography. This protocol is intended to provide mutual authentication between two parties communicating on a network, but in its proposed form is insecure.
The symmetric protocol
Here, Alice (A) initiates the communication to Bob (B). S is a server trusted by both parties. In the communication:
A and B are the identities of Alice and Bob respectively
K_AS is a symmetric key known only to A and S
K_BS is a symmetric key known only to B and S
N_A and N_B are nonces generated by A and B respectively
K_AB is a symmetric, generated key, which will be the session key of the session between A and B
The protocol can be specified as follows in security protocol notation:
A → S : A, B, N_A
S → A : {N_A, K_AB, B, {K_AB, A}_K_BS}_K_AS
A → B : {K_AB, A}_K_BS
B → A : {N_B}_K_AB
A → B : {N_B − 1}_K_AB
Alice sends a message to the server identifying herself and Bob, telling the server she wants to communicate with Bob.
The server generates K_AB and sends back to Alice a copy encrypted under K_BS for Alice to forward to Bob, and also a copy for Alice. Since Alice may be requesting keys for several different people, the nonce N_A assures Alice that the message is fresh and that the server is replying to that particular message, and the inclusion of Bob's name tells Alice who she is to share this key with.
Alice forwards the key to Bob, who can decrypt it with the key he shares with the server, thus authenticating the data.
Bob sends Alice the nonce N_B encrypted under K_AB to show that he has the key.
Alice performs a simple operation on the nonce (decrementing it), re-encrypts it and sends it back, verifying that she is still alive and that she holds the key.
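The exchange described above can be walked through in a toy simulation. Here "encryption" is modelled as a tagged tuple that decryption checks against the key — an illustrative stand-in, not a real cryptographic API — and since the nonces are hex strings, Alice's "simple operation" is modelled as tagging the nonce rather than arithmetic decrement.

```python
# Toy walk-through of the Needham–Schroeder symmetric-key protocol.
# enc/dec simulate symmetric encryption; all key names are illustrative.
import secrets

def enc(key, payload):
    return ("enc", key, payload)

def dec(key, box):
    tag, k, payload = box
    assert tag == "enc" and k == key, "wrong key"
    return payload

K_AS = "key-alice-server"          # shared by Alice and the server
K_BS = "key-bob-server"            # shared by Bob and the server

# 1. A -> S : A, B, N_A
N_A = secrets.token_hex(8)
request = ("Alice", "Bob", N_A)

# 2. S -> A : {N_A, K_AB, B, {K_AB, A}_K_BS}_K_AS
K_AB = secrets.token_hex(8)                     # fresh session key
ticket = enc(K_BS, (K_AB, "Alice"))             # inner copy, for Bob
reply = enc(K_AS, (N_A, K_AB, "Bob", ticket))

# 3. A -> B : Alice checks her nonce and Bob's name, forwards the ticket.
n_a, session_key, peer, fwd = dec(K_AS, reply)
assert n_a == N_A and peer == "Bob"             # freshness + intended peer

# 4. B -> A : {N_B}_K_AB — Bob proves he holds the session key.
bob_key, initiator = dec(K_BS, fwd)
N_B = secrets.token_hex(8)
challenge = enc(bob_key, N_B)

# 5. A -> B : Alice transforms the nonce and returns it, proving liveness.
n_b = dec(session_key, challenge)
response = enc(session_key, ("decremented", n_b))
assert dec(bob_key, response) == ("decremented", N_B)
```

Note that the final two messages are exactly where the protocol's known weakness lives: nothing proves the ticket in step 3 is fresh, which is what the replay attack on the symmetric protocol exploits.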
Attacks on the pro |
https://en.wikipedia.org/wiki/Alliance%20for%20Aging%20Research | The Alliance for Aging Research is a non-profit organization based in Washington, D.C., that promotes medical research to improve the human experience of aging. Founded in 1986 by Daniel Perry, the Alliance also advocates and implements health education for consumers and health professionals.
The Alliance is governed by a board of directors. Susan Peschin is the chief executive officer and president.
Activities
Policy
Main policy areas include aging research funding, FDA funding, stem cell research funding, and improving health care for older Americans. The Alliance holds congressional briefings to increase awareness of diseases and conditions such as osteoporosis, Alzheimer's disease, and diabetes, as well as issues such as oral care.
Coalitions
The Alliance also serves on several coalitions and committees including Friends of the National Institute on Aging (NIA), the Alliance for a Stronger FDA, Coalition for the Advancement of Medical Research (CAMR), Partnership to Fight Chronic Disease, and the National Coalition on Mental Health and Aging.
White House Conference on Aging
The Alliance has been a part of the once-a-decade White House Conference on Aging, helping the President and Congress adopt resolutions to make aging research a national priority.
Task Force on Aging Research Funding
The Alliance collaborates with many patient and advocacy organizations on the annual Task Force on Aging Research Funding, a call to action to Congress and other national policymakers.
ACT-AD Coalition
ACT-AD (Accelerate Cure/ Treatments for Alzheimer's Disease) is a coalition of more than 50 organizations working to accelerate the development of treatments and a cure for Alzheimer's disease.
Programs
The Alliance produces materials focused on healthy aging and chronic disease, particularly for the Baby Boomer population. The Alliance has developed resources on the following topics: Age-related macular degeneration, Alzheimer's disease and caregivers, osteoporosis, heart disease, Parkinson's |
https://en.wikipedia.org/wiki/Alexander%20Lamb%20Cullen | Alexander Lamb Cullen, (30 April 1920 – 27 December 2013) was a British electrical engineer.
Career and research
Cullen served as the Head of Department of Electronic and Electrical Engineering at University College London where he held the Pender Chair, from 1967 to 1980. In 1988 he published his book Modern Radio Science and a biography of Harold Barlow.
Awards and honours
He was elected a Fellow of the Royal Society (FRS) in 1977 and awarded its Royal Medal in 1984 in recognition of his many distinguished contributions to microwave engineering, both theoretical and experimental, and in particular for research on microwave antennae. The same year he was awarded the Faraday Medal of the Institution of Electrical Engineers and delivered the Clifford Paterson Lecture to the Royal Society on "Microwaves: the art and the science". He was appointed an Officer of the Order of the British Empire (OBE) in 1960. |
https://en.wikipedia.org/wiki/Cameleon%20%28protein%29 | Cameleon is an engineered protein, based on a variant of green fluorescent protein, used to visualize calcium levels in living cells. It is a genetically encoded calcium sensor created by Roger Y. Tsien and coworkers. The name is a conflation of CaM (the common abbreviation of calmodulin) and chameleon, indicating that the sensor protein undergoes a conformational change and radiates at an altered wavelength upon calcium binding to the calmodulin element of the Cameleon. Cameleon was the first genetically encoded calcium sensor that could be used for ratiometric measurements and the first to be used in a transgenic animal to record activity in neurons and muscle cells. Cameleon and other genetically encoded calcium indicators (GECIs) have found many applications in neuroscience and other fields of biology. It was created by fusing BFP, calmodulin, the calmodulin-binding peptide M13, and EGFP.
Mechanism
The DNA encoding the cameleon fusion protein must be either stably or transiently introduced into the cell of interest. Protein made by the cell according to this DNA information then serves as a fluorescent indicator of calcium concentration. In the presence of calcium, Ca2+ binds to calmodulin, which enables calmodulin to wrap around the M13 domain. This brings the two GFP-variant proteins closer to each other, which increases FRET efficiency between them. |
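The distance dependence that makes this readout work is the Förster relation: transfer efficiency falls off with the sixth power of the donor–acceptor separation, so the calcium-driven conformational change, which pulls the two fluorophores together, sharply raises the transferred fraction. In standard notation (r is the donor–acceptor distance, R_0 the Förster radius at which transfer is 50% efficient):

```latex
% FRET efficiency as a function of donor–acceptor distance:
E = \frac{1}{1 + (r/R_0)^6}
% r \ll R_0 gives E \to 1 (efficient transfer); r \gg R_0 gives E \to 0.
```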