source | text |
|---|---|
https://en.wikipedia.org/wiki/Arachnoid%20cyst | Arachnoid cysts are collections of cerebrospinal fluid covered by arachnoidal cells and collagen that may develop between the surface of the brain and the cranial base or on the arachnoid membrane, one of the three meningeal layers that cover the brain and the spinal cord. Primary arachnoid cysts are a congenital disorder, whereas secondary arachnoid cysts are the result of head injury or trauma. Most cases of primary cysts begin during infancy; however, onset may be delayed until adolescence.
Signs and symptoms
Patients with arachnoid cysts may never show symptoms, even in some cases where the cyst is large. Therefore, while the presence of symptoms may provoke further clinical investigation, symptoms independent of further data cannot—and should not—be interpreted as evidence of a cyst's existence, size, location, or potential functional impact on the patient.
Symptoms vary by the size and location of the cyst(s), though small cysts usually have no symptoms and are discovered only incidentally. On the other hand, a number of symptoms may result from large cysts:
Cranial deformation or macrocephaly (enlargement of the head), particularly in children
Cysts in the suprasellar region in children have presented as bobbing and nodding of the head called bobble-head doll syndrome.
Cysts in the left middle cranial fossa have been associated with ADHD in a study on affected children.
Headaches. A patient experiencing a headache does not necessarily have an arachnoid cyst.
In a 2002 study involving 78 patients with a migraine or tension-type headache, CT scans showed abnormalities in over a third of the patients, though arachnoid cysts only accounted for 2.6% of patients in this study.
A study found 18% of patients with intracranial arachnoid cysts had non-specific headaches. The cyst was in the temporal location in 75% of these cases.
Seizures
Hydrocephalus (excessive accumulation of cerebrospinal fluid)
Increased intracranial pressure
Developmental delay
Behavioral changes |
https://en.wikipedia.org/wiki/Grassmann%20number | In mathematical physics, a Grassmann number, named after Hermann Grassmann (also called an anticommuting number or supernumber), is an element of the exterior algebra over the complex numbers. The special case of a 1-dimensional algebra is known as a dual number. Grassmann numbers saw an early use in physics to express a path integral representation for fermionic fields, although they are now widely used as a foundation for superspace, on which supersymmetry is constructed.
Informal discussion
Grassmann numbers are generated by anti-commuting elements or objects. The idea of anti-commuting objects arises in multiple areas of mathematics: they are typically seen in differential geometry, where the differential forms are anti-commuting. Differential forms are normally defined in terms of derivatives on a manifold; however, one can contemplate the situation where one "forgets" or "ignores" the existence of any underlying manifold, and "forgets" or "ignores" that the forms were defined as derivatives, and instead, simply contemplate a situation where one has objects that anti-commute, and have no other pre-defined or pre-supposed properties. Such objects form an algebra, and specifically the Grassmann algebra or exterior algebra.
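Written out in standard notation (added here for concreteness), generators θ₁, θ₂, … of the algebra satisfy

$$\theta_i \theta_j = -\theta_j \theta_i \quad \text{for all } i, j, \qquad \text{so in particular } \theta_i^2 = 0,$$

and a general Grassmann number is a complex linear combination of finite products of distinct generators.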
The Grassmann numbers are elements of that algebra. The appellation of "number" is justified by the fact that they behave not unlike "ordinary" numbers: they can be added, multiplied and divided: they behave almost like a field. More can be done: one can consider polynomials of Grassmann numbers, leading to the idea of holomorphic functions. One can take derivatives of such functions, and then consider the anti-derivatives as well. Each of these ideas can be carefully defined, and correspond reasonably well to the equivalent concepts from ordinary mathematics. The analogy does not stop there: one has an entire branch of supermathematics, where the analog of Euclidean space is superspace, the analog of a manifold is a supermanifold, the analo |
https://en.wikipedia.org/wiki/Microsoft%20Transaction%20Server | Microsoft Transaction Server (MTS) was software that provided services to Component Object Model (COM) software components, to make it easier to create large distributed applications. The major services provided by MTS were automated transaction management, instance management (or just-in-time activation) and role-based security. MTS is considered to be the first major software to implement aspect-oriented programming.
MTS was first offered in the Windows NT 4.0 Option Pack. In Windows 2000, MTS was enhanced and better integrated with the operating system and COM, and was renamed COM+. COM+ added object pooling, loosely-coupled events and user-defined simple transactions (compensating resource managers) to the features of MTS.
COM+ is still provided with Windows Server 2003 and Windows Server 2008, and the Microsoft .NET Framework provides a wrapper for COM+ in the EnterpriseServices namespace. The Windows Communication Foundation (WCF) provides a way of calling COM+ applications with web services. However, COM+ is based on COM, and Microsoft's strategic software architecture is now web services and .NET, not COM. There are pure .NET-based alternatives for many of the features provided by COM+, and in the long term it is likely COM+ will be phased out.
Architecture
A basic MTS architecture comprises:
the MTS Executive (mtxex.dll)
the Factory Wrappers and Context Wrappers for each component
the MTS Server Component
MTS clients
auxiliary systems like:
COM runtime services
the Service Control Manager (SCM)
the Microsoft Distributed Transaction Coordinator (MS-DTC)
the Microsoft Message Queue (MSMQ)
the COM-Transaction Integrator (COM-TI)
etc.
COM components that run under the control of the MTS Executive are called MTS components. In COM+, they are referred to as COM+ Applications. MTS components are in-process DLLs. MTS components are deployed and run in the MTS Executive which manages them. As with other COM components, an object implementing the IC |
https://en.wikipedia.org/wiki/Adjunction%20formula | In mathematics, especially in algebraic geometry and the theory of complex manifolds, the adjunction formula relates the canonical bundle of a variety and a hypersurface inside that variety. It is often used to deduce facts about varieties embedded in well-behaved spaces such as projective space or to prove theorems by induction.
Adjunction for smooth varieties
Formula for a smooth subvariety
Let X be a smooth algebraic variety or smooth complex manifold and Y be a smooth subvariety of X. Denote the inclusion map by i and the ideal sheaf of Y in X by I. The conormal exact sequence for i is
0 → I/I² → i*Ω_X → Ω_Y → 0,
where Ω denotes a cotangent bundle. The determinant of this exact sequence is a natural isomorphism
ω_Y = i*ω_X ⊗ det(I/I²)^∨,
where ^∨ denotes the dual of a line bundle.
The particular case of a smooth divisor
Suppose that D is a smooth divisor on X. Its normal bundle extends to a line bundle O(D) on X, and the ideal sheaf of D corresponds to its dual O(−D). The conormal bundle I/I² is i*O(−D), which, combined with the formula above, gives
ω_D = i*(ω_X ⊗ O(D)).
In terms of canonical classes, this says that
K_D = (K_X + D)|_D.
Both of these two formulas are called the adjunction formula.
Examples
Degree d hypersurfaces
Given a smooth hypersurface X ⊂ Pⁿ of degree d, we can compute its canonical and anti-canonical bundles using the adjunction formula. This reads as ω_X ≅ i*(ω_{Pⁿ} ⊗ O(d)), which is isomorphic to O(d − n − 1)|_X.
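For instance (a standard special case, stated here for illustration), a smooth plane curve C ⊂ P² of degree d has

$$\omega_C \cong \mathcal{O}_C(d-3), \qquad g = \frac{(d-1)(d-2)}{2},$$

since deg ω_C = d(d − 3) = 2g − 2; this recovers the classical degree–genus formula.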
Complete intersections
For a smooth complete intersection X ⊂ Pⁿ of degrees (d₁, d₂), the conormal bundle I/I² is isomorphic to O(−d₁)|_X ⊕ O(−d₂)|_X, so the determinant bundle is O(−d₁ − d₂)|_X and its dual is O(d₁ + d₂)|_X, showing
ω_X = O(−n − 1)|_X ⊗ O(d₁ + d₂)|_X = O(d₁ + d₂ − n − 1)|_X.
This generalizes in the same fashion for all complete intersections.
Curves in a quadric surface
P¹ × P¹ embeds into P³ as a quadric surface given by the vanishing locus of a quadratic polynomial coming from a non-singular symmetric matrix. We can then restrict our attention to curves on Y = P¹ × P¹. We can compute the cotangent bundle of Y using the direct sum of the cotangent bundles on each P¹, so it is O(−2, 0) ⊕ O(0, −2). Then the canonical sheaf is given by O(−2, −2), which can be found using the decomposition of wedges of direct sums of vector bundles. Then, usi |
https://en.wikipedia.org/wiki/Remembrance%20poppy | A remembrance poppy is an artificial flower worn in some countries to commemorate their military personnel who died in war. Remembrance poppies are produced by veterans' associations, who exchange the poppies for charitable donations used to give financial, social and emotional support to members and veterans of the armed forces.
Inspired by the war poem "In Flanders Fields", and promoted by Moina Michael, they were first used near the end of World War I to commemorate British Empire and United States military casualties of the war. Madame Guérin established the first "Poppy Days" to raise funds for veterans, widows, orphans, liberty bonds, and charities such as the Red Cross.
Remembrance poppies are most commonly worn in Commonwealth countries, where it has been trademarked by veterans' associations for fundraising. Remembrance poppies in Commonwealth countries are often worn on clothing in the weeks leading up to Remembrance Day, with poppy wreaths also being laid at war memorials on that day. However, in New Zealand, remembrance poppies are most commonly worn on Anzac Day.
The red remembrance poppy has inspired the design of several other commemorative poppies that observe different aspects of war and peace.
Origins
References to war and poppies in Flanders can be found as early as the 19th century, in the book The Scottish Soldiers of Fortune by James Grant:
The opening lines of the World War I poem "In Flanders Fields" refer to Flanders poppies growing among the graves of war victims in a region of Belgium. The poem is written from the point of view of the fallen soldiers and in its last verse, the soldiers call on the living to continue the conflict. The poem was written by Canadian physician John McCrae on 3 May 1915 after witnessing the death of his friend and fellow soldier the day before. The poem was first published on 8 December 1915 in the London-based magazine Punch.
Moina Michael, who had taken leave from her professorship at the University of G |
https://en.wikipedia.org/wiki/Constant%20elasticity%20of%20substitution | Constant elasticity of substitution (CES), in economics, is a property of some production functions and utility functions. Several economists contributed to its development, including Tom McKenzie, John Hicks and Joan Robinson. The measure's vital economic element is that it gives the producer a clear picture of how to move between different modes or types of production.
Specifically, it arises in a particular type of aggregator function which combines two or more types of consumption goods, or two or more types of production inputs into an aggregate quantity. This aggregator function exhibits constant elasticity of substitution.
CES production function
Although a production process may substitute among several factors of production, the constant-elasticity form is the most common. Although it restricts direct empirical evaluation, constant elasticity of substitution is simple to use and hence is widely used. McFadden states that the constant E.S. assumption is a restriction on the form of production possibilities, and one can characterize the class of production functions which have this property; this was done by Arrow, Chenery, Minhas and Solow for the two-factor production case. The CES production function is a neoclassical production function that displays constant elasticity of substitution. In other words, the production technology has a constant percentage change in factor (e.g. labour and capital) proportions due to a percentage change in the marginal rate of technical substitution. The two-factor (capital, labor) CES production function introduced by Solow, and later made popular by Arrow, Chenery, Minhas, and Solow, is:
Q = F · (a·K^ρ + (1 − a)·L^ρ)^(ν/ρ)
where
Q = quantity of output
F = factor productivity
a = share parameter
K, L = quantities of primary production factors (capital and labor)
ρ = (σ − 1)/σ = substitution parameter
σ = 1/(1 − ρ) = elasticity of substitution
ν = degree of homogeneity |
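A minimal Python sketch of this functional form (the parameter names mirror the list above; the numeric values are arbitrary illustrations):

```python
def ces_output(K, L, F=1.0, a=0.5, rho=0.5, nu=1.0):
    """CES production: Q = F * (a*K**rho + (1-a)*L**rho)**(nu/rho).

    sigma = 1 / (1 - rho) is the elasticity of substitution;
    rho -> 0 recovers Cobb-Douglas, rho = 1 gives perfect substitutes.
    """
    return F * (a * K**rho + (1 - a) * L**rho) ** (nu / rho)

# Doubling both inputs scales output by 2**nu (degree of homogeneity):
q1 = ces_output(4.0, 9.0)
q2 = ces_output(8.0, 18.0)
print(q1, q2 / q1)  # ratio ~ 2.0 when nu = 1
```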
https://en.wikipedia.org/wiki/Concept%20drift | In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in the fields that involve dynamically changing data and data models.
Predictive model decay
In machine learning and predictive analytics this drift phenomenon is called concept drift. In machine learning, a common element of a data model is the set of statistical properties, such as the probability distribution of the actual data. If these deviate from the statistical properties of the training data set, then the learned predictions may become invalid if the drift is not addressed.
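One common way to flag such deviation in practice (a minimal sketch, not from the article, using a two-sample Kolmogorov–Smirnov test on a single feature) is:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 1_000)  # distribution at training time
live_feature = rng.normal(0.5, 1.0, 1_000)   # same feature observed later

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # the samples are unlikely to share one distribution
    print(f"possible drift: KS statistic={stat:.3f}, p={p_value:.2e}")
```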
Data configuration decay
Another important area is software engineering, where three types of data drift affecting data fidelity may be recognized. Changes in the software environment ("infrastructure drift") may invalidate software infrastructure configuration. "Structural drift" happens when the data schema changes, which may invalidate databases. "Semantic drift" is changes in the meaning of data while the structure does not change. In many cases this may happen in complicated applications when many independent developers introduce changes without proper awareness of the effects of their changes in other areas of the software system.
For many application systems, the nature of data on which they operate are subject to changes for various reasons, e.g., due to changes in business model, system updates, or switching the platform on which the system operates.
In the case of cloud computing, infrastructure drift that may affect the applications running on cloud may be caused by the updates of cloud software.
There are several types of detrimental effects o |
https://en.wikipedia.org/wiki/Juglone | Juglone, also called 5-hydroxy-1,4-naphthalenedione (IUPAC) is an organic compound with the molecular formula C10H6O3. In the food industry, juglone is also known as C.I. Natural Brown 7 and C.I. 75500. It is insoluble in benzene but soluble in dioxane, from which it crystallizes as yellow needles. It is an isomer of lawsone, which is the active dye compound in the henna leaf.
Juglone occurs naturally in the leaves, roots, husks, fruit (the epicarp), and bark of plants in the Juglandaceae family, particularly the black walnut (Juglans nigra), and is toxic or growth-stunting to many types of plants. It is sometimes used as an herbicide, as a dye for cloth and inks, and as a coloring agent for foods and cosmetics.
History
The allelopathic effects of walnut trees on other plants were observed as far back as the 1st century CE. Juglone itself was first isolated from black walnut in 1856, and was identified as the compound responsible for its allelopathic effects in 1881.
In 1921, a study observed that tomato plants near black walnut trees exhibited wilted leaves, suggesting an adverse interaction. In 1926, instances of apple tree damage caused by both Juglans nigra and Juglans cinerea (butternut) trees were reported in northern Virginia. Certain apple tree varieties displayed varying levels of resistance to walnut toxicity.
In 1926, it was observed that walnut trees in alfalfa fields resulted in crop death, while grass remained unaffected. Subsequent experiments indicated that the toxic compound within walnut trees exhibited limited solubility in water, implying that the compound underwent chemical changes upon leaving the tree. It was only in 1928 that the phytotoxic nature of the compound was identified for other plant species.
The scientific community faced controversy when the harmful effects of walnut trees on certain crops and trees were initially reported, following claims that the trees damaging apple trees in northern Virginia were not walnut trees at all |
https://en.wikipedia.org/wiki/Long-term%20care | Long-term care (LTC) is a variety of services which help meet both the medical and non-medical needs of people with a chronic illness or disability who cannot care for themselves for long periods. Long-term care is focused on individualized and coordinated services that promote independence, maximize patients' quality of life, and meet patients' needs over a period of time.
It is common for long-term care to provide custodial and non-skilled care, such as assisting with activities of daily living like dressing, feeding, using the bathroom, meal preparation, functional transfers and safe restroom use. Increasingly, long-term care involves providing a level of medical care that requires the expertise of skilled practitioners to address the multiple long-term conditions associated with older populations. Long-term care can be provided at home, in the community, in assisted living facilities or in nursing homes. Long-term care may be needed by people of any age, although it is a more common need for senior citizens.
Types of long-term care
Long-term care can be provided formally or informally. Facilities that offer formal LTC services typically provide living accommodation for people who require on-site delivery of around-the-clock supervised care, including professional health services, personal care, and services such as meals, laundry and housekeeping. These facilities may go under various names, such as nursing home, personal care facility, residential continuing care facility, etc. and are operated by different providers.
While the US government has been asked by the LTC (long-term care) industry not to bundle health, personal care, and services (e.g., meal, laundry, housekeeping) into large facilities, the government continues to approve that as the primary use of taxpayers' funds instead (e.g., new assisted living). Greater success has been achieved in areas such as supported housing which may still utilize older housing complexes or buildings, or may have bee |
https://en.wikipedia.org/wiki/Disk%20mirroring | In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.
In a disaster recovery context, mirroring data over long distance is referred to as storage replication. Depending on the technologies used, replication can be performed synchronously, asynchronously, semi-synchronously, or point-in-time. Replication is enabled via microcode on the disk array controller or via server software. It is typically a proprietary solution, not compatible between various data storage device vendors.
Mirroring is typically only synchronous. Synchronous writing typically achieves a recovery point objective (RPO) of zero lost data. Asynchronous replication can achieve an RPO of just a few seconds while the remaining methodologies provide an RPO of a few minutes to perhaps several hours.
Disk mirroring differs from file shadowing, which operates at the file level, and from disk snapshots, where data images are never re-synced with their origins.
Overview
Typically, mirroring is provided in either hardware solutions such as disk arrays, or in software within the operating system (such as Linux mdadm and device mapper). Additionally, file systems like Btrfs or ZFS provide integrated data mirroring. There are additional benefits from Btrfs and ZFS, which maintain both data and metadata integrity checksums, making themselves capable of detecting bad copies of blocks, and using mirrored data to pull up data from correct blocks.
There are several scenarios for what happens when a disk fails. In a hot swap system, in the event of a disk failure, the system itself typically diagnoses a disk failure and signals a failure. Sophisticated systems may automatically activate a hot standby disk and use the remaining active disk to copy live data onto this disk. Alternatively, a new disk is |
https://en.wikipedia.org/wiki/Sample%20exclusion%20dimension | In computational learning theory, sample exclusion dimensions arise in the study of exact concept learning with queries.
In algorithmic learning theory, a concept over a domain X is a Boolean function over X. Here we only consider finite domains. A partial approximation S of a concept c is a Boolean function over a subset of X such that c is an extension of S.
Let C be a class of concepts and c be a concept (not necessarily in C). Then a specifying set for c w.r.t. C is a partial approximation S of c such that C contains at most one extension of S. If we have observed a specifying set for some concept w.r.t. C, then we have enough information to verify a concept in C with at most one more mind change.
The exclusion dimension of a concept class C, denoted by XD(C), is the maximum size of a minimum specifying set of c' with respect to C, where c' is a concept not in C. |
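For a finite domain this definition can be checked by brute force. An illustrative sketch (concepts encoded as bit-tuples over the domain {0, …, n−1}; the toy class is hypothetical):

```python
from itertools import combinations, product

def min_specifying_set_size(c, C, n):
    # Smallest |D| such that at most one concept in C agrees with c on D.
    for k in range(n + 1):
        for D in combinations(range(n), k):
            if sum(all(h[i] == c[i] for i in D) for h in C) <= 1:
                return k
    return n

def exclusion_dimension(C, n):
    # Maximum, over concepts c' outside C, of the minimum specifying set size.
    Cset = set(C)
    outside = [c for c in product((0, 1), repeat=n) if c not in Cset]
    return max(min_specifying_set_size(c, C, n) for c in outside)

C = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # toy class: the three singletons
print(exclusion_dimension(C, 3))       # 2: the all-zeros concept needs 2 points
```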
https://en.wikipedia.org/wiki/Enterocolitis | Enterocolitis is an inflammation of the digestive tract, involving enteritis of the small intestine and colitis of the colon. It may be caused by various infections (bacterial, viral, fungal, or parasitic) or by other causes. Common clinical manifestations of enterocolitis are frequent diarrheal defecations, with or without nausea, vomiting, abdominal pain, fever, chills, and altered general condition. General manifestations are caused by the dissemination of the infectious agent or its toxins throughout the body, or – most frequently – by significant losses of water and minerals, the consequence of diarrhea and vomiting.
Signs and symptoms
Cause
Among the causal agents of acute enterocolitis are:
bacteria: Salmonella, Shigella, Escherichia coli (E. coli), Campylobacter etc.
viruses: enteroviruses, rotaviruses, Norovirus, adenoviruses
fungi: candidiasis, especially in immunosuppressed patients or who have previously received prolonged antibiotic treatment
parasites: Giardia lamblia (with a high frequency of infestation in the population, but not always with clinical manifestations), Balantidium coli, Blastocystis hominis, Cryptosporidium (diarrhea in people with immunosuppression), Entamoeba histolytica (produces amoebic dysentery, common in tropical areas).
Diagnosis
Types
Specific types of enterocolitis include:
necrotizing enterocolitis (most common in premature infants)
pseudomembranous enterocolitis (also called "Pseudomembranous colitis")
Treatment
Treatment depends on aetiology, e.g. antibiotics such as metronidazole for bacterial infection, antiviral drug therapy for viral infection, and anti-helminthics for parasitic infections.
See also
Gastroenteritis |
https://en.wikipedia.org/wiki/Eosinophiluria | Eosinophiluria is the abnormal presence of eosinophils in the urine. It can be measured by detecting levels of eosinophil cationic protein.
Associated conditions
It can be associated with a wide variety of conditions, including:
Kidney disorders such as acute interstitial nephritis and acute kidney injury from cholesterol embolism
Urinary tract infection
Eosinophilic granulomatosis with polyangiitis
Eosinophiluria (>5% of urine leukocytes) is a common finding (~90%) in antibiotic-induced allergic nephritis; however, lymphocytes predominate in allergic interstitial nephritis induced by NSAIDs. Eosinophiluria is a feature of atheroembolic acute renal failure (ARF).
In polyarteritis nodosa (PAN) and microscopic polyangiitis, eosinophiluria is rare. |
https://en.wikipedia.org/wiki/Zoopharmacognosy | Zoopharmacognosy is a behaviour in which non-human animals self-medicate by selecting and ingesting or topically applying plants, soils and insects with medicinal properties, to prevent or reduce the harmful effects of pathogens, toxins, and even other animals. The term derives from Greek roots zoo ("animal"), pharmacon ("drug, medicine"), and gnosy ("knowing").
An example of zoopharmacognosy occurs when dogs eat grass to induce vomiting. However, the behaviour is more diverse than this. Animals ingest or apply non-foods such as clay, charcoal and even toxic plants and invertebrates, apparently to prevent parasitic infestation or poisoning.
Whether animals truly self-medicate remains a somewhat controversial subject because early evidence is mostly circumstantial or anecdotal. However, more recent examinations have adopted an experimental, hypothesis-driven approach.
The methods by which animals self-medicate vary, but can be classified according to function as prophylactic (preventative, before infection or poisoning) or therapeutic (after infection, to combat the pathogen or poisoning). The behaviour is believed to have widespread adaptive significance.
History and etymology
In 1978, Janzen suggested that vertebrate herbivores might benefit medicinally from the secondary metabolites in their plant food.
In 1993, the term "zoopharmacognosy" was coined, derived from the Greek roots zoo ("animal"), pharma ("drug"), and gnosy ("knowing"). The term gained popularity from academic works and in a book by Cindy Engel entitled Wild Health: How Animals Keep Themselves Well and What We Can Learn from Them.
Mechanisms
The anti-parasitic effect of zoopharmacognosy could occur by at least two mechanisms, demonstrated through the modes of deglutition (swallowing) or ingestion. First, ingested material may have pharmacological antiparasitic properties, such as phytochemicals decreasing the ability of worms to attach to the mucosal lining of the intestines or chemotaxis attract |
https://en.wikipedia.org/wiki/Peetre%20theorem | In mathematics, the (linear) Peetre theorem, named after Jaak Peetre, is a result of functional analysis that gives a characterisation of differential operators in terms of their effect on generalized function spaces, and without mentioning differentiation in explicit terms. The Peetre theorem is an example of a finite order theorem in which a function or a functor, defined in a very general way, can in fact be shown to be a polynomial because of some extraneous condition or symmetry imposed upon it.
This article treats two forms of the Peetre theorem. The first is the original version which, although quite useful in its own right, is actually too general for most applications.
The original Peetre theorem
Let M be a smooth manifold and let E and F be two vector bundles on M. Let
Γ∞(E) and Γ∞(F)
be the spaces of smooth sections of E and F. An operator
D : Γ∞(E) → Γ∞(F)
is a morphism of sheaves which is linear on sections such that the support of D is non-increasing: supp Ds ⊆ supp s for every smooth section s of E. The original Peetre theorem asserts that, for every point p in M, there is a neighborhood U of p and an integer k (depending on U) such that D is a differential operator of order k over U. This means that D factors through a linear mapping i_D from the k-jets of sections of E into the space of smooth sections of F:
Ds = i_D(j^k s)
where
j^k : Γ∞(E) → Γ∞(J^k E)
is the k-jet operator and
i_D : J^k E → F
is a linear mapping of vector bundles.
Proof
The problem is invariant under local diffeomorphism, so it is sufficient to prove it when M is an open set in Rn and E and F are trivial bundles. At this point, it relies primarily on two lemmas:
Lemma 1. If the hypotheses of the theorem are satisfied, then for every x ∈ M and C > 0, there exists a neighborhood V of x and a positive integer k such that for any y ∈ V \ {x} and for any section s of E whose k-jet vanishes at y (j^k s(y) = 0), we have |Ds(y)| < C.
Lemma 2. The first lemma is sufficient to prove the theorem.
We begin with the proof of Lemma 1.
Suppose the lemma is false. Then there |
https://en.wikipedia.org/wiki/Polyhedral%20skeletal%20electron%20pair%20theory | In chemistry the polyhedral skeletal electron pair theory (PSEPT) provides electron counting rules useful for predicting the structures of clusters such as borane and carborane clusters. The electron counting rules were originally formulated by Kenneth Wade, and were further developed by others including Michael Mingos; they are sometimes known as Wade's rules or the Wade–Mingos rules. The rules are based on a molecular orbital treatment of the bonding. These rules have been extended and unified in the form of the Jemmis mno rules.
Predicting structures of cluster compounds
Different rules (4n, 5n, or 6n) are invoked depending on the number of electrons per vertex.
The 4n rules are reasonably accurate in predicting the structures of clusters having about 4 electrons per vertex, as is the case for many boranes and carboranes. For such clusters, the structures are based on deltahedra, which are polyhedra in which every face is triangular. The 4n clusters are classified as closo-, nido-, arachno- or hypho-, based on whether they represent a complete (closo-) deltahedron, or a deltahedron that is missing one (nido-), two (arachno-) or three (hypho-) vertices.
However, hypho clusters are relatively uncommon because the electron count is high enough to start to fill antibonding orbitals and destabilize the 4n structure. If the electron count is close to 5 electrons per vertex, the structure often changes to one governed by the 5n rules, which are based on 3-connected polyhedra.
As the electron count increases further, the structures of clusters with 5n electron counts become unstable, so the 6n rules can be implemented. The 6n clusters have structures that are based on rings.
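These counting rules are mechanical enough to express in a few lines of code. A minimal sketch (illustrative only; the electron counts in the comments follow the usual main-group bookkeeping):

```python
def wade_classification(valence_electrons: int, vertices: int) -> str:
    """Classify a main-group cluster by the 4n electron-counting rules."""
    names = {2: "closo", 4: "nido", 6: "arachno", 8: "hypho"}
    return names.get(valence_electrons - 4 * vertices, "outside the 4n rules")

print(wade_classification(26, 6))  # B6H6(2-): 6*3 + 6*1 + 2 = 26 -> closo
print(wade_classification(24, 5))  # B5H9:     5*3 + 9     = 24 -> nido
print(wade_classification(22, 4))  # B4H10:    4*3 + 10    = 22 -> arachno
```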
A molecular orbital treatment can be used to rationalize the bonding of cluster compounds of the 4n, 5n, and 6n types.
4n rules
The following polyhedra are closo polyhedra, and are the basis for the 4n rules; each of these have triangular faces. The number of vertices in the cl |
https://en.wikipedia.org/wiki/Planar%20lamina | In mathematics, a planar lamina (or plane lamina) is a figure representing a thin, usually uniform, flat layer of a solid. It serves also as an idealized model of a planar cross section of a solid body in integration.
Planar laminas can be used to determine moments of inertia, or center of mass of flat figures, as well as an aid in corresponding calculations for 3D bodies.
Definition
A planar lamina is defined as a figure (a closed set) D of finite area in a plane, with some mass m.
This is useful in calculating moments of inertia or the center of mass for a constant density, because the mass of a lamina is then proportional to its area. In the case of a variable density, given by some (non-negative) surface density function ρ(x, y), the mass of the planar lamina is the planar integral of ρ over the figure:
m = ∬_D ρ(x, y) dA
Properties
The center of mass of the lamina is at the point
(x̄, ȳ) = (M_y / m, M_x / m)
where M_y is the moment of the entire lamina about the y-axis and M_x is the moment of the entire lamina about the x-axis:
M_y = ∬_D x ρ(x, y) dA,  M_x = ∬_D y ρ(x, y) dA
with integration taken over the planar domain D.
Example
Find the center of mass of a lamina with edges given by the lines and where the density is given as .
For this the mass must be found as well as the moments and .
Mass is which can be equivalently expressed as an iterated integral:
The inner integral is:
Plugging this into the outer integral results in:
Similarly are calculated both moments:
with the inner integral:
which makes:
and
Finally, the center of mass is |
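The specific boundary lines and density of the example above were lost in extraction, so as a stand-in, here is the same computation carried out symbolically for a hypothetical lamina (the triangle bounded by x = 0, y = 0 and y = 1 − x, with density ρ = x + y):

```python
import sympy as sp

x, y = sp.symbols("x y")
rho = x + y                                   # hypothetical density

def over_lamina(f):
    # iterated integral over the triangle 0 <= y <= 1 - x, 0 <= x <= 1
    return sp.integrate(sp.integrate(f, (y, 0, 1 - x)), (x, 0, 1))

m   = over_lamina(rho)        # mass: 1/3
M_y = over_lamina(x * rho)    # moment about the y-axis: 1/8
M_x = over_lamina(y * rho)    # moment about the x-axis: 1/8

print(m, (M_y / m, M_x / m))  # center of mass: (3/8, 3/8)
```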
https://en.wikipedia.org/wiki/International%20Article%20Number | The International Article Number (also known as European Article Number or EAN) is a standard describing a barcode symbology and numbering system used in global trade to identify a specific retail product type, in a specific packaging configuration, from a specific manufacturer. The standard has been subsumed in the Global Trade Item Number standard from the GS1 organization; the same numbers can be referred to as GTINs and can be encoded in other barcode symbologies defined by GS1. EAN barcodes are used worldwide for lookup at retail point of sale, but can also be used as numbers for other purposes such as wholesale ordering or accounting. These barcodes only represent the digits 0–9, unlike some other barcode symbologies which can represent additional characters.
The most commonly used EAN standard is the thirteen-digit EAN-13, a superset of the original 12-digit Universal Product Code (UPC-A) standard developed in 1970 by George J. Laurer. An EAN-13 number includes a 3-digit GS1 prefix (indicating country of registration or special type of product). A prefix with a first digit of "0" indicates a 12-digit UPC-A code follows. A prefix with first two digits of "45" or "49" indicates a Japanese Article Number (JAN) follows.
The less commonly used 8-digit EAN-8 barcode was introduced for use on small packages, where EAN-13 would be too large. 2-digit EAN-2 and 5-digit EAN-5 are supplemental barcodes, placed on the right-hand side of EAN-13 or UPC. These are generally used in periodicals, like magazines and books, to indicate the current year's issue number and in weighed products like food, to indicate the manufacturer's suggested retail price.
Composition
The 13-digit EAN-13 number consists of four components:
GS1 prefix – 3 digits
Manufacturer code – variable length
Product code – variable length
Check digit – 1 digit, computed from the twelve preceding digits (see the sketch below)
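The check digit is chosen so that a weighted sum of all thirteen digits is a multiple of 10: counting from the left, digits in odd positions are weighted 1 and digits in even positions are weighted 3. A minimal sketch (the sample number is a commonly cited example, shown for illustration):

```python
def ean13_check_digit(first12: str) -> int:
    """Check digit completing the first 12 digits of an EAN-13 number."""
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

assert ean13_check_digit("400638133393") == 1   # full number: 4006381333931
```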
GS1 prefix
The first three digits of the EAN-13 (GS1 Prefix) usually identify the GS1 Member Organization which the manufacturer has joined (not ne |
https://en.wikipedia.org/wiki/Chromosomal%20polymorphism | In genetics, chromosomal polymorphism is a condition where one species contains members with varying chromosome counts or shapes. Polymorphism is a general concept in biology where more than one version of a trait is present in a population.
In some cases of differing counts, the difference in chromosome counts is the result of a single chromosome undergoing fission, where it splits into two smaller chromosomes, or two undergoing fusion, where two chromosomes join to form one.
This condition has been detected in many species. Trichomycterus davisi, for example, is an extreme case where the polymorphism was present within a single chimeric individual.
It has also been studied in alfalfa, shrews, Brazilian rodents, and an enormous variety of other animals and plants. In one instance it has been found in a human.
Another process resulting in differing chromosomal counts is polyploidy. This results in cells which contain multiple copies of complete chromosome sets.
Possessing chromosomes of varying shapes is generally the result of a chromosomal translocation or chromosomal inversion.
In a translocation, genetic material is transferred from one chromosome to another, either symmetrically or asymmetrically (a Robertsonian translocation).
In an inversion, a segment of a chromosome is flipped end-for-end.
Implications for speciation
All forms of chromosomal polymorphism can be viewed as a step towards speciation. Polymorphisms will generally result in a level of reduced fertility, because some gametes from one parent cannot successfully combine with all gametes of the other parent. However, when both parents contain matching chromosomal patterns, this obstacle does not occur. Further mutations in one group will not flow as rapidly into the other group as they do within the group in which it originally occurred.
Further mutations can also cause absolute infertility. If an interbreeding population contains one group in which (for example) chromosomes A and B have f |
https://en.wikipedia.org/wiki/Vestibular%20nerve | The vestibular nerve is one of the two branches of the vestibulocochlear nerve (the cochlear nerve being the other). In humans the vestibular nerve transmits sensory information transmitted by vestibular hair cells located in the two otolith organs (the utricle and the saccule) and the three semicircular canals via the vestibular ganglion of Scarpa. Information from the otolith organs reflects gravity and linear accelerations of the head. Information from the semicircular canals reflects rotational movement of the head. Both are necessary for the sensation of body position and gaze stability in relation to a moving environment.
Axons of the vestibular nerve synapse in the vestibular nuclei, which are found on the lateral floor and wall of the fourth ventricle in the pons and medulla.
It arises from bipolar cells in the vestibular ganglion which is situated in the upper part of the outer end of the internal auditory meatus.
Structure
The peripheral fibers divide into three branches (some sources list two):
the superior branch passes through the foramina in the area vestibularis superior and ends in the utricle and in the osseous ampullae of the superior and lateral semicircular ducts;
the fibers of the inferior branch traverse the foramina in the area vestibularis inferior and end in the saccule;
the posterior branch runs through the foramen singulare and supplies the ampulla of the posterior semicircular duct.
Function
The primary role of the vestibular nerve is to transform vestibular information (related to balance) into an egocentric frame of reference based on the position of the head in relation to the body. The vestibular nerve dynamically updates the frame of reference of motor movement based on the orientation of the head in relation to the body. As an example, when standing upright and facing forward, if you wished to tilt your head to the right you would need to perform a slight leftward motor movement (shifting more of your weight to your left side) to |
https://en.wikipedia.org/wiki/Order%20dimension | In mathematics, the dimension of a partially ordered set (poset) is the smallest number of total orders the intersection of which gives rise to the partial order.
This concept is also sometimes called the order dimension or the Dushnik–Miller dimension of the partial order.
Dushnik and Miller first studied order dimension; more detailed treatments of the subject are available in the literature.
Formal definition
The dimension of a finite poset P is the least integer t for which there exists a family
R = {<₁, …, <ₜ}
of linear extensions of P so that, for every x and y in P, x precedes y in P if and only if it precedes y in all of the linear extensions. That is,
P = <₁ ∩ <₂ ∩ ⋯ ∩ <ₜ.
An alternative definition of order dimension is the minimal number of total orders such that P embeds into their product with componentwise ordering, i.e. x ≤ y in P if and only if xᵢ ≤ yᵢ for all i (1 ≤ i ≤ t).
Realizers
A family R = {<₁, …, <ₜ} of linear orders on X is called a realizer of a poset P = (X, <_P) if
<_P = <₁ ∩ <₂ ∩ ⋯ ∩ <ₜ,
which is to say that for any x and y in X,
x <_P y precisely when x <₁ y, x <₂ y, ..., and x <ₜ y.
Thus, an equivalent definition of the dimension of a poset P is "the least cardinality of a realizer of P."
It can be shown that any nonempty family R of linear extensions is a realizer of a finite partially ordered set P if and only if, for every critical pair (x, y) of P, y <ᵢ x for some order <ᵢ in R.
Example
Let n be a positive integer, and let P be the partial order on the elements ai and bi (for 1 ≤ i ≤ n) in which ai ≤ bj whenever i ≠ j, but no other pairs are comparable. In particular, ai and bi are incomparable in P; P can be viewed as an oriented form of a crown graph. The illustration shows an ordering of this type for n = 4.
Then, for each i, any realizer must contain a linear order that begins with all the aj except ai (in some order), then includes bi, then ai, and ends with all the remaining bj. This is so because if there were a realizer that didn't include such an order, then the intersection of that realizer's orders would have ai preceding bi, wh |
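The realizer condition above is easy to check mechanically. A minimal sketch (illustrative; the poset is the n = 2 case of the example, with relations a1 < b2 and a2 < b1):

```python
def realizes(relation, orders):
    """Is the intersection of the total orders exactly the given relation?
    Each order is a tuple listing the elements from least to greatest."""
    pos = [{v: i for i, v in enumerate(o)} for o in orders]
    elems = set(orders[0])
    meet = {(u, v) for u in elems for v in elems
            if u != v and all(p[u] < p[v] for p in pos)}
    return meet == relation

relation = {("a1", "b2"), ("a2", "b1")}
order1 = ("a1", "b2", "a2", "b1")
order2 = ("a2", "b1", "a1", "b2")
print(realizes(relation, (order1, order2)))  # True, so the dimension is <= 2
```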
https://en.wikipedia.org/wiki/List%20of%20automation%20protocols | This is a list of communication protocols used for the automation of processes (industrial or otherwise), such as for building automation, power-system automation, automatic meter reading, and vehicular automation.
Process automation protocols
AS-i – Actuator-sensor interface, a low level 2-wire bus establishing power and communications to basic digital and analog devices
BSAP – Bristol Standard Asynchronous Protocol, developed by Bristol Babcock Inc.
CC-Link Industrial Networks – Supported by the CLPA
CIP (Common Industrial Protocol) – can be treated as application layer common to DeviceNet, CompoNet, ControlNet and EtherNet/IP
ControlNet – an implementation of CIP, originally by Allen-Bradley
DeviceNet – an implementation of CIP, originally by Allen-Bradley
DF-1 - used by Allen-Bradley ControlLogix, CompactLogix, PLC-5, SLC-500, and MicroLogix class devices
DNP3 - a protocol used for communications in industrial control and utility SCADA systems
DirectNet – Koyo / Automation Direct proprietary, yet documented PLC interface
EtherCAT
Ethernet Global Data (EGD) – GE Fanuc PLCs (see also SRTP)
EtherNet/IP – IP stands for "Industrial Protocol". An implementation of CIP, originally created by Rockwell Automation
Ethernet Powerlink – an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG).
FINS, Omron's protocol for communication over several networks, including Ethernet.
FOUNDATION fieldbus – H1 & HSE
HART Protocol
HostLink Protocol, Omron's protocol for communication over serial links.
Interbus, Phoenix Contact's protocol for communication over serial links, now part of PROFINET IO
MECHATROLINK – open protocol originally developed by Yaskawa, supported by the MMA
MelsecNet, and MelsecNet II, /B, and /H, supported by Mitsubishi Electric.
Modbus PEMEX
Modbus Plus
Modbus RTU or ASCII or TCP
MPI – Multi Point Interface
OSGP – The Open Smart Grid Protocol, a widely used protocol for smart grid devices built on ISO/IEC 14908.1
OpenADR – Open Automated D |
https://en.wikipedia.org/wiki/Fagin%27s%20theorem | Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems.
The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP.
It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in subsequent work, and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
Proof
In addition to Fagin's 1974 paper, a detailed proof of the theorem is available in the literature. It is straightforward to show that every existential second-order formula can be recognized in NP, by nondeterministically choosing the value of all existentially quantified variables, so the main part of the proof is to show that every language in NP can be described by an existential second-order formula. To do so, one can use second-order existential quantifiers to arbitrarily choose a computation tableau. In more detail, for every timestep of an execution trace of a non-deterministic Turing machine, this tableau encodes the state of the Turing machine, its position in the tape, the contents of every tape cell, and which nondeterministic choice the machine makes at that step. A first-order formula can constrain this encoded information so that it describes a valid execution trace, one in which the tape contents and Turing machine state and position at each timestep follow from the previous timestep.
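A standard illustration of the easy direction (the textbook example, not from the article's proof): graph 3-colorability, an NP problem, is expressed by existentially quantifying three unary relations (the color classes) and constraining them with a first-order formula:

$$\exists R\,\exists G\,\exists B\;\Big[\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\;\land\;\forall x\,\forall y\,\Big(E(x,y)\rightarrow \neg\big(R(x)\land R(y)\big)\land\neg\big(G(x)\land G(y)\big)\land\neg\big(B(x)\land B(y)\big)\Big)\Big]$$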
A key lemma used in the proof is that it is possible to encode a linear order of length n^k (such as the linear orders of timesteps and tape contents at any timestep) as a 2k-ary relation on a universe of n elements. One way to achieve this is to choose a linear ordering of the universe and then define the relation to be the lexicograp |
https://en.wikipedia.org/wiki/Schnyder%27s%20theorem | In graph theory, Schnyder's theorem is a characterization of planar graphs in terms of the order dimension of their incidence posets. It is named after Walter Schnyder, who published its proof in 1989.
The incidence poset P(G) of an undirected graph G with vertex set V and edge set E is the partially ordered set of height 2 that has V ∪ E as its elements. In this partial order, there is an order relation x < e when x is a vertex, e is an edge, and x is one of the two endpoints of e.
The order dimension of a partial order is the smallest number of total orderings whose intersection is the given partial order; such a set of orderings is called a realizer of the partial order.
Schnyder's theorem states that a graph G is planar if and only if the order dimension of P(G) is at most three.
Extensions
This theorem has been generalized to a tight bound on the dimension of the height-three partially ordered sets formed analogously from the vertices, edges and faces of a convex polyhedron, or more generally of an embedded planar graph: in both cases, the order dimension of the poset is at most four. However, this result cannot be generalized to higher-dimensional convex polytopes, as there exist four-dimensional polytopes whose face lattices have unbounded order dimension.
Even more generally, for abstract simplicial complexes, the order dimension of the face poset of the complex can be bounded in terms of the minimum dimension of a Euclidean space in which the complex has a geometric realization.
Other graphs
As Schnyder observes, the incidence poset of a graph G has order dimension two if and only if the graph is a path or a subgraph of a path. For, when an incidence poset has order dimension two, its only possible realizer consists of two total orders that (when restricted to the graph's vertices) are the reverse of each other. Any other two orders would have an intersection that includes an order relation between two vertices, which is not allowed for incidence posets. For these two o |
https://en.wikipedia.org/wiki/Four%20Happiness%20Boys | The image of the Four Happiness Boys is believed to have originated during the Ming Dynasty (1368–1644) with a child prodigy named Jie Jin.
By the age of five, this remarkable child had studied and mastered the ancient Chinese ‘Four Books’ and the ‘Five Classics’ and soon made his way into formal studies alongside other renowned Chinese scholars of the period. The "Four Happiness Boys" is the ancient Chinese image or drawing of two interconnected boys arranged to create the illusion of four laughing boys lying in four directions. The picture symbolizes ‘four happinesses joined together’, which were: (a) a wedding night, (b) passing the imperial exams, (c) running into a friend in a faraway place, and (d) rain after a long drought – instances all considered to be among life's major fortunes in ancient China.
To this day, this image continues to be painted, drawn or cast in many materials including bronze, brass, and porcelain and is often given as a symbolic wedding gift for an abundant marriage, many generations of children, and good fortune and happiness.
See also
He-He er xian
Chinese numismatic charm |
https://en.wikipedia.org/wiki/Degree%20of%20anonymity | In anonymity networks (e.g., Tor, Crowds, Mixmaster, I2P, etc.), it is important to be able to quantitatively measure the anonymity guarantee that the system gives. The degree of anonymity d is a device that was proposed at the 2002 Privacy Enhancing Technology (PET) conference. Two papers put forth the idea of using entropy as the basis for formally measuring anonymity: "Towards an Information Theoretic Metric for Anonymity" and "Towards Measuring Anonymity". The ideas presented are very similar, with minor differences in the final definition of d.
Background
Anonymity networks have been developed, and many have introduced methods of proving the anonymity guarantees that are possible. Originally, with simple Chaum mixes and pool mixes, the size of the set of users was seen as the security that the system could provide to a user. This had a number of problems: intuitively, if the network is international then it is unlikely that a message that contains only Urdu came from the United States, and vice versa. Information like this, and methods like the predecessor attack and intersection attack, help an attacker increase the probability that a user sent the message.
Example With Pool Mixes
As an example consider the network shown above, in here and are users (senders), , and are servers (receivers), the boxes are mixes, and , and where denotes the anonymity set. Now as there are pool mixes let the cap on the number of incoming messages to wait before sending be ; as such if , or is communicating with and receives a message then knows that it must have come from (as the links between the mixes can only have message at a time). This is in no way reflected in 's anonymity set, but should be taken into account in the analysis of the network.
Degree of Anonymity
The degree of anonymity takes into account the probability associated with each user; it begins by defining the entropy of the system (here is where the papers differ slightly but only with notation, |
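In the normalized variant, the degree is the entropy H(X) of the attacker's probability distribution over the possible senders, divided by the maximal entropy H_M = log₂ N. A minimal sketch of that computation (the distributions are hypothetical illustrations):

```python
import math

def degree_of_anonymity(probs):
    """H(X) / H_M, where H_M = log2(N) is the entropy of a uniform
    distribution over the N possible senders."""
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    return H / math.log2(len(probs))

print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))  # 1.0: perfect anonymity
print(degree_of_anonymity([0.70, 0.10, 0.10, 0.10]))  # ~0.68: degraded
```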
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Steenrod%20axioms | In mathematics, specifically in algebraic topology, the Eilenberg–Steenrod axioms are properties that homology theories of topological spaces have in common. The quintessential example of a homology theory satisfying the axioms is singular homology, developed by Samuel Eilenberg and Norman Steenrod.
One can define a homology theory as a sequence of functors satisfying the Eilenberg–Steenrod axioms. The axiomatic approach, which was developed in 1945, allows one to prove results, such as the Mayer–Vietoris sequence, that are common to all homology theories satisfying the axioms.
If one omits the dimension axiom (described below), then the remaining axioms define what is called an extraordinary homology theory. Extraordinary cohomology theories first arose in K-theory and cobordism.
Formal definition
The Eilenberg–Steenrod axioms apply to a sequence of functors H_n from the category of pairs (X, A) of topological spaces to the category of abelian groups, together with a natural transformation ∂ : H_i(X, A) → H_{i−1}(A) called the boundary map (here H_{i−1}(A) is a shorthand for H_{i−1}(A, ∅)). The axioms are:
Homotopy: Homotopic maps induce the same map in homology. That is, if g : (X, A) → (Y, B) is homotopic to h : (X, A) → (Y, B), then their induced homomorphisms are the same.
Excision: If (X, A) is a pair and U is a subset of A such that the closure of U is contained in the interior of A, then the inclusion map i : (X \ U, A \ U) → (X, A) induces an isomorphism in homology.
Dimension: Let P be the one-point space; then H_n(P) = 0 for all n ≠ 0.
Additivity: If X = ⊔_α X_α, the disjoint union of a family of topological spaces X_α, then H_n(X) ≅ ⊕_α H_n(X_α).
Exactness: Each pair (X, A) induces a long exact sequence in homology, via the inclusions i : A → X and j : (X, ∅) → (X, A):
⋯ → H_n(A) → H_n(X) → H_n(X, A) → H_{n−1}(A) → ⋯
where the maps are i∗, j∗, and the boundary map ∂.
If P is the one-point space, then H_0(P) is called the coefficient group. For example, singular homology (taken with integer coefficients, as is most common) has as coefficients the integers.
Consequences
Some facts about homology groups can be derived directly from the axioms, such as the fact that homotopically equivalent spaces have isomorphic homology groups.
The homology of some |
https://en.wikipedia.org/wiki/Solid-state%20lighting | Solid-state lighting (SSL) is a type of lighting that uses semiconductor light-emitting diodes (LEDs), organic light-emitting diodes (OLED), or polymer light-emitting diodes (PLED) as sources of illumination rather than electrical filaments, plasma (used in arc lamps such as fluorescent lamps), or gas.
Solid state electroluminescence is used in SSL, as opposed to incandescent bulbs (which use thermal radiation) or fluorescent tubes. Compared to incandescent lighting, SSL creates visible light with reduced heat generation and less energy dissipation. Most common "white LEDs" convert blue light from a solid-state device to an (approximate) white light spectrum using photoluminescence, the same principle used in conventional fluorescent tubes.
The typically small mass of a solid-state electronic lighting device provides for greater resistance to shock and vibration compared to brittle glass tubes/bulbs and long, thin filament wires. They also eliminate filament evaporation, potentially increasing the life span of the illumination device.
Solid-state lighting is often used in traffic lights and is also used in modern vehicle lights, street and parking lot lights, train marker lights, building exteriors, remote controls etc. Controlling the light emission of LEDs may be done most effectively by using the principles of nonimaging optics. Solid-state lighting has made significant advances in industry. In the entertainment lighting industry, standard incandescent tungsten-halogen lamps are being replaced by solid-state lighting fixtures.
See also
L Prize
LED lamp
List of light sources
Smart lighting |
https://en.wikipedia.org/wiki/Feigenbaum%20function | In the study of dynamical systems the term Feigenbaum function has been used to describe two different functions introduced by the physicist Mitchell Feigenbaum:
the solution to the Feigenbaum-Cvitanović functional equation; and
the scaling function that described the covers of the attractor of the logistic map
Feigenbaum-Cvitanović functional equation
This functional equation arises in the study of one-dimensional maps that, as a function of a parameter, go through a period-doubling cascade. Discovered by Mitchell Feigenbaum and Predrag Cvitanović, the equation is the mathematical expression of the universality of period doubling. It specifies a function g and a parameter α by the relation
g(x) = −α g(g(−x/α))
with the initial conditions g(0) = 1 and g′(0) = 0. For a particular form of solution with a quadratic dependence of the solution near x = 0, α ≈ 2.5029 is one of the Feigenbaum constants.
The power series of g is approximately
g(x) ≈ 1 − 1.5276 x² + 0.1048 x⁴ + 0.0267 x⁶ + ⋯
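These coefficients can be reproduced numerically. A sketch (not from the article) that truncates g to an even polynomial, pins α via the equation at x = 0 (so α = −1/g(1)), and collocates the functional equation at a few points; with a reasonable starting guess scipy's fsolve typically converges to the values quoted above:

```python
import numpy as np
from scipy.optimize import fsolve

N = 5                                # even-power terms in the ansatz
xs = np.linspace(0.1, 1.0, N)        # collocation points

def g(x, c):
    # g(x) = 1 + c[0] x^2 + c[1] x^4 + ... (g is even)
    return 1 + sum(ck * x ** (2 * (k + 1)) for k, ck in enumerate(c))

def residual(c):
    alpha = -1.0 / g(1.0, c)         # from evaluating the equation at x = 0
    return g(xs, c) + alpha * g(g(xs / alpha, c), c)

c = fsolve(residual, [-1.5, 0.1, 0.03, 0.0, 0.0])
print(-1.0 / g(1.0, c))              # alpha ~ 2.5029
print(c[:2])                         # ~ [-1.5276, 0.1048]
```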
Renormalization
The Feigenbaum function can be derived by a renormalization argument.
The Feigenbaum function g arises as the limit of suitably rescaled 2ⁿ-fold iterates of any map on the real line at the onset of chaos.
Scaling function
The Feigenbaum scaling function provides a complete description of the attractor of the logistic map at the end of the period-doubling cascade. The attractor is a Cantor set, and just as the middle-third Cantor set, it can be covered by a finite set of segments, all bigger than a minimal size dₙ. For a fixed dₙ the set of segments forms a cover Δₙ of the attractor. The ratio of segments from two consecutive covers, Δₙ and Δₙ₊₁, can be arranged to approximate a function σ, the Feigenbaum scaling function.
See also
Logistic map
Presentation function
Notes
Bibliography
Bound as Order in Chaos, Proceedings of the International Conference on Order and Chaos held at the Center for Nonlinear Studies, Los Alamos, New Mexico 87545, USA, 24–28 May 1982, Eds. David Campbell, Harvey Rose; North-Holland, Amsterdam.
Chaos theory
Dynamical systems |
https://en.wikipedia.org/wiki/NWLink | NWLink is Microsoft's implementation of Novell's IPX/SPX protocols. NWLink includes an implementation of NetBIOS atop IPX/SPX.
NWLink packages data to be compatible with client/server services on NetWare Networks. However, NWLink does not provide access to NetWare File and Print Services. To access the File and Print Services the Client Service for NetWare needs to be installed.
NWLink connects NetWare servers through the Gateway Service for NetWare or Client Service for NetWare and provides the transport protocol that connects Windows operating systems to IPX/SPX NetWare networks and compatible operating systems. NWLink supports NetBIOS and Windows Sockets application programming interfaces (API).
NWLink protocols are as follows:
SPX/SPXII
IPX
Service Advertising Protocol (SAP)
Routing Information Protocol (RIP)
NetBIOS
Forwarder
NWLink also provides the following functionalities:
Runs other communication protocol stacks, such as Transmission Control Protocol/Internet Protocol (TCP/IP)
Uses multiple frame types for network adapter binding
Using NWLink IPX/SPX/NetBIOS
NWLink IPX/SPX/NetBIOS Compatible Transport is Microsoft's implementation of the Novell IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange) protocol stack. The Windows XP implementation of the IPX/SPX protocol stack adds NetBIOS support.
The main function of NWLink is to act as a transport protocol to route packets through internetworks. By itself, the NWLink protocol does not allow you to access the data across the network. If you want to access NetWare File and Print Services, you must install NWLink and Client Services for NetWare (software that works at the upper layers of the OSI model to allow access to file and print services).
One advantage of using NWLink is that it is easy to install and configure.
Configuring NWLink IPX/SPX
The only options that are configured for NWLink are the internal network number and the frame type. Normally, you leave both settings at their |
https://en.wikipedia.org/wiki/Weierstrass%20point | In mathematics, a Weierstrass point P on a nonsingular algebraic curve C defined over the complex numbers is a point such that there are more functions on C, with their poles restricted to P only, than would be predicted by the Riemann–Roch theorem.
The concept is named after Karl Weierstrass.
Consider the vector spaces
L(0), L(P), L(2P), L(3P), …
where L(nP) is the space of meromorphic functions on C whose order at P is at least −n and with no other poles. We know three things: the dimension is at least 1, because of the constant functions on C; it is non-decreasing; and from the Riemann–Roch theorem the dimension eventually increments by exactly 1 as we move to the right. In fact if g is the genus of C, the dimension from the n-th term is known to be
dim L(nP) = n − g + 1, for n ≥ 2g − 1.
Our knowledge of the sequence is therefore
1, ?, ?, …, ?, g, g + 1, g + 2, …
What we know about the ? entries is that they can increment by at most 1 each time (this is a simple argument: L(nP)/L((n − 1)P) has dimension at most 1, because if f and h have the same order of pole at P, then f − ch will have a pole of lower order if the constant c is chosen to cancel the leading term). There are 2g − 2 question marks here, so the cases g = 0 or g = 1 need no further discussion and do not give rise to Weierstrass points.
Assume therefore g ≥ 2. There will be g − 1 steps up, and g steps where there is no increment. A non-Weierstrass point of C occurs whenever the increments are all as far to the right as possible: i.e. the sequence looks like
1, 1, …, 1, 2, 3, 4, …, g − 1, g, g + 1, …
Any other case is a Weierstrass point. A Weierstrass gap for P is a value of n such that no function on C has exactly an n-fold pole at P only. The gap sequence is
1, 2, …, g
for a non-Weierstrass point. For a Weierstrass point it contains at least one higher number. (The Weierstrass gap theorem or Lückensatz is the statement that there must be g gaps.)
For hyperelliptic curves, for example, we may have a function F with a double pole at P only. Its powers have poles of order 4, 6 and so on. Therefore, such a P has the gap sequence
1, 3, 5, …, 2g − 1.
In general if the gap sequence is
a, b, c, …
the weight of the Weierstrass point is
(a − 1) + (b − 2) + (c − 3) + ⋯
This is introduced |
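As a small worked illustration (not part of the article's text), the weight can be computed directly from a gap sequence; for the hyperelliptic gaps 1, 3, …, 2g − 1 it comes out to g(g − 1)/2:

```python
# Weight of a Weierstrass point from its gap sequence a_1 < a_2 < ... < a_g:
# the weight is the sum of (a_i - i).
def weierstrass_weight(gaps):
    return sum(a - i for i, a in enumerate(gaps, start=1))

g = 3
hyperelliptic_gaps = range(1, 2 * g, 2)        # 1, 3, ..., 2g - 1
print(weierstrass_weight(hyperelliptic_gaps))  # g*(g - 1)/2 = 3
```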
https://en.wikipedia.org/wiki/European%20embedded%20value | The European embedded value (EEV) is an effort by the CFO Forum to standardize the calculation of the embedded value. For this purpose the CFO Forum has released guidelines how embedded value should be calculated.
There is a lot of subjectivity involved in calculating the value of a life insurer. Insurance contracts are long-term contracts, so the value of the company now is dependent on how each of those contracts end up performing. Profit is made if the policyholder does not die, for example, and just contributes premiums over many years. Losses are possible for policies where the insured dies soon after signing the contract. And profitability is also affected by whether (and when) a policy might terminate early.
An actuary calculates an embedded value by making certain assumptions about life expectancy, persistency, investment conditions, and so on - thus making an estimate of what the company is worth now. But if each person has a different opinion on how things will turn out, you could expect a range of inconsistent estimates of the worth of the company. With this range of approaches, it is very difficult to compare EV calculations between companies.
The CFO Forum was formed to consider general issues relevant to measuring the value of insurance companies. The EEV was the output of this forum, and allows greater consistency in such calculations, making them more useful.
Types
EEV can be "real world" or "market consistent". The former takes the best estimate for parameters that are available, whereas the latter uses a slightly constrained set of parameters which are close to best estimate, but which produce results which match market-related hedge costs.
Real-world EEV usually uses a risk discount rate made up of the risk-free rate plus a risk margin which reflects the weighted average cost of capital and Beta from the CAPM model. Using company-level economic models clearly reflects a top-down approach to determining the risk discount rate.
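As a hedged numerical sketch (all figures below are made-up assumptions, not values from any standard, market, or company), the real-world risk discount rate described above is simple arithmetic:

```python
# Illustrative CAPM-style risk discount rate for a real-world EEV.
risk_free_rate = 0.04        # assumed risk-free rate
beta = 1.1                   # assumed company beta (CAPM)
equity_risk_premium = 0.05   # assumed market risk premium

risk_discount_rate = risk_free_rate + beta * equity_risk_premium
print(f"risk discount rate: {risk_discount_rate:.2%}")  # 9.50%
```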
Market-cons |
https://en.wikipedia.org/wiki/Chennai%20Mathematical%20Institute | Chennai Mathematical Institute (CMI) is a higher education and research institute in Chennai, India. It was founded in 1989 by the SPIC Science Foundation, and offers undergraduate and postgraduate programmes in physics, mathematics and computer science. CMI is noted for its research in algebraic geometry, in particular in the area of moduli of bundles.
CMI was at first located in T. Nagar in the heart of Chennai in an office complex. It moved to a new campus in Siruseri in October 2005.
In December 2006, CMI was recognized as a university under Section 3 of the University Grants Commission (UGC) Act 1956, making it a deemed university. Until then, the teaching program was offered in association with Bhoj Open University, as it offered more flexibility.
History
CMI began as the School of Mathematics, SPIC Science Foundation, in 1989. The SPIC Science Foundation was set up in 1986 by Southern Petrochemical Industries Corporation (SPIC) Ltd., one of the major industrial houses in India, to foster the growth of science and technology in the country.
In 1996, the School of Mathematics became an independent institution and changed its name to SPIC Mathematical Institute. In 1998, in order to better reflect the emerging role of the institute, it was renamed the Chennai Mathematical Institute (CMI).
From its inception, the institute has had a Ph.D. programme in Mathematics and Computer Science. In the initial years, the Ph.D. programme was affiliated to the BITS, Pilani and the University of Madras. In December 2006, CMI was recognized as a university under Section 3 of the UGC Act 1956.
In 1998, CMI took the initiative to bridge the gap between teaching and research by starting B.Sc.(Hons.) and M.Sc. programmes in Mathematics and allied subjects. In 2001, the B.Sc. programme was extended to incorporate two courses with research components, leading to an M.Sc. degree in mathematics and an M.Sc. degree in Computer Science. In 2003, a new undergraduate course was add |
https://en.wikipedia.org/wiki/Tate%20conjecture | In number theory and algebraic geometry, the Tate conjecture is a 1963 conjecture of John Tate that would describe the algebraic cycles on a variety in terms of a more computable invariant, the Galois representation on étale cohomology. The conjecture is a central problem in the theory of algebraic cycles. It can be considered an arithmetic analog of the Hodge conjecture.
Statement of the conjecture
Let V be a smooth projective variety over a field k which is finitely generated over its prime field. Let ks be a separable closure of k, and let G be the absolute Galois group Gal(ks/k) of k. Fix a prime number ℓ which is invertible in k. Consider the ℓ-adic cohomology groups (coefficients in the ℓ-adic integers Zℓ, scalars then extended to the ℓ-adic numbers Qℓ) of the base extension of V to ks; these groups are representations of G. For any i ≥ 0, a codimension-i subvariety of V (understood to be defined over k) determines an element of the cohomology group
W = H2i(Vks, Qℓ(i))
which is fixed by G. Here Qℓ(i) denotes the ith Tate twist, which means that this representation of the Galois group G is tensored with the ith power of the cyclotomic character.
The Tate conjecture states that the subspace WG of W fixed by the Galois group G is spanned, as a Qℓ-vector space, by the classes of codimension-i subvarieties of V. An algebraic cycle means a finite linear combination of subvarieties; so an equivalent statement is that every element of WG is the class of an algebraic cycle on V with Qℓ coefficients.
Known cases
The Tate conjecture for divisors (algebraic cycles of codimension 1) is a major open problem. For example, let f : X → C be a morphism from a smooth projective surface onto a smooth projective curve over a finite field. Suppose that the generic fiber F of f, which is a curve over the function field k(C), is smooth over k(C). Then the Tate conjecture for divisors on X is equivalent to the Birch and Swinnerton-Dyer conjecture for the Jacobian variety of F. By contrast, the Hodge |
https://en.wikipedia.org/wiki/Hybrid%20name | In botanical nomenclature, a hybrid may be given a hybrid name, which is a special kind of botanical name, but there is no requirement that a hybrid name should be created for plants that are believed to be of hybrid origin. The International Code of Nomenclature for algae, fungi, and plants (ICNafp) provides the following options in dealing with a hybrid:
A hybrid may get a name if the author considers it really necessary (in practice, authors tend to use this option for naturally occurring hybrids), but it is rather recommended to use parents' names as they are more informative (art. H.10B.1).
A hybrid may also be indicated by a formula listing the parents. Such a formula uses the multiplication sign "×" to link the parents.
"It is usually preferable to place the names or epithets in a formula in alphabetical order. The direction of a cross may be indicated by including the sexual symbols (♀: female; ♂: male) in the formula, or by placing the female parent first. If a non-alphabetical sequence is used, its basis should be clearly indicated." (H.2A.1)
Grex names can be given to orchid hybrids.
A hybrid name is treated like other botanical names, for most purposes, but differs in that:
A hybrid name does not necessarily refer to a morphologically distinctive group, but applies to all progeny of the parents, no matter how much they vary.
E.g., Magnolia × soulangeana applies to all progeny from the cross Magnolia denudata × Magnolia liliiflora, and from the crosses of all their progeny, as well as from crosses of any of the progeny back to the parents (backcrossing). This covers quite a range in flower colour.
Grex names (for orchids only) differ in that they do not cover crosses from plants within the grex (F2 hybrids) or back-crosses (crosses between a grex member and its parent).
Hybrids can be named with ranks, like other organisms covered by the ICNafp. They are nothotaxa, from notho- (hybrid) + taxon. If the parents (or postulated parents) differ in rank |
https://en.wikipedia.org/wiki/Substring%20index | In computer science, a substring index is a data structure which gives substring search in a text or text collection in sublinear time. If you have a document of length n, or a set of documents of total length n, you can locate all occurrences of a pattern p in O(|p|) time, plus time proportional to the number of occurrences. (See Big O notation.) A worked suffix-array example appears after the list of index types below.
The phrase full-text index is also often used for an index of all substrings of a text. But this is ambiguous, as it is also used for regular word indexes such as inverted files and document retrieval. See full text search.
Substring indexes include:
Suffix tree
Suffix array
N-gram index, an inverted file for all N-grams of the text
Compressed suffix array
FM-index
LZ-index |
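To make the idea concrete, here is a minimal suffix-array sketch (illustrative only: the naive construction below is far from the linear-time algorithms used in practice, and the key argument to bisect requires Python 3.10+):

```python
from bisect import bisect_left, bisect_right

def suffix_array(text):
    # Naive O(n^2 log n) construction: sort the suffix start positions.
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    # All suffixes starting with `pattern` form a contiguous block.
    key = lambda i: text[i:i + len(pattern)]
    lo = bisect_left(sa, pattern, key=key)
    hi = bisect_right(sa, pattern, key=key)
    return sorted(sa[lo:hi])

text = "banana"
sa = suffix_array(text)
print(occurrences(text, sa, "ana"))  # [1, 3]
```

The query itself touches only O(|p| log n) characters, independent of where or how often the pattern occurs.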
https://en.wikipedia.org/wiki/Inverted%20index | In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines. Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204.
There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created.
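A minimal sketch of the word-level variant (illustrative, not a production design): each word maps to (document id, position) pairs, and the stored positions are what make phrase queries possible.

```python
from collections import defaultdict

docs = {0: "the quick brown fox", 1: "the lazy brown dog"}

index = defaultdict(list)            # word -> list of (doc id, position)
for doc_id, text in docs.items():
    for pos, word in enumerate(text.split()):
        index[word].append((doc_id, pos))

print(index["brown"])                # [(0, 2), (1, 2)]

# Phrase query "brown fox": a hit needs "fox" one position after "brown".
hits = set(index["brown"]) & {(d, p - 1) for d, p in index["fox"]}
print(sorted(hits))                  # [(0, 2)]
```

Dropping the positions and keeping only document ids gives the record-level variant, which is smaller but cannot answer the phrase query above.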
Applications
The inverted index data structure is a central component of a typical search engine indexing algorithm. A goal of a search engine implementation is to optimize the speed of the query: find the documents where word X occurs. Once a forward index is developed, which stores lists of words per document, it is next inverted to develop an inverted index. Querying the forward index would require sequential iteration through each document and to each word to verify a matching document. The time, memory, and processing resources to perform such a query are not always technically realistic. Instead of listing the words per document in the forward index, the inverted index data str |
https://en.wikipedia.org/wiki/Hierarchical%20state%20routing | Hierarchical state routing (HSR), proposed in Scalable Routing Strategies for Ad Hoc Wireless Networks by Iwata et al. (1999), is a typical example of a hierarchical routing protocol.
HSR maintains a hierarchical topology, where elected clusterheads at the lowest level become members of the next higher level. On the higher level, superclusters are formed, and so on. Nodes which want to communicate to a node outside of their cluster ask their clusterhead to forward their packet to the next level, until a clusterhead of the other node is in the same cluster. The packet then travels down to the destination node.
Furthermore, HSR proposes to cluster nodes in a logical way instead of in a geographical way: members of the same company or in the same battlegroup are clustered together, assuming they will communicate much within the logical cluster.
HSR does not specify how a cluster is to be formed.
Routing algorithms |
https://en.wikipedia.org/wiki/Instantaneous%20phase%20and%20frequency | Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t), is the real-valued function:
φ(t) = arg[s(t)],
where arg is the complex argument function.
The instantaneous frequency is the temporal rate of change of the instantaneous phase.
And for a real-valued function s(t), it is determined from the function's analytic representation, sa(t):
φ(t) = arg[sa(t)] = arg[s(t) + j·ŝ(t)],
where ŝ(t) represents the Hilbert transform of s(t).
When φ(t) is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming sa(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred.
Examples
Example 1
s(t) = A cos(ωt + θ), so φ(t) = ωt + θ, where ω > 0.
In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined.
Example 2
s(t) = A sin(ωt) = A cos(ωt − π/2), so φ(t) = ωt − π/2, where ω > 0.
In both examples the local maxima of s(t) correspond to φ(t) = 2πN for integer values of N. This has applications in the field of computer vision.
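The examples above can be reproduced numerically. The sketch below (illustrative; the signal, phase offset, and sample rate are arbitrary choices) builds the analytic representation with the Hilbert transform, takes its argument as the wrapped phase, unwraps it, and differentiates to estimate instantaneous frequency:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                               # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)
s = np.cos(2 * np.pi * 50 * t + 0.3)      # A*cos(wt + theta), 50 Hz

sa = hilbert(s)                           # analytic representation sa(t)
phase = np.unwrap(np.angle(sa))           # unwrapped instantaneous phase
freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

print(freq[10:-10].mean())                # ~50, away from edge effects
```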
Formulations
Instantaneous angular frequency is defined as:
ω(t) = dφ(t)/dt,
and instantaneous (ordinary) frequency is defined as:
f(t) = (1/2π)·dφ(t)/dt,
where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t).
The inverse operation, which always unwraps phase, is:
φ(t) = 2π ∫ f(τ) dτ, integrated over τ from −∞ to t.
This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of sa(t), instead of the complex arg without concern of phase unwrapping.
Here 2πm1 and 2πm2 are the integer multiples of 2π necessary to add to unwrap the phase. At values of time, t, whe |
https://en.wikipedia.org/wiki/Scheinerman%27s%20conjecture | In mathematics, Scheinerman's conjecture, now a theorem, states that every planar graph is the intersection graph of a set of line segments in the plane. This conjecture was formulated by E. R. Scheinerman in his Ph.D. thesis (1984), following earlier results that every planar graph could be represented as the intersection graph of a set of simple curves in the plane. It was proven by Jérémie Chalopin and Daniel Gonçalves in 2009.
For instance, the graph G shown below to the left may be represented as the intersection graph of the set of segments shown below to the right. Here, vertices of G are represented by straight line segments and edges of G are represented by intersection points.
Scheinerman also conjectured that segments with only three directions would be sufficient to represent 3-colorable graphs, and conjectured that analogously every planar graph could be represented using four directions. If a graph is represented with segments having only k directions
and no two segments belong to the same line, then the graph can be colored using k colors, one color for each direction. Therefore, if every planar graph can be represented in this way with only four directions,
then the four color theorem follows.
It was proved earlier that every bipartite planar graph can be represented as an intersection graph of horizontal and vertical line segments, and that every triangle-free planar graph can be represented as an intersection graph of line segments having only three directions; the latter result implies Grötzsch's theorem that triangle-free planar graphs can be colored with three colors. It was also proved that if a planar graph G can be 4-colored in such a way that no separating cycle uses all four colors, then G has a representation as an intersection graph of segments.
It was further proved that planar graphs are in 1-STRING, the class of intersection graphs of simple curves in the plane that intersect each other in at most one crossing point per pair. This class is intermediate between the intersectio |
https://en.wikipedia.org/wiki/Damien%20Doligez | Damien Doligez is a French academic and programmer. He is best known for his role as a developer of the OCaml system, especially its garbage collector. He is a research scientist (chargé de recherche) at the French government research institution INRIA.
Activities
In 1990, Doligez and Xavier Leroy built an implementation of Caml (called Caml Light) based on a bytecode interpreter with a fast, sequential garbage collector, and began to extend it with support for concurrency. In 1996, Doligez was part of the team that built the first version of OCaml, and has been a core maintainer of the language since then (as of April 2023).
In 1994, Hal Finney issued a challenge on the cypherpunks mailing list to read an encrypted SSLv2 session. Doligez used spare computers at Inria, ENS and École polytechnique to break it after scanning half the key space in 8 days. He came in a close second in the competition, with the winning team announcing their result just two hours earlier.
Since 2006, Doligez has co-developed the Zenon theorem prover for first-order classic logic with equality. Zenon is the engine that drives the Focalize programming environment which can design and develop certified programs.
The environment is based on a functional language with some object-oriented features, allowing programmers to write the formal specification and the
proofs of their code within the same setting. Proof generation is assisted using Zenon and results are formally machine checked using the Coq proof checker.
In 2008, Doligez worked with Leslie Lamport and others to build the TLA+ proof manager which supports the incremental development and checking of hierarchically structured computer-assisted proofs. The proof manager project remains actively maintained and developed as of 2022. |
https://en.wikipedia.org/wiki/Serotiny | Serotiny in botany simply means 'following' or 'later'.
In the case of serotinous flowers, it means flowers which grow following the growth of leaves, or even more simply, flowering later in the season than is customary with allied species. Having serotinous leaves is also possible; these follow the flowering.
Serotiny is contrasted with coetany. Coetaneous flowers or leaves appear together with each other.
In the case of serotinous fruit, the term is used in the more general sense of plants that release their seed over a long period of time, irrespective of whether release is spontaneous; in this sense the term is synonymous with bradyspory.
In the case of certain Australian, North American, South African or Californian plants which grow in areas subjected to regular wildfires, serotinous fruit can also mean an ecological adaptation exhibited by some seed plants, in which seed release occurs in response to an environmental trigger, rather than spontaneously at seed maturation. The most common and best studied trigger is fire, and the term serotiny is used to refer to this specific case.
Possible triggers include:
Death of the parent plant or branch (necriscence)
Wetting (hygriscence)
Warming by the sun (soliscence)
Drying atmospheric conditions (xyriscence)
Fire (pyriscence) — this is the most common and best studied case, and the term serotiny is often used where pyriscence is intended.
Fire followed by wetting (pyrohydriscence)
Some plants may respond to more than one of these triggers. For example, Pinus halepensis exhibits primarily fire-mediated serotiny, but responds weakly to drying atmospheric conditions. Similarly, Sierra sequoias and some Banksia species are strongly serotinous with respect to fire, but also release some seed in response to plant or branch death.
Serotiny can occur in various degrees. Plants that retain all of their seed indefinitely in the absence of a trigger event are strongly serotinous. Plants that eventually release so |
https://en.wikipedia.org/wiki/Wagner%27s%20theorem | In graph theory, Wagner's theorem is a mathematical forbidden graph characterization of planar graphs, named after Klaus Wagner, stating that a finite graph is planar if and only if its minors include neither K5 (the complete graph on five vertices) nor K3,3 (the utility graph, a complete bipartite graph on six vertices). This was one of the earliest results in the theory of graph minors and can be seen as a forerunner of the Robertson–Seymour theorem.
Definitions and statement
A planar embedding of a given graph is a drawing of the graph in the Euclidean plane, with points for its vertices and curves for its edges, in such a way that the only intersections between pairs of edges are at a common endpoint of the two edges. A minor of a given graph is another graph formed by deleting vertices, deleting edges, and contracting edges. When an edge is contracted, its two endpoints are merged to form a single vertex. In some versions of graph minor theory the graph resulting from a contraction is simplified by removing self-loops and multiple adjacencies, while in other versions multigraphs are allowed, but this variation makes no difference to Wagner's theorem.
Wagner's theorem states that every graph has either a planar embedding, or a minor of one of two types, the complete graph K5 or the complete bipartite graph K3,3. (It is also possible for a single graph to have both types of minor.)
If a given graph is planar, so are all its minors: vertex and edge deletion obviously preserve planarity, and edge contraction can also be done in a planarity-preserving way, by leaving one of the two endpoints of the contracted edge in place and routing all of the edges that were incident to the other endpoint along the path of the contracted edge.
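The theorem can be observed with a graph library (a usage illustration, assuming the NetworkX package, not a proof): K5, K3,3, and any graph containing one of them as a minor, such as the Petersen graph, all fail a planarity test.

```python
import networkx as nx

graphs = {
    "K5": nx.complete_graph(5),
    "K3,3": nx.complete_bipartite_graph(3, 3),
    "Petersen (has a K5 minor)": nx.petersen_graph(),
}
for name, G in graphs.items():
    is_planar, _ = nx.check_planarity(G)
    print(name, is_planar)  # False for all three
```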
A minor-minimal non-planar graph is a graph that is not planar, but in which all proper minors (minors formed by at least one deletion or contraction) are planar. Another way of stating Wagner's theorem is that there are only two minor-mi |
https://en.wikipedia.org/wiki/Whitney%27s%20planarity%20criterion | In mathematics, Whitney's planarity criterion is a matroid-theoretic characterization of planar graphs, named after Hassler Whitney. It states that a graph G is planar if and only if its graphic matroid is also cographic (that is, it is the dual matroid of another graphic matroid).
In purely graph-theoretic terms, this criterion can be stated as follows: There must be another (dual) graph G'=(V',E') and a bijective correspondence between the edges E' and the edges E of the original graph G, such that a subset T of E forms a spanning tree of G if and only if the edges corresponding to the complementary subset E-T form a spanning tree of G'.
Algebraic duals
An equivalent form of Whitney's criterion is that a graph G is planar if and only if it has a dual graph whose graphic matroid is dual to the graphic matroid of G.
A graph whose graphic matroid is dual to the graphic matroid of G is known as an algebraic dual of G. Thus, Whitney's planarity criterion can be expressed succinctly as: a graph is planar if and only if it has an algebraic dual.
Topological duals
If a graph is embedded into a topological surface such as the plane, in such a way that every face of the embedding is a topological disk, then the dual graph of the embedding is defined as the graph (or in some cases multigraph) H that has a vertex for every face of the embedding, and an edge for every adjacency between a pair of faces.
According to Whitney's criterion, the following conditions are equivalent:
The surface on which the embedding exists is topologically equivalent to the plane, sphere, or punctured plane
H is an algebraic dual of G
Every simple cycle in G corresponds to a minimal cut in H, and vice versa
Every simple cycle in H corresponds to a minimal cut in G, and vice versa
Every spanning tree in G corresponds to the complement of a spanning tree in H, and vice versa.
It is possible to define dual graphs of graphs embedded on nonplanar surfaces such as the torus, but these duals do not ge |
https://en.wikipedia.org/wiki/Water%20Resistant%20mark | Water Resistant is a common mark stamped on the back of wrist watches to indicate how well a watch is sealed against the ingress of water. It is usually accompanied by an indication of the static test pressure that a sample of newly manufactured watches were exposed to in a leakage test. The test pressure can be indicated either directly in units of pressure such as bar, atmospheres, or (more commonly) as an equivalent water depth in metres (in the United States sometimes also in feet).
An indication of the test pressure in terms of water depth does not mean a water-resistant watch was designed for repeated long-term use in such water depths. For example, a watch marked 30 metres water resistant cannot be expected to withstand activity for longer time periods in a swimming pool, let alone continue to function at 30 metres under water. This is because the test is conducted only once using static pressure on a sample of newly manufactured watches. As only a small sample is tested, there is a small likelihood that any individual watch is not water resistant to the certified depth or even at all.
The test for qualifying a diving watch to bear the word "diver's" on the dial is for repeated usage at a given depth and includes safety margins to take into account factors like aging of the seals, the properties of water and seawater, rapidly changing water pressure and temperature, as well as dynamic mechanical stresses encountered by a watch. Every "diver's" badged watch has to be taken through a small but highly specified battery of tests designed to simulate those stresses, including being tested for continued water resistance up to 125% of the stated rating (a "200 meter" watch has to be pressurized to 250 metres water depth equivalent and show no signs of intrusion).
ISO 2281 water-resistant watches standard
The International Organization for Standardization (ISO) issued a standard for water-resistant watches which also prohibits the term waterproof to be used with w |
https://en.wikipedia.org/wiki/Operad | In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad O, one defines an algebra over O to be a set together with concrete operations on this set which behave just like the abstract operations of O. For instance, there is a Lie operad L such that the algebras over L are precisely the Lie algebras; in a sense L abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations.
History
Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972.
Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads:
"The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898."
The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer).
Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwac |
https://en.wikipedia.org/wiki/Inverse%20search | Inverse search (also called "reverse search") is a feature of some non-interactive typesetting programs, such as LaTeX and GNU LilyPond. These programs read an abstract, textual, definition of a document as input, and convert this into a graphical format such as DVI or PDF. In a windowing system, this typically means that the source code is entered in one editor window, and the resulting output is viewed in a different output window. Inverse search means that a graphical object in the output window works as a hyperlink, which brings you back to the line and column in the editor, where the clicked object was defined. The inverse search feature is particularly useful during proofreading.
Implementations
In TeX and LaTeX, the package srcltx provides an inverse search feature through DVI output files (e.g., with yap or Xdvi), while vpe, pdfsync and SyncTeX provide similar functionality for PDF output, among other techniques. The Comparison of TeX editors has a column on support of inverse search; most of them provide it nowadays.
GNU LilyPond provides an inverse search feature through PDF output files, since version 2.6. The program calls this feature Point-and-click.
Many integrated development environments for programming use inverse search to display compilation error messages, and during debugging when a breakpoint is hit. |
https://en.wikipedia.org/wiki/Boolean%20expression | In computer science, a Boolean expression is an expression used in programming languages that produces a Boolean value when evaluated. A Boolean value is either true or false. A Boolean expression may be composed of a combination of the Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.
Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.
Boolean operators
Most programming languages have the Boolean operators OR, AND and NOT; in C and some languages inspired by it, these are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively, while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde). In the mathematical literature the symbols used are often "+" (plus), "·" (dot) and overbar, or "∨" (vel), "∧" (et) and "¬" (not) or "′" (prime).
Some languages, e.g., Perl and Ruby, have two sets of Boolean operators, with identical functions but different precedence. Typically these languages use and, or and not for the lower precedence operators.
Some programming languages derived from PL/I have a bit string type and use BIT(1) rather than a separate Boolean type. In those languages the same operators serve for Boolean operations and bitwise operations. The languages represent OR, AND, NOT and EXCLUSIVE OR by "|", "&", "¬" (prefix) and "¬" (infix).
Short-circuit operators
Some programming languages, e.g., Ada, have short-circuit Boolean operators. These operators use a lazy evaluation, that is, if the value of the expression can be determined from the left hand Boolean expression then they do not evaluate the right hand Boolean expression. As a result, there may be side effects that only occur for one value of the left hand operand.
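A minimal demonstration of this behavior (in Python, whose and/or operators short-circuit; an illustration, not one of the languages named above): the right-hand operand runs only when the left-hand operand does not already decide the result, so its side effect may or may not happen.

```python
def probe():
    print("right operand evaluated")  # the observable side effect
    return True

False and probe()  # prints nothing: the result is already False
True or probe()    # prints nothing: the result is already True
True and probe()   # prints "right operand evaluated"
```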
Examples
The expression is evaluated as .
The expression is evaluated as .
and are equivalent Boolean expressions, both of which are ev |
https://en.wikipedia.org/wiki/Arctic%E2%80%93alpine | An Arctic–alpine taxon is one whose natural distribution includes the Arctic and more southerly mountain ranges, particularly the Alps. The presence of identical or similar taxa in both the tundra of the far north, and high mountain ranges much further south is testament to the similar environmental conditions found in the two locations. Arctic–alpine plants, for instance, must be adapted to the low temperatures, extremes of temperature, strong winds and short growing season; they are therefore typically low-growing and often form mats or cushions to reduce water loss through evapotranspiration.
It is often assumed that an organism which currently has an Arctic–alpine distribution was, during colder periods of the Earth's history (such as during the Pleistocene glaciations), widespread across the area between the Arctic and the Alps. This is known from pollen records to be true for Dryas octopetala, for instance. In other cases, the disjunct distribution may be the result of long-distance dispersal.
Examples of Arctic–alpine plants include:
Arabis alpina
Betula nana
Draba incana
Dryas octopetala
Gagea serotina (syn. Lloydia serotina)
Loiseleuria procumbens
Micranthes stellaris
Oxyria digyna
Ranunculus glacialis
Salix herbacea
Saussurea alpina
Saxifraga oppositifolia
Silene acaulis
Thalictrum alpinum
Veronica alpina |
https://en.wikipedia.org/wiki/Judgment%20of%20Paris%20%28wine%29 | The Paris Wine Tasting of 1976, also known as the Judgment of Paris, was a wine competition organized in Paris on 24 May 1976 by Steven Spurrier, a British wine merchant and his colleague, Patricia Gallagher, in which French judges carried out two blind tasting comparisons: one of top-quality Chardonnays and another of red wines (Bordeaux wines from France and Cabernet Sauvignon wines from Napa, California). A Napa wine rated best in each category, which caused surprise as France was generally regarded as being the foremost producer of the world's best wines. Spurrier sold only French wine and believed that the California wines would not win.
The event's informal name "Judgment of Paris" is an allusion to the ancient Greek myth.
The wines
Red wines
White wines
The judges
The eleven judges were (in alphabetical order):
Method
Blind tasting was performed and the judges were asked to grade each wine out of 20 points. No specific grading framework was given, leaving the judges free to grade according to their own criteria.
Rankings of the wines preferred by individual judges were based on the grades they individually attributed.
An overall ranking of the wines preferred by the jury was also established by averaging the sum of each judge's individual grades (arithmetic mean). However, the grades of Patricia Gallagher and Steven Spurrier were not taken into account, thus counting only the grades of the French judges.
The results
White wines
California Chardonnays vs. Burgundy Chardonnays
Official jury results:
Red wines
California Cabernet Sauvignon vs. Bordeaux
Official jury results:
Original grades are out of 20 points.
Breakdown by judge
The original grades (out of 20 points) are shown, in alphabetical order by judge.
Pierre Brejoux
Claude Dubois-Millot
Michel Dovaz
Patricia Gallagher
Odette Kahn |
https://en.wikipedia.org/wiki/Yoga%20as%20therapy | Yoga as therapy is the use of yoga as exercise, consisting mainly of postures called asanas, as a gentle form of exercise and relaxation applied specifically with the intention of improving health. This form of yoga is widely practised in classes, and may involve meditation, imagery, breath work (pranayama) and calming music as well as postural yoga.
At least three types of health claims have been made for yoga: magical claims for medieval haṭha yoga, including the power of healing; unsupported claims of benefits to organ systems from the practice of asanas; and more or less well supported claims of specific medical and psychological benefits from studies of differing sizes using a wide variety of methodologies.
Systematic reviews have found beneficial effects of yoga on low back pain and depression but, despite much investigation, little or no evidence of benefit for other specific medical conditions. The study of trauma-sensitive yoga has been hampered by weak methodology.
Context
Yoga classes used as therapy usually consist of asanas (postures used for stretching), pranayama (breathing exercises), and relaxation in savasana (lying down). The physical asanas of modern yoga are related to medieval haṭha yoga tradition, but they were not widely practiced in India before the early 20th century.
The number of schools and styles of yoga in the Western world has grown rapidly from the late 20th century. By 2012, there were at least 19 widespread styles from Ashtanga Vinyasa Yoga to Viniyoga. These emphasise different aspects including aerobic exercise, precision in the asanas, and spirituality in the haṭha yoga tradition. These aspects can be illustrated by schools with distinctive styles. Bikram Yoga has an aerobic exercise style, with heated rooms and a fixed sequence of 2 breathing exercises and 26 asanas performed in every session. Iyengar Yoga emphasises correct alignment in the postures, working slowly, if necessary with props, and ending with relaxation. Siva |
https://en.wikipedia.org/wiki/Dual%20curve | In projective geometry, a dual curve of a given plane curve C is a curve in the dual projective plane consisting of the set of lines tangent to C. There is a map from a curve to its dual, sending each point to the point dual to its tangent line. If C is algebraic then so is its dual and the degree of the dual is known as the class of the original curve. The equation of the dual of C, given in line coordinates, is known as the tangential equation of C. Duality is an involution: the dual of the dual of C is the original curve C.
The construction of the dual curve is the geometrical underpinning for the Legendre transformation in the context of Hamiltonian mechanics.
Equations
Let f(x, y, z) = 0 be the equation of a curve in homogeneous coordinates on the projective plane. Let Xx + Yy + Zz = 0 be the equation of a line, with (X, Y, Z) being designated its line coordinates in a dual projective plane. The condition that the line is tangent to the curve can be expressed in the form F(X, Y, Z) = 0, which is the tangential equation of the curve.
At a point (p, q, r) on the curve, the tangent is given by
x ∂f/∂x(p, q, r) + y ∂f/∂y(p, q, r) + z ∂f/∂z(p, q, r) = 0.
So Xx + Yy + Zz = 0 is a tangent to the curve if
X = λ ∂f/∂x(p, q, r), Y = λ ∂f/∂y(p, q, r), Z = λ ∂f/∂z(p, q, r).
Eliminating p, q, r, and λ from these equations, along with f(p, q, r) = 0, gives the equation in X, Y and Z of the dual curve.
For example, let C be the conic f(x, y, z) = ax² + by² + cz² = 0. The dual is found by eliminating p, q, r, and λ from the equations
X = 2λap, Y = 2λbq, Z = 2λcr, ap² + bq² + cr² = 0.
The first three equations are easily solved for p, q, r, and substituting in the last equation produces
X²/(4λ²a) + Y²/(4λ²b) + Z²/(4λ²c) = 0.
Clearing 4λ² from the denominators, the equation of the dual is
X²/a + Y²/b + Z²/c = 0.
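The elimination can be reproduced with a computer algebra system. The sketch below (an illustration of the procedure just described, using the self-dual circle a = b = 1, c = −1) eliminates p, q, r and λ with a Gröbner basis:

```python
from sympy import symbols, groebner

X, Y, Z, p, q, r, lam = symbols('X Y Z p q r lam')

f = p**2 + q**2 - r**2          # the conic at the point (p, q, r)
eqs = [
    X - 2 * lam * p,            # X = lam * df/dx
    Y - 2 * lam * q,            # Y = lam * df/dy
    Z + 2 * lam * r,            # Z = lam * df/dz = -2*lam*r
    f,                          # (p, q, r) lies on the curve
]

# A lex Groebner basis with p, q, r, lam listed first eliminates them;
# basis elements involving only X, Y, Z cut out the dual curve.
gb = groebner(eqs, p, q, r, lam, X, Y, Z, order='lex')
print([e for e in gb.exprs if e.free_symbols <= {X, Y, Z}])
# expect a multiple of X**2 + Y**2 - Z**2
```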
Consider a parametrically defined curve in projective coordinates (x(t), y(t), z(t)). Its projective tangent line is a linear plane spanned by the point of tangency and the tangent vector, with linear equation coefficients given by the cross product:
(X, Y, Z) = (x, y, z) × (x′, y′, z′),
which in affine coordinates (z = 1) is:
(X, Y, Z) = (−y′, x′, xy′ − yx′).
The dual of an inflection point will give a cusp and two points sharing the same tangent line will give a self-intersection point on the dual.
From the projective description, one may compute the dual of the dual, which is projectively equivalent to the original curve C.
Properties |
https://en.wikipedia.org/wiki/Hypholoma%20fasciculare | Hypholoma fasciculare, commonly known as the sulphur tuft or clustered woodlover, is a common woodland mushroom, often in evidence when hardly any other mushrooms are to be found. This saprotrophic small gill fungus grows prolifically in large clumps on stumps, dead roots or rotting trunks of broadleaved trees.
The "sulphur tuft" is bitter and poisonous; consuming it can cause vomiting, diarrhea and convulsions. The principal toxin is a steroid known as fasciculol E.
Taxonomy and naming
The specific epithet is derived from the Latin fascicularis 'in bundles' or 'clustered', referring to its habit of growing in clumps. Its name in Japanese is Nigakuritake (苦栗茸, means "Bitter kuritake").
Description
The hemispherical cap is smooth and sulphur yellow with an orange-brown centre and whitish margin. The crowded gills are initially yellow but darken to a distinctive green colour as the blackish spores develop on the yellow flesh. It has a purple-brown spore print. The stipe is 4–10 mm wide, light yellow, orange-brown below, often with an indistinct ring zone coloured dark by the spores. The taste is very bitter; the bitterness fades on cooking, but the mushroom remains poisonous.
Similar species
The edible Hypholoma capnoides is similar, but lacks the greenish-yellow gills and bitter taste. H. sublateritium is similar as well, with a reddish cap.
Distribution and habitat
Hypholoma fasciculare grows prolifically on the dead wood of both deciduous and coniferous trees. It is more commonly found on decaying deciduous wood due to the lower lignin content of this wood relative to coniferous wood. Hypholoma fasciculare is widespread and abundant in northern Europe and North America. It has been recorded from Iran, and also eastern Anatolia in Turkey. It can appear anytime from spring to autumn.
Use in forestry
Hypholoma fasciculare has been used successfully as an experimental treatment to competitively displace a common fungal disease of conife |
https://en.wikipedia.org/wiki/Bicinchoninic%20acid%20assay | The bicinchoninic acid assay (BCA assay), also known as the Smith assay, after its inventor, Paul K. Smith at the Pierce Chemical Company, now part of Thermo Fisher Scientific, is a biochemical assay for determining the total concentration of protein in a solution (0.5 μg/mL to 1.5 mg/mL), similar to the Lowry protein assay, Bradford protein assay or biuret reagent. The total protein concentration is exhibited by a color change of the sample solution from green to purple in proportion to protein concentration, which can then be measured using colorimetric techniques. The BCA assay was patented by Pierce Chemical Company in 1989, and the patent expired in 2006.
Mechanism
A stock BCA solution contains the following ingredients in a highly alkaline solution with a pH 11.25: bicinchoninic acid, sodium carbonate, sodium bicarbonate, sodium tartrate, and copper(II) sulfate pentahydrate.
The BCA assay primarily relies on two reactions. First, the peptide bonds in protein reduce Cu2+ ions from the copper(II) sulfate to Cu1+ (a temperature dependent reaction). The amount of Cu2+ reduced is proportional to the amount of protein present in the solution. Next, two molecules of bicinchoninic acid chelate with each Cu1+ ion, forming a purple-colored complex that strongly absorbs light at a wavelength of 562 nm.
The bicinchoninic acid Cu1+ complex is influenced in protein samples by the presence of cysteine/cystine, tyrosine, and tryptophan side chains. At higher temperatures (37 to 60 °C), peptide bonds assist in the formation of the reaction complex. Incubating the BCA assay at higher temperatures is recommended as a way to increase assay sensitivity while minimizing the variances caused by unequal amino acid composition.
The amount of protein present in a solution can be quantified by measuring the absorption spectra and comparing with protein solutions of known concentration.
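A typical quantification workflow (sketched below with invented absorbance numbers, purely for illustration) fits a linear standard curve of A562 against known concentrations and inverts it for the unknown sample:

```python
import numpy as np

standards = np.array([0.0, 0.25, 0.5, 1.0, 1.5])  # known protein, mg/mL
a562 = np.array([0.05, 0.21, 0.38, 0.72, 1.05])   # measured absorbance (made up)

slope, intercept = np.polyfit(standards, a562, 1)  # linear standard curve
unknown_a562 = 0.55
print((unknown_a562 - intercept) / slope)          # estimated mg/mL
```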
Limitations
The BCA assay is largely incompatible with reducing agents and metal chelators, although tra |
https://en.wikipedia.org/wiki/Geon%20%28psychology%29 | Geons are the simple 2D or 3D forms such as cylinders, bricks, wedges, cones, circles and rectangles corresponding to the simple parts of an object in Biederman's recognition-by-components theory. The theory proposes that the visual input is matched against structural representations of objects in the brain. These structural representations consist of geons and their relations (e.g., an ice cream cone could be broken down into a sphere located above a cone). Only a modest number of geons (< 40) are assumed. When combined in different relations to each other (e.g., on-top-of, larger-than, end-to-end, end-to-middle) and coarse metric variation such as aspect ratio and 2D orientation, billions of possible 2- and 3-geon objects can be generated. Two classes of shape-based visual identification that are not done through geon representations, are those involved in: a) distinguishing between similar faces, and b) classifications that don’t have definite boundaries, such as that of bushes or a crumpled garment. Typically, such identifications are not viewpoint-invariant.
Properties of geons
There are 4 essential properties of geons:
View-invariance: Each geon can be distinguished from the others from almost any viewpoints except for “accidents” at highly restricted angles in which one geon projects an image that could be a different geon, as, for example, when an end-on view of a cylinder can be a sphere or circle. Objects represented as an arrangement of geons would, similarly, be viewpoint invariant.
Stability or resistance to visual noise: Because the geons are simple, they are readily supported by the Gestalt property of smooth continuation, rendering their identification robust to partial occlusion and degradation by visual noise as, for example, when a cylinder might be viewed behind a bush.
Invariance to illumination direction and surface markings and texture.
High distinctiveness: The geons differ qualitatively, with only two or three levels of an attribute, s |
https://en.wikipedia.org/wiki/Outbreeding%20depression | In biology, outbreeding depression happens when crosses between two genetically distant groups or populations result in a reduction of fitness. The concept is in contrast to inbreeding depression, although the two effects can occur simultaneously. Outbreeding depression is a risk that sometimes limits the potential for genetic rescue or augmentations. It is considered a postzygotic response because outbreeding depression is usually noted in the performance of the progeny.
Outbreeding depression manifests in two ways:
Generating intermediate genotypes that are less fit than either parental form. For example, selection in one population might favor a large body size, whereas in another population small body size might be more advantageous, while individuals with intermediate body sizes are comparatively disadvantaged in both populations. As another example, in the Tatra Mountains, the introduction of ibex from the Middle East resulted in hybrids which produced calves at the coldest time of the year.
Breakdown of biochemical or physiological compatibility. Within isolated breeding populations, alleles are selected in the context of the local genetic background. Because the same alleles may have rather different effects in different genetic backgrounds, this can result in different locally coadapted gene complexes. Outcrossing between individuals with differently adapted gene complexes can result in disruption of this selective advantage, resulting in a loss of fitness.
Mechanisms for generating outbreeding depression
The different mechanisms of outbreeding depression can operate at the same time. However, determining which mechanism is likely to occur in a particular population can be very difficult.
There are three main mechanisms for generating outbreeding depression:
Fixed chromosomal differences resulting in the partial or complete sterility of F1 hybrids.
Adaptive differentiation among populations
Population bottlenecks and genetic drift
Some mechanisms |
https://en.wikipedia.org/wiki/Critical%20ionization%20velocity | Critical ionization velocity (CIV), or critical velocity (CV), is the relative velocity between a neutral gas and plasma (an ionized gas), at which the neutral gas will start to ionize. If more energy is supplied, the velocity of the atoms or molecules will not exceed the critical ionization velocity until the gas becomes almost fully ionized.
The phenomenon was predicted by Swedish engineer and plasma scientist, Hannes Alfvén, in connection with his model on the origin of the Solar System (1942). At the time, no known mechanism was available to explain the phenomenon, but the theory was subsequently demonstrated in the laboratory. Subsequent research by Brenning and Axnäs (1988) have suggested that a lower hybrid plasma instability is involved in transferring energy from the larger ions to electrons so that they have sufficient energy to ionize. Application of the theory to astronomy through a number of experiments have produced mixed results.
Experimental research
The Royal Institute of Technology in Stockholm carried out the first laboratory tests, and found that (a) the relative velocity between a plasma and neutral gas could be increased to the critical velocity, but then additional energy put into the system went into ionizing the neutral gas, rather than into
increasing the relative velocity, (b) the critical velocity is roughly independent of the pressure and magnetic field.
In 1973, Lars Danielsson published a review of critical ionization velocity, and concluded that the existence of the phenomenon "is proved by sufficient experimental evidence". In 1976, Alfvén reported that "The first observation of the critical velocity effect under cosmic conditions was reported by Manka et al. (1972) from the Moon. When an abandoned lunar excursion module was made to impact on the dark side of the Moon not very far from the terminator, a gas cloud was produced which when it had expanded so that it was hit by the solar wind gave rise to superthermal electrons.
https://en.wikipedia.org/wiki/Urban%20gardening | Urban gardening may refer to:
Urban Garden (sculpture), Seattle, Washington, U.S.
The practice of growing vegetables, fruit and plants in urban areas, such as schools, backyards or apartment balconies.
Container garden - Growing plants in pots or other containers, rather than in ground
Urban horticulture - Growing crops or ornamental plants in urban or semi-urban setting
Urban agriculture - Food production in urban setting
Windowbox
Urban park |
https://en.wikipedia.org/wiki/Monkey%20testing | In software testing, monkey testing is a technique where the user tests the application or system by providing random inputs and checking the behavior, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automated unit tests.
While the source of the name "monkey" is uncertain, it is believed by some that the name has to do with the infinite monkey theorem, which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. Some others believe that the name comes from the classic Mac OS application "The Monkey" developed by Steve Capps prior to 1983. It used journaling hooks to feed random events into Mac programs, and was used to test for bugs in MacPaint.
A monkey testing tool is also included in Android Studio as part of the standard testing tools for stress testing.
Types of monkey testing
Monkey testing can be categorized into smart monkey tests or dumb monkey tests.
Smart monkey tests
Smart monkeys are usually identified by the following characteristics:
Have a brief idea about the application or system
Know its own location, where it can go and where it has been
Know its own capability and the system's capability
Focus on breaking the system
Report bugs they find
Some smart monkeys are also referred to as brilliant monkeys, which perform testing according to the user's behavior and can estimate the probability of certain bugs.
Dumb monkey tests
Dumb monkeys, also known as "ignorant monkeys", are usually identified by the following characteristics (a minimal sketch of such a test follows the list):
Have no knowledge about the application or system
Don't know if their input or behavior is valid or invalid
Don't know their or the system's capabilities, nor the flow of the application
Can find fewer bugs than smart monkeys, but can also find important bugs that are hard to catch by smart monkeys
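A generic sketch of a dumb-monkey style unit test (illustrative; the function under test, seed, and input sizes are arbitrary choices, not a specific tool from the article): random, possibly invalid inputs are thrown at the code, and the only check is that nothing crashes.

```python
import random
import string

def function_under_test(s):
    return s.strip().lower()

random.seed(0)  # a fixed seed makes any failure reproducible
for _ in range(10_000):
    junk = ''.join(random.choices(string.printable,
                                  k=random.randint(0, 50)))
    try:
        function_under_test(junk)
    except Exception as exc:  # any crash is a finding for a dumb monkey
        print(f"crash on {junk!r}: {exc}")
        raise
```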
Advantages and disadvantages
Advantages
Mo |
https://en.wikipedia.org/wiki/Hydropneumatic%20device | Hydropneumatic devices such as hydropneumatic accumulators or pulsation dampeners are devices which prevent the creation of a shock wave at an early stage, rather than absorbing, alleviating, arresting, attenuating, or suppressing a shock that already exists. They include pulsation dampeners, hydropneumatic accumulators, water hammer preventers, and water hammer arrestors.
Hydropneumatic water hammer preventers
Purpose
To provide a chamber of sufficient volume to allow an extension of time in which a given flow may be accelerated or decelerated without sudden large change in pressure. See also expansion tank. When shock waves of an incompressible fluid within a piping system exist, especially at a high velocity, there is a high chance for water hammer. To help prevent a swing check from slamming and causing water hammer, a spring-assisted non-slam check valve is installed. Rather than relying on flow or gravity to be closed, the non-slam design prevents a sudden velocity decrease and reverse flow.
Characteristics
The chamber is generally adapted to contain a separator member which prevents the escape of a pre-filled compressed inert gas.
Applications
Placed closely before a valve that is closed quickly. Stops water hammering.
Placed immediately after the discharge of a pump that is started fast into a pipe full of a long column of liquid. Reduces start up surge pressure.
Placed immediately after a pump, which when caused to stop suddenly, enables a vacuum to form, which pulls the flow back towards the pump. Prevents an implosion bang.
Variations
Having a separator membrane into the interior of which the liquid is communicated. Used for corrosive liquids, so that the chamber metal can be of low cost.
Having a metal bellows separator membrane for use at low and higher temperatures than are compatible with an elastomeric or plastomeric membrane.
Having a float separator to reduce the rate of gas absorp |
https://en.wikipedia.org/wiki/Prenatal%20perception | Prenatal perception is the study of the extent of somatosensory and other types of perception during pregnancy. In practical terms, this means the study of fetuses; none of the accepted indicators of perception are present in embryos. Studies in the field inform the abortion debate, along with certain related pieces of legislation in countries affected by that debate. As of 2022, there is no scientific consensus on whether a fetus can feel pain.
Prenatal hearing
Numerous studies have found evidence indicating a fetus's ability to respond to auditory stimuli. The earliest fetal response to a sound stimulus has been observed at 16 weeks' gestational age, while the auditory system is fully functional at 25–29 weeks' gestation. At 33–41 weeks' gestation, the fetus is able to distinguish its mother's voice from others.
Prenatal pain
The hypothesis that human fetuses are capable of perceiving pain in the first trimester has little support, although fetuses at 14 weeks may respond to touch. A multidisciplinary systematic review from 2005 found limited evidence that thalamocortical pathways begin to function "around 29 to 30 weeks' gestational age", only after which a fetus is capable of feeling pain.
In March 2010, the Royal College of Obstetricians and Gynaecologists submitted a report concluding that "Current research shows that the sensory structures are not developed or specialized enough to respond to pain in a fetus of less than 24 weeks".
The report specifically identified the anterior cingulate as the area of the cerebral cortex responsible for pain processing. The anterior cingulate is part of the cerebral cortex, which begins to develop in the fetus at week 26. A co-author of that report revisited the evidence in 2020, specifically the functionality of the thalamic projections into the cortical subplate, and posited "an immediate and unreflective pain experience...from as early as 12 weeks."
There is a consensus among developmental neurobiologists that the |
https://en.wikipedia.org/wiki/Isovanillin | Isovanillin is a phenolic aldehyde, an organic compound and isomer of vanillin. It is a selective inhibitor of aldehyde oxidase. It is not a substrate of that enzyme, and is metabolized by aldehyde dehydrogenase into isovanillic acid, which could make it a candidate drug for use in alcohol aversion therapy. Isovanillin can be used as a precursor in the chemical total synthesis of morphine. The proposed metabolism of isovanillin (and vanillin) in rat has been described in literature, and is part of the WikiPathways machine readable pathway collection.
See also
Vanillin
2-Hydroxy-5-methoxybenzaldehyde
ortho-Vanillin
2-Hydroxy-4-methoxybenzaldehyde |
https://en.wikipedia.org/wiki/Stacking%20%28chemistry%29 | In chemistry, pi stacking (also called π–π stacking) refers to the presumptive attractive, noncovalent pi interactions (orbital overlap) between the pi bonds of aromatic rings. However, this is a misleading description of the phenomenon, since direct stacking of aromatic rings (the "sandwich interaction") is electrostatically repulsive. What is more commonly observed is either a staggered stacking (parallel displaced) or pi-teeing (perpendicular T-shaped) interaction, both of which are electrostatically attractive. For example, the most commonly observed interaction between aromatic rings of amino acid residues in proteins is staggered stacking, followed by a perpendicular orientation. Sandwiched orientations are relatively rare.
Pi stacking is repulsive because it places carbon atoms bearing partial negative charge from one ring on top of partially negatively charged carbon atoms of the second ring, and hydrogen atoms bearing partial positive charge on top of hydrogen atoms that likewise carry partial positive charge. In staggered stacking, one of the two aromatic rings is offset sideways so that the partially negatively charged carbon atoms of the first ring sit above the partially positively charged hydrogen atoms of the second ring, making the electrostatic interactions attractive. Likewise, pi-teeing interactions, in which the two rings are oriented perpendicular to each other, are electrostatically attractive, as they place partially positively charged hydrogen atoms in close proximity to partially negatively charged carbon atoms. An alternative explanation attributes the preference for staggered stacking to the balance between van der Waals interactions (attractive dispersion plus Pauli repulsion).
These staggered stacking and π-teeing interactions between aromatic rings are important in nucleobase stacking within DNA and RNA molecules, protein folding, template-directed synthesis, materials science, and molecular recognition. |
https://en.wikipedia.org/wiki/Quantum%20defect | The term quantum defect refers to two concepts: energy loss in lasers and energy levels in alkali elements. Both deal with quantum systems where matter interacts with light.
In laser science
In laser science, the term "quantum defect" refers to the fact that the energy of a pump photon is generally higher than that of a signal photon (photon of the output radiation). The energy difference is lost to heat, which may carry away the excess entropy delivered by the multimode incoherent pump.
The quantum defect of a laser can be defined as the part of the energy of the pumping photon which is lost (not turned into photons at the lasing wavelength) in the gain medium during lasing. At a given frequency of pump ω_p and given frequency of lasing ω_s, the quantum defect q = ℏω_p − ℏω_s. Such a quantum defect has dimensions of energy; for efficient operation, the temperature of the gain medium (measured in units of energy) should be small compared to the quantum defect.
The quantum defect may also be defined as follows: at a given frequency of pump and given frequency of lasing, the quantum defect q = 1 − ω_s/ω_p; according to this definition, the quantum defect is dimensionless.
At a fixed pump frequency, the higher the quantum defect, the lower is the upper bound for the power efficiency.
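To make both definitions concrete, here is a small Python sketch computing the two forms of the quantum defect; the 940 nm pump and 1030 nm lasing wavelengths are assumed example values (typical of an Yb-doped fiber laser), not taken from the text:

```python
# Quantum defect of a laser, in both the energy and dimensionless forms.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

lam_p, lam_s = 940e-9, 1030e-9   # assumed pump / signal wavelengths, m

q_energy = h * c * (1/lam_p - 1/lam_s)  # hbar*(omega_p - omega_s), in joules
q_rel = 1 - lam_p / lam_s               # 1 - omega_s/omega_p, dimensionless

print(f"energy defect:   {q_energy/eV*1e3:.0f} meV")  # ~115 meV per photon
print(f"relative defect: {q_rel:.3f}")                # ~0.087 (8.7% lost as heat)
```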
In hydrogenic atoms
The quantum defect of an alkali atom refers to a correction to the energy levels predicted by the classic calculation of the hydrogen wavefunction. A simple model of the potential experienced by the single valence electron of an alkali atom is that the ionic core acts as a point charge with effective charge e and the wavefunctions are hydrogenic. However, the structure of the ionic core alters the potential at small radii.
The 1/r potential in the hydrogen atom leads to an electron binding energy given by
E_n = −hcR_∞/n²,
where R_∞ is the Rydberg constant, h is Planck's constant, c is the speed of light and n is the principal quantum number.
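For context, the standard textbook way the defect enters (not quoted from this excerpt) is to replace the principal quantum number n by an effective value n − δ_ℓ, where the quantum defect δ_ℓ depends mainly on the orbital angular momentum ℓ:

```latex
E_{n,\ell} = -\frac{h c R_\infty}{\left(n - \delta_\ell\right)^2}
```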
For alkali atoms with small orbital angular momentum, the wavefunction of th |
https://en.wikipedia.org/wiki/Central%20England%20temperature | The Central England Temperature (CET) record is a meteorological dataset originally published by Professor Gordon Manley in 1953 and subsequently extended and updated in 1974, following many decades of painstaking work. The monthly mean surface air temperatures, for the Midlands region of England, are given (in degrees Celsius) from the year 1659 to the present.
This record represents the longest series of monthly temperature observations in existence. It is a valuable dataset for meteorologists and climate scientists. It is monthly from 1659, and a daily version has been produced from 1772. The monthly means from November 1722 onwards are given to a precision of 0.1 °C. The earliest years of the series, from 1659 to October 1722 inclusive, for the most part only have monthly means given to the nearest degree or half a degree, though there is a small 'window' of 0.1 degree precision from 1699 to 1706 inclusive. This reflects the number, accuracy, reliability and geographical spread of the temperature records that were available for the years in question.
Data quality
Although best efforts have been made by Manley and subsequent researchers to quality control the series, there are data problems in the early years, with some non-instrumental data used. These problems account for the lower precision to which the early monthly means were quoted by Manley. Parker et al. (1992) addressed this by not using data prior to 1772, since their daily series required more accurate data than did the original series of monthly means. Before 1722, instrumental records do not overlap and Manley used a non-instrumental series from Utrecht compiled by Labrijn (1945), to make the monthly central England temperature (CET) series complete.
For a period early in the 21st century there were two versions of the series: the "official" version maintained by the Hadley Centre in Exeter, and a version maintained by the late Philip Eden, which he argued was more consistent with the se |
https://en.wikipedia.org/wiki/Pores%20of%20Kohn | The pores of Kohn (also known as interalveolar connections or alveolar pores) are discrete holes in the walls of adjacent alveoli. Cuboidal type II alveolar cells, which produce surfactant, usually form part of the aperture.
Etymology
The pores of Kohn take their name from the German physician and pathologist Hans Nathan Kohn (1866–1935) who first described them in 1893.
Development
They are absent in human newborns. They develop at 3–4 years of age along with canals of Lambert during the process of thinning of alveolar septa.
Function
The pores allow the passage of materials such as fluid and bacteria; this is an important mechanism for the spread of infection in lobar pneumonia and for the spread of fibrin during the grey hepatisation phase of recovery from it. They also equalize the pressure in adjacent alveoli and, combined with increased distribution of surfactant, play an important role in preventing collapse of the lung.
Unlike in adults, these inter-alveolar connections are poorly developed in children, which helps limit the spread of infection. This is thought to contribute to round pneumonia. |
https://en.wikipedia.org/wiki/Ryo%20Kawasaki | Ryo Kawasaki was a Japanese jazz fusion guitarist, composer and band leader, best known as one of the first musicians to develop and popularise the fusion genre and for helping to develop the guitar synthesizer in collaboration with Roland Corporation and Korg. His album Ryo Kawasaki and the Golden Dragon Live was one of the first all-digital recordings, and he created the Kawasaki Synthesizer for the Commodore 64. During the 1960s, he played with various Japanese jazz groups and also formed his own bands. In the early 1970s, he moved to New York City, where he settled and worked with Gil Evans, Elvin Jones, Chico Hamilton, Ted Curson, and Joanne Brackeen, amongst others. In the mid-1980s, Kawasaki drifted out of performing music in favour of writing music software for computers. He also produced several techno dance singles, formed his own record company called Satellites Records, and later returned to jazz fusion in 1991.
Life
Early life (1947–1968)
Ryo Kawasaki was born on February 25, 1947, in Kōenji, Tokyo, while Japan was still struggling and recovering from the early post World War II period. His father, Torao Kawasaki, was a Japanese diplomat who had worked for The Japanese Ministry of Foreign Affairs since 1919. Torao worked at several Japanese consulates and embassies, including San Francisco, Honolulu, Fengtian (then capital of Manchuria, now Shenyang in China), Shanghai, and Beijing while active as an English teacher and translator for official diplomatic conferences. Ryo's mother, Hiroko, was also multilingual, and spoke German, Russian, English, and Chinese aside from her native tongue Japanese. Hiroko grew up in Manchuria and then met Torao in Shanghai. Torao was already 58 years old when Ryo was born as an only child.
Kawasaki's mother encouraged him to take piano and ballet lessons; he took voice lessons and solfège at age four and violin lessons at five, and he was reading music before elementary school. As a grade-schooler, he began a lifelong fascination with |
https://en.wikipedia.org/wiki/Jovan%20Karamata | Jovan Karamata (; February 1, 1902 – August 14, 1967) was a Serbian mathematician. He is remembered for contributions to analysis, in particular, the Tauberian theory and the theory of slowly varying functions. Considered to be among the most influential Serbian mathematicians of the 20th century, Karamata was one of the founders of the Mathematical Institute of the Serbian Academy of Sciences and Arts, established in 1946.
Life
Jovan Karamata was born in Zagreb on February 1, 1902, into a family descended from merchants based in the city of Zemun, which was then in Austria-Hungary and is now in Serbia. Being of Aromanian origin, the family traced its roots back to Pyrgoi, Eordaia, West Macedonia (his father Ioannis Karamatas was the president of the "Greek Community of Zemun"); Aromanians mainly lived and still live in the area of modern Greece. The family's business affairs on the borders of the Austro-Hungarian and Ottoman empires were very well known. In 1914, he finished most of his primary schooling in Zemun, but because of constant warfare on the borderlands, Karamata's father sent him, together with his brothers and his sister, to Switzerland for their own safety. In Lausanne in 1920, he finished a secondary school oriented towards mathematics and sciences. In the same year he enrolled at the Engineering Faculty of Belgrade University and, after several years, moved to the mathematics section of the Faculty of Philosophy, where he graduated in 1925.
He spent the years 1927–1928 in Paris as a fellow of the Rockefeller Foundation, and in 1928 he became Assistant for Mathematics at the Faculty of Philosophy of Belgrade University. In 1930 he became Assistant Professor, in 1937 Associate Professor and, after the end of World War II, in 1950 he became Full Professor. In 1951 he was elected Full Professor at the University of Geneva. In 1933 he became a member of the Yugoslav Academy of Sciences and Arts, the Czech Royal Society in 1936, and the Serbian Royal Academy in 1939, as well as a fellow of Serbian |
https://en.wikipedia.org/wiki/%C4%90uro%20Kurepa | Đuro Kurepa (Serbian Cyrillic: Ђуро Курепа, ; 16 August 1907 – 2 November 1993) was a Yugoslav mathematician, university professor and academic.
Throughout his life, Kurepa published over 700 articles, books, papers, and reviews and over 1,000 scientific reviews. He lectured at universities across Europe, as well as those in Canada, Cuba, Iraq, Israel, and the United States, and was quoted saying "I lectured at almost each of [the] nineteen universities of [the former] Yugoslavia..."
Early life
Kurepa was born as Đurađ Kurepa in Majske Poljane, Kingdom of Croatia-Slavonia, Austria-Hungary, to a Serb family. In English his name was transliterated as Djuro Kurepa, while in French he is often attributed as Georges Kurepa. Kurepa was the youngest of Rade and Anđelija Kurepa's fourteen children. His nephew was the mathematician Svetozar Kurepa.
He began his schooling in Majske Poljane, continued his education in Glina, and graduated from high school in Križevci. He received a diploma in theoretical mathematics and physics from the University of Zagreb in 1931, and began work as an assistant in the teaching of mathematics the same year. Kurepa then went to the Collège de France and the University of Paris, where he received his doctoral diploma in 1935; his advisor was French mathematician Maurice René Fréchet, and his thesis was titled Ensembles ordonnés et ramifiés.
Career
Kurepa continued to receive post-doctoral education at Warsaw University in Poland and the University of Paris. He became an assistant professor at the University of Zagreb in 1937, associate professor the next year, and assumed the position of full professor in 1948. After the end of World War II and the formation of the Socialist Federal Republic of Yugoslavia, he traveled to five universities in the United States: Harvard University in Cambridge, Massachusetts, the University of Chicago in Chicago, Illinois, the branch of the University of California at Berkeley and the branch at Los Angeles, California th |
https://en.wikipedia.org/wiki/RNA-binding%20protein | RNA-binding proteins (often abbreviated as RBPs) are proteins that bind to the double or single stranded RNA in cells and participate in forming ribonucleoprotein complexes.
RBPs contain various structural motifs, such as RNA recognition motif (RRM), dsRNA binding domain, zinc finger and others.
They are cytoplasmic and nuclear proteins. However, since most mature RNA is exported from the nucleus relatively quickly, most RBPs in the nucleus exist as complexes of protein and pre-mRNA called heterogeneous ribonucleoprotein particles (hnRNPs).
RBPs have crucial roles in various cellular processes such as cellular function, transport and localization. They especially play a major role in post-transcriptional control of RNAs, such as splicing, polyadenylation, mRNA stabilization, mRNA localization and translation. Eukaryotic cells express diverse RBPs with unique RNA-binding activity and protein–protein interactions. According to the Eukaryotic RBP Database (EuRBPDB), there are 2961 genes encoding RBPs in humans. During evolution, the diversity of RBPs greatly increased with the increase in the number of introns. Diversity enabled eukaryotic cells to utilize RNA exons in various arrangements, giving rise to a unique RNP (ribonucleoprotein) for each RNA. Although RBPs have a crucial role in the post-transcriptional regulation of gene expression, relatively few RBPs have been studied systematically. It has now become clear that RNA–RBP interactions play important roles in many biological processes among organisms.
Structure
Many RBPs have modular structures and are composed of multiple repeats of just a few specific basic domains that often have limited sequences. Different RBPs contain these sequences arranged in varying combinations. A specific protein's recognition of a specific RNA has evolved through the rearrangement of these few basic domains. Each basic domain recognizes RNA, but many of these proteins require multiple copies of one of the many common domains to fun |
https://en.wikipedia.org/wiki/Holomorphic%20functional%20calculus | In mathematics, holomorphic functional calculus is functional calculus with holomorphic functions. That is to say, given a holomorphic function f of a complex argument z and an operator T, the aim is to construct an operator, f(T), which naturally extends the function f from complex argument to operator argument. More precisely, the functional calculus defines a continuous algebra homomorphism from the holomorphic functions on a neighbourhood of the spectrum of T to the bounded operators.
This article will discuss the case where T is a bounded linear operator on some Banach space. In particular, T can be a square matrix with complex entries, a case which will be used to illustrate functional calculus and provide some heuristic insights for the assumptions involved in the general construction.
Motivation
Need for a general functional calculus
In this section T will be assumed to be an n × n matrix with complex entries.
If a given function f is of certain special type, there are natural ways of defining f(T). For instance, if
f(z) = Σ_{i=0}^{m} c_i z^i
is a complex polynomial, one can simply substitute T for z and define
f(T) = Σ_{i=0}^{m} c_i T^i,
where T^0 = I, the identity matrix. This is the polynomial functional calculus. It is a homomorphism from the ring of polynomials to the ring of n × n matrices.
Extending slightly from the polynomials, if f : C → C is holomorphic everywhere, i.e. an entire function, with MacLaurin series
f(z) = Σ_{n≥0} a_n z^n,
mimicking the polynomial case suggests we define
f(T) = Σ_{n≥0} a_n T^n.
Since the MacLaurin series converges everywhere, the above series will converge in a chosen operator norm. An example of this is the exponential of a matrix. Replacing z by T in the MacLaurin series of f(z) = e^z gives
e^T = Σ_{n≥0} T^n/n! = I + T + T²/2! + T³/3! + ⋯
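A small numerical illustration of this series-based calculus in Python; the helper name expm_series and the 50-term truncation are choices made for this sketch, not part of the theory:

```python
# f(T) = e^T via the MacLaurin series with the matrix T substituted for z.
import numpy as np

def expm_series(T, terms=50):
    """Truncated series sum_{k>=0} T^k / k! (converges for every square T)."""
    result = np.eye(T.shape[0])
    term = np.eye(T.shape[0])
    for k in range(1, terms):
        term = term @ T / k            # build T^k / k! incrementally
        result = result + term
    return result

T = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # infinitesimal rotation generator
print(expm_series(T))                  # ~[[cos 1, sin 1], [-sin 1, cos 1]]
```

In practice a numerically robust routine such as scipy.linalg.expm would be preferred over a raw truncated series.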
The requirement that the MacLaurin series of f converges everywhere can be relaxed somewhat. From above it is evident that all that is really needed is the radius of convergence of the MacLaurin series be greater than ǁTǁ, the operator norm of T. This enlarges somewhat the family of f for which f(T) can be defined using |
https://en.wikipedia.org/wiki/Lepidium%20campestre | Lepidium campestre, the field pepperwort, field pepperweed or field cress, is usually a biennial (sometimes an annual) plant in the Brassicaceae or mustard family, native to Europe but commonly found in North America as an invasive weed. The most notable characteristic of field pepperweed is the raceme of flowers which forks off the stem. These racemes bear first small white flowers and later green, flat, oval seedpods, each about 6 mm long and 4 mm wide. Each seedpod contains two brown, 2.5 mm long seeds.
The stem of field pepperweed comes out of a basal rosette of toothed leaves. The stem is covered in leaves, which are sessile, alternate and arrow-shaped. The entire plant is generally between 20 and 60 cm tall and covered in small hairs.
Cultivation and uses
Field pepperweed grows in disturbed land, crops, and waste places. It can tolerate most soils.
The plant is edible. The young leaves can be eaten as greens, added raw to salads or boiled for ten minutes. The young fruits and seeds can be used as a spice, with a taste between black pepper and mustard. The leaves contain protein, vitamin A and vitamin C.
Domestication
Field cress has been targeted for domestication at the Swedish University of Agricultural Sciences (SLU) because it holds high agronomic promise as a biennial/perennial oilseed crop, having many good characteristics of a high-yielding, winter-hardy crop. Unlike any other oilseed crop, field cress can be highly productive in the northern parts of temperate regions and has been successfully grown in Umeå, Sweden (40 km south of the Arctic Circle), where it can yield 3.3 tons/ha. In addition, field cress provides important ecosystem services, as it functions as a cover crop during winter and can be undersown in a spring cereal. The oil of field cress is suitable for different industrial applications such as the production of hydrotreated vegetable oil (HVO) diesel. The researchers at SLU have identified and mapped |
https://en.wikipedia.org/wiki/Coframe | In mathematics, a coframe or coframe field on a smooth manifold M is a system of one-forms or covectors which form a basis of the cotangent bundle at every point. In the exterior algebra of M, one has a natural map from ⊕^k T*M to Λ^k T*M, given by (ρ₁, ..., ρ_k) ↦ ρ₁ ∧ ⋯ ∧ ρ_k. If M is n-dimensional, a coframe is given by a section σ of ⊕^n T*M such that the induced map ∧ ∘ σ into Λ^n T*M is nowhere zero. The inverse image under ∧ of the complement of the zero section of Λ^n T*M forms a principal bundle over M, which is called the coframe bundle. |
https://en.wikipedia.org/wiki/Barry%20Simon | Barry Martin Simon (born 16 April 1946) is an American mathematical physicist and was the IBM professor of Mathematics and Theoretical Physics at Caltech, known for his prolific contributions in spectral theory, functional analysis, and nonrelativistic quantum mechanics (particularly Schrödinger operators), including the connections to atomic and molecular physics. He has authored more than 400 publications on mathematics and physics.
His work has focused on broad areas of mathematical physics and analysis covering: quantum field theory, statistical mechanics, Brownian motion, random matrix theory, general nonrelativistic quantum mechanics (including N-body systems and resonances), nonrelativistic quantum mechanics in electric and magnetic fields, the semi-classical limit, the singular continuous spectrum, random and ergodic Schrödinger operators, orthogonal polynomials, and non-selfadjoint spectral theory.
Early life
Barry Simon's mother was a school teacher, his father was an accountant. Simon attended James Madison High School in Brooklyn.
Career
During his high school years, Simon started attending college courses for highly gifted pupils at Columbia University. In 1962, Simon won an MAA mathematics competition. The New York Times reported that, in order to receive full credit for a faultless test result, he had to make a submission to the MAA. In this submission he proved that one of the problems posed in the test was ambiguous.
In 1962, Simon entered Harvard with a stipend. He became a Putnam Fellow in 1965 at 19 years old. He received his AB in 1966 from Harvard College and his PhD in Physics at Princeton University in 1970, supervised by Arthur Strong Wightman. His dissertation dealt with Quantum mechanics for Hamiltonians defined as quadratic forms.
Following his doctoral studies, Simon took a professorship at Princeton for several years, often working with colleague Elliott H. Lieb on the Thomas-Fermi Theory and Hartree-Fock Theory of atoms in addition |
https://en.wikipedia.org/wiki/Still%20life%20%28cellular%20automaton%29 | In Conway's Game of Life and other cellular automata, a still life is a pattern that does not change from one generation to the next. The term comes from the art world where a still life painting or photograph depicts an inanimate scene. In cellular automata, a still life can be thought of as an oscillator with unit period.
Classification
A pseudo still life consists of two or more adjacent islands (connected components) which can be partitioned (either individually or as sets) into non-interacting subparts, which are also still lifes. This compares with a strict still life, which may not be partitioned in this way. A strict still life may have only a single island, or it may have multiple islands that depend on one another for stability, and thus cannot be decomposed. The distinction between the two is not always obvious, as a strict still life may have multiple connected components all of which are needed for its stability. However, it is possible to determine whether a still life pattern is a strict still life or a pseudo still life in polynomial time by searching for cycles in an associated skew-symmetric graph.
Examples
There are many naturally occurring still lifes in Conway's Game of Life. A random initial pattern will leave behind a great deal of debris, containing small oscillators and a large variety of still lifes.
The most common still life (i.e. that most likely to be generated from a random initial state) is the block. A pair of blocks placed side-by-side (or bi-block) is the simplest pseudo still life. Blocks are used as components in many complex devices, an example being the Gosper glider gun.
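Checking a still life is mechanical: apply one generation of Life's B3/S23 rule and verify the pattern is unchanged. A minimal Python sketch over sets of live-cell coordinates, using the block just described:

```python
# One Life generation on a set of live (x, y) cells, then a still-life check.
from itertools import product

def step(cells):
    """Apply one generation of Conway's Life (rule B3/S23)."""
    counts = {}
    for (x, y) in cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # A cell is live next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(step(block) == block)   # True: the block is a still life
```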
The second most common still life is the hive (or beehive). Hives are frequently created in (non-interacting) sets of four, in a formation known as a honey farm.
The third most common still life is the loaf. Loaves are often found together in a pairing known as a bi-loaf. Bi-loaves themselves are often created in a further (non-interacting) pairin |
https://en.wikipedia.org/wiki/Slow%20virus | A slow virus is a virus, or a viruslike agent, etiologically associated with a slow virus disease. A slow virus disease is a disease that, after an extended period of latency, follows a slow, progressive course spanning months to years, frequently involves the central nervous system, and in most cases progresses to death. Examples of slow virus diseases include HIV/AIDS, caused by the HIV virus, subacute sclerosing panencephalitis, the rare result of a measles virus infection, and Paget's disease of bone (osteitis deformans), which may be associated with paramyxoviruses, especially the measles virus and the human respiratory syncytial virus.
Characteristics
Every infectious agent is different, but in general, slow viruses:
Cause an asymptomatic primary infection
Have a long incubation period ranging from months to years
Follow a slow but relentless progressive course leading to death
Tend to have a genetic predisposition
Often re-emerge from latency if the host becomes immuno-compromised
Additionally, the immune system seems to play a limited role, or no role, in protection from many of these slow viruses. This may be due to the slow replication rates some of these agents exhibit, preexisting immunosuppression (as in the cases of JC virus and BK virus), or, in the case of prions, the identity of the agent involved.
Scope
Slow viruses cause a variety of diseases, including cancer.
JC virus and BK virus only cause disease in immunocompromised patients
Kuru – a form of transmissible spongiform encephalopathy
Kuru was once thought to be due to a slow virus but is now known to be the result of prion disease.
See also
Clinical latency
Virus latency |
https://en.wikipedia.org/wiki/Subacute%20sclerosing%20panencephalitis | Subacute sclerosing panencephalitis (SSPE), also known as Dawson disease, is a rare form of progressive brain inflammation caused by a persistent infection with the measles virus. The condition primarily affects children, teens, and young adults. It has been estimated that about 2 in 10,000 people who get measles will eventually develop SSPE. However, a 2016 study estimated that the rate for unvaccinated infants under 15 months was as high as 1 in 609. No cure for SSPE exists, and the condition is almost always fatal. SSPE should not be confused with acute disseminated encephalomyelitis, which can also be caused by the measles virus, but has a very different timing and course.
SSPE is caused by the wild-type virus, not by vaccine strains.
Signs and symptoms
SSPE is characterized by a history of primary measles infection, followed by an asymptomatic period that lasts 7 years on average but can range from 1 month to 27 years. After the asymptomatic period, progressive neurological deterioration occurs, characterized by behavior change, intellectual problems, myoclonic seizures, blindness, ataxia, and eventually death.
Stages of Progression
Symptoms progress through the following four stages:
Stage 1: There may be personality changes, mood swings, or depression. Fever, headache, and memory loss may also be present. This stage may last up to 6 months.
Stage 2: This stage may involve jerking, muscle spasms, seizures, loss of vision, and dementia.
Stage 3: Jerking movements are replaced by writhing (twisting) movements and rigidity. At this stage, complications may result in blindness or death.
Stage 4: Progressive loss of consciousness into a persistent vegetative state, which may be preceded by or concomitant with paralysis, occurs in the final stage, during which breathing, heart rate, and blood pressure are affected. Death usually occurs as a result of fever, heart failure, or the brain’s inability to control the autonomic nervous system.
Pathogenesis
A large num |
https://en.wikipedia.org/wiki/Nakai%20conjecture | In mathematics, the Nakai conjecture is an unproven characterization of smooth algebraic varieties, conjectured by Japanese mathematician Yoshikazu Nakai in 1961.
It states that if V is a complex algebraic variety, such that its ring of differential operators is generated by the derivations it contains, then V is a smooth variety. The converse statement, that smooth algebraic varieties have rings of differential operators that are generated by their derivations, is a result of Alexander Grothendieck.
The Nakai conjecture is known to be true for algebraic curves and Stanley–Reisner rings. A proof of the conjecture would also establish the Zariski–Lipman conjecture, for a complex variety V with coordinate ring R. This conjecture states that if the derivations of R are a free module over R, then V is smooth. |
https://en.wikipedia.org/wiki/Mordell%20curve | In algebra, a Mordell curve is an elliptic curve of the form y² = x³ + n, where n is a fixed non-zero integer.
These curves were closely studied by Louis Mordell from the point of view of determining their integer points. He showed that every Mordell curve contains only finitely many integer points (x, y). In other words, the nonzero differences between perfect squares and perfect cubes tend to infinity. The question of how fast this difference grows was dealt with in principle by Baker's method. Hypothetically this issue is dealt with by Marshall Hall's conjecture.
Properties
If (x, y) is an integer point on a Mordell curve, then so is (x, -y).
There are certain values of n for which the corresponding Mordell curve has no integer solutions. For positive n these values are:
6, 7, 11, 13, 14, 20, 21, 23, 29, 32, 34, 39, 42, ... .
For negative n they are:
−3, −5, −6, −9, −10, −12, −14, −16, −17, −21, −22, ... .
The specific case where n = −2 is also known as Fermat's Sandwich Theorem.
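Integer points for a small n can be found by brute force: for each x in a window, test whether x³ + n is a perfect square, as in the Python sketch below (the search bound is an arbitrary illustrative cutoff, not a proven limit):

```python
# Brute-force integer points on the Mordell curve y^2 = x^3 + n.
from math import isqrt

def mordell_points(n, x_max=1000):
    points = []
    for x in range(-x_max, x_max + 1):
        t = x**3 + n
        if t < 0:
            continue                    # y^2 cannot be negative
        y = isqrt(t)
        if y * y == t:
            points.append((x, y))       # (x, -y) is then also a point
    return points

print(mordell_points(-2))   # [(3, 5)]: 5^2 = 3^3 - 2, Fermat's "sandwich"
```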
List of solutions
The following is a list of solutions to the Mordell curve y² = x³ + n for |n| ≤ 25. Only solutions with y ≥ 0 are shown.
In 1998, J. Gebel, A. Pethő, and H. G. Zimmer found all integer points for 0 < |n| ≤ 10⁴.
In 2015, M. A. Bennett and A. Ghadermarzi computed integer points for 0 < |n| ≤ 10⁷. |
https://en.wikipedia.org/wiki/Universal%20remote | A universal remote is a remote control that can be programmed to operate various brands of one or more types of consumer electronics devices. Low-end universal remotes can only control a set number of devices determined by their manufacturer, while mid- and high-end universal remotes allow the user to program in new control codes to the remote. Many remotes sold with various electronics include universal remote capabilities for other types of devices, which allows the remote to control other devices beyond the device it came with. For example, a VCR remote may be programmed to operate various brands of televisions.
History
On May 30, 1985, Philips introduced the first universal remote (U.S. Pat. #4774511) under the Magnavox brand name.
In 1985, Robin Rumbolt, William "Russ" McIntyre, and Larry Goodson with North American Philips Consumer Electronics (Magnavox, Sylvania, and Philco) developed the first universal remote control.
In 1987, the first programmable universal remote control was released. It was called the "CORE" and was created by CL 9, a startup founded by Steve Wozniak, the inventor of the Apple I and Apple II computers.
In March 1987, Steve Ciarcia published an article in Byte magazine entitled "Build a Trainable Infrared Master Controller", describing a universal remote with the ability to upload the settings to a computer. This device had macro capabilities.
Layout and features
Most universal remotes share a number of basic design elements:
A power button, as well as a switch or series of buttons to select which device the remote is controlling at the moment. A typical selection includes TV, VCR, DVD, and CBL/SAT, along with other devices that sometimes include DVRs, audio equipment or home automation devices.
Channel and volume up/down selectors (sometimes marked with + and - signs).
A numeric keypad for entering channel numbers and some other purposes such as time and date entry.
A set button (sometimes recessed to avoid accidental pressing |
https://en.wikipedia.org/wiki/Ehud%20Hrushovski | Ehud Hrushovski (; born 30 September 1959) is a mathematical logician. He is a Merton Professor of Mathematical Logic at the University of Oxford and a Fellow of Merton College, Oxford. He was also Professor of Mathematics at the Hebrew University of Jerusalem.
Early life and education
Hrushovski's father, Benjamin Harshav (Hebrew: בנימין הרשב, né Hruszowski; 1928–2015), was a literary theorist, a Yiddish and Hebrew poet and a translator, professor at Yale University and Tel Aviv University in comparative literature. Ehud Hrushovski earned his PhD from the University of California, Berkeley in 1986 under Leo Harrington; his dissertation was titled Contributions to Stable Model Theory. He was a professor of mathematics at the Massachusetts Institute of Technology until 1994, when he became a professor at the Hebrew University of Jerusalem. Hrushovski moved in 2017 to the University of Oxford, where he is the Merton Professor of Mathematical Logic.
Career
Hrushovski is well known for several fundamental contributions to model theory, in particular in the branch that has become known as geometric model theory, and its applications. His PhD thesis revolutionized stable model theory (a part of model theory arising from the stability theory introduced by Saharon Shelah). Shortly afterwards he found counterexamples to the Trichotomy Conjecture of Boris Zilber and his method of proof has become well known as Hrushovski constructions and found many other applications since.
One of his most famous results is his proof of the geometric Mordell–Lang conjecture in all characteristics using model theory in 1996. This deep proof was a landmark in logic and geometry. He has had many other famous and notable results in model theory and its applications to geometry, algebra, and combinatorics.
Honours and awards
He was an invited speaker at the 1990 International Congress of Mathematicians and a plenary speaker at the 1998 ICM. He is a recipient of the Erdős Prize of the Israel |
https://en.wikipedia.org/wiki/Cray%20MTA-2 | The Cray MTA-2 is a shared-memory MIMD computer marketed by Cray Inc. It is an unusual design based on the Tera computer designed by Tera Computer Company. The original Tera computer (also known as the MTA) turned out to be nearly unmanufacturable due to its aggressive packaging and circuit technology. The MTA-2 was an attempt to correct these problems while maintaining essentially the same processor architecture respun in one silicon ASIC, down from some 26 gallium arsenide ASICs in the original MTA; and while regressing the network design from a 4-D torus topology to a less efficient but more scalable Cayley graph topology. The name Cray was added to the second version after Tera Computer Company bought the remains of the Cray Research division of Silicon Graphics in 2000 and renamed itself Cray Inc.
The MTA-2 was not a commercial success, with only one moderately-sized 40-processor system ("Boomer") being sold to the United States Naval Research Laboratory in 2002, and one 4-processor system sold to the Electronic Navigation Research Institute (ENRI) in Japan.
The MTA computers pioneered several technologies, presumably to be used in future Cray Inc. products:
A simple, whole-machine-oriented programming model.
Hardware-based multithreading.
Low-overhead thread synchronization.
See also
Cray MTA
Heterogeneous Element Processor |
https://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson%20problem | In geometric graph theory, the Hadwiger–Nelson problem, named after Hugo Hadwiger and Edward Nelson, asks for the minimum number of colors required to color the plane such that no two points at distance 1 from each other have the same color. The answer is unknown, but has been narrowed down to one of the numbers 5, 6 or 7. The correct value may depend on the choice of axioms for set theory.
Relation to finite graphs
The question can be phrased in graph theoretic terms as follows. Let G be the unit distance graph of the plane: an infinite graph with all points of the plane as vertices and with an edge between two vertices if and only if the distance between the two points is 1. The Hadwiger–Nelson problem is to find the chromatic number of G. As a consequence, the problem is often called "finding the chromatic number of the plane". By the de Bruijn–Erdős theorem, the problem is equivalent (under the assumption of the axiom of choice) to that of finding the largest possible chromatic number of a finite unit distance graph.
History
According to Jensen and Toft (1995), the problem was first formulated by Nelson in 1950, and first published by Gardner in 1960. Hadwiger had earlier published a related result, showing that any cover of the plane by five congruent closed sets contains a unit distance in one of the sets, and he also mentioned the problem in a later paper. Soifer (2008) discusses the problem and its history extensively.
One application of the problem connects it to the Beckman–Quarles theorem, according to which any mapping of the Euclidean plane (or any higher dimensional space) to itself that preserves unit distances must be an isometry, preserving all distances. Finite colorings of these spaces can be used to construct mappings from them to higher-dimensional spaces that preserve distances but are not isometries. For instance, the Euclidean plane can be mapped to a six-dimensional space by coloring it with seven colors so that no two points at distance one have the same color, and then mapping |
https://en.wikipedia.org/wiki/F%C3%A1ry%E2%80%93Milnor%20theorem | In the mathematical theory of knots, the Fáry–Milnor theorem, named after István Fáry and John Milnor, states that three-dimensional smooth curves with small total curvature must be unknotted. The theorem was proved independently by Fáry in 1949 and Milnor in 1950. It was later shown to follow from the existence of quadrisecants.
Statement
If K is any closed curve in Euclidean space that is sufficiently smooth to define the curvature κ at each of its points, and if the total absolute curvature is less than or equal to 4π, then K is an unknot, i.e.:
∮_K |κ(s)| ds ≤ 4π ⇒ K is isotopic to the circle.
The contrapositive tells us that if K is not an unknot, i.e. K is not isotopic to the circle, then the total curvature will be strictly greater than 4π. Notice that having the total curvature less than or equal to 4π is merely a sufficient condition for K to be an unknot; it is not a necessary condition. In other words, although all knots with total curvature less than or equal to 4π are the unknot, there exist unknots with total curvature strictly greater than 4π.
Generalizations to non-smooth curves
For closed polygonal chains the same result holds with the integral of curvature replaced by the sum of angles between adjacent segments of the chain. By approximating arbitrary curves by polygonal chains, one may extend the definition of total curvature to larger classes of curves, within which the Fáry–Milnor theorem also holds (, ). |
https://en.wikipedia.org/wiki/Zariski%20geometry | In mathematics, a Zariski geometry consists of an abstract structure introduced by Ehud Hrushovski and Boris Zilber, in order to give a characterisation of the Zariski topology on an algebraic curve, and all its powers. The Zariski topology on a product of algebraic varieties is very rarely the product topology, but richer in closed sets defined by equations that mix two sets of variables. The result described gives that a very definite meaning, applying to projective curves and compact Riemann surfaces in particular.
Definition
A Zariski geometry consists of a set X and a topological structure on each of the sets
X, X², X³, ...
satisfying certain axioms.
(N) Each of the Xⁿ is a Noetherian topological space, of dimension at most n.
Some standard terminology for Noetherian spaces will now be assumed.
(A) In each Xⁿ, the subsets defined by equality in an n-tuple are closed. The mappings
Xᵐ → Xⁿ
defined by projecting out certain coordinates and setting others as constants are all continuous.
(B) For a projection
p: Xᵐ → Xⁿ
and an irreducible closed subset Y of Xᵐ, p(Y) lies between its closure Z and Z \ Z′, where Z′ is a proper closed subset of Z. (This is quantifier elimination, at an abstract level.)
(C) X is irreducible.
(D) There is a uniform bound on the number of elements of a fiber in a projection of any closed set in Xᵐ, other than the cases where the fiber is X.
(E) A closed irreducible subset of Xᵐ, of dimension r, when intersected with a diagonal subset in which s coordinates are set equal, has all components of dimension at least r − s + 1.
The further condition required is called very ample (cf. very ample line bundle). It is assumed there is an irreducible closed subset P of some Xᵐ, and an irreducible closed subset Q of P × X², with the following properties:
(I) Given pairs (x, y), (x′, y′) in X², for some t in P, the set of (t, u, v) in Q includes (t, x, y) but not (t, x′, y′)
(J) For t outside a proper closed subset of P, the set of (x, y) in X |
https://en.wikipedia.org/wiki/Steve%20Ciarcia | Steve Ciarcia is an embedded control systems engineer. He became popular through his Ciarcia's Circuit Cellar column in BYTE magazine, and later through the Circuit Cellar magazine that he published. He is also the author of Build Your Own Z80 Computer, published in 1981, and Take My Computer...Please!, published in 1978. He has also compiled seven volumes of his hardware project articles that appeared in BYTE magazine.
In 1982 and 1983, he published a series of articles on building the MPX-16, a 16-bit single-board computer that was hardware-compatible with the IBM PC.
In December 2009, Steve Ciarcia announced that for the American market a strategic cooperation would be entered between Elektor and his Circuit Cellar magazine. In November 2012, Steve Ciarcia announced that he was quitting Circuit Cellar and Elektor would take it over.
In October 2014, Ciarcia purchased Circuit Cellar, audioXpress, Voice Coil, Loudspeaker Industry Sourcebook, and their respective websites, newsletters, and products from Netherlands-based Elektor International Media. The aforementioned magazines will continue to be published by Ciarcia's US-based team.
In July 2016, Steve Ciarcia sold the company to long time employee KC Prescott operating under the company name KCK Media Corp. |
https://en.wikipedia.org/wiki/Switch%20virtual%20interface | A switch virtual interface (SVI) represents a logical layer-3 interface on a switch.
VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN need to communicate with hosts in another VLAN, the traffic must be routed between them; this is known as inter-VLAN routing. On layer-3 switches it is accomplished by creating layer-3 interfaces (SVIs).
An SVI, or VLAN interface, is a virtual routed interface that connects a VLAN on the device to the layer-3 router engine on the same device. Only one VLAN interface can be associated with a given VLAN, and a VLAN interface is needed only when there is a requirement to route between VLANs or to provide IP host connectivity to the device through a virtual routing and forwarding (VRF) instance that is not the management VRF. When VLAN interface creation is enabled, a switch creates a VLAN interface for the default VLAN (VLAN 1) to permit remote switch administration.
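A minimal Cisco-IOS-style sketch of routing between two VLANs with SVIs; the VLAN numbers and addresses are invented for illustration, and exact syntax varies by platform and vendor:

```
ip routing                                 ! enable the layer-3 routing engine
!
interface Vlan10
 ip address 192.168.10.1 255.255.255.0     ! default gateway for VLAN 10 hosts
 no shutdown
!
interface Vlan20
 ip address 192.168.20.1 255.255.255.0     ! default gateway for VLAN 20 hosts
 no shutdown
```

Hosts in VLAN 10 would then use 192.168.10.1 as their default gateway, and traffic between the two subnets is routed entirely inside the switch.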
SVIs are generally configured for a VLAN for the following reasons:
Allow traffic to be routed between VLANs by providing a default gateway for the VLAN.
Provide fallback bridging (if required for non-routable protocols).
Provide Layer 3 IP connectivity to the switch.
Support bridging configurations and routing protocols.
Access Layer - 'Routed Access' Configuration (in lieu of Spanning Tree)
SVIs advantages include:
Much faster than router-on-a-stick, because everything is hardware-switched and routed.
No need for external links from the switch to the router for routing.
Not limited to one link. Layer 2 EtherChannels can be used between the switches to get more bandwidth.
Latency is much lower, because traffic does not need to leave the switch.
An SVI can also be known as a Routed VLAN Interface (RVI) by some vendors. |
https://en.wikipedia.org/wiki/Natural%20food | Natural food and all-natural food are terms in food labeling and marketing with several definitions, often implying foods that are not manufactured by processing. In some countries like the United Kingdom, the term "natural" is defined and regulated; in others, such as the United States, the term natural is not enforced for food labels, although there is USDA regulation of organic labeling.
The term is assumed to describe foods having ingredients that are intrinsic to an unprocessed food.
Diverse definitions
While almost all foodstuffs are derived from the natural products of plants and animals, 'natural foods' are often assumed to be foods that are not processed, or do not contain any food additives, or do not contain particular additives such as hormones, antibiotics, sweeteners, food colors, preservatives, or flavorings that were not originally in the food. In fact, when surveyed, many people (63%) showed a preference for products labeled "natural" over their unmarked counterparts, based on the common belief (held by 86% of polled consumers) that the term "natural" indicates that the food does not contain any artificial ingredients.
The term is variously misused on labels and in advertisements. The international Food and Agriculture Organization's Codex Alimentarius does not recognize the term 'natural' but does have a standard for organic foods.
History
The idea of eating "natural foods" was promoted by cookbook writers in the United States during the 1970s with cookbooks emphasizing "natural," "health" and "whole" foods in opposition to processed foods which were considered bad for health. In 1971, Eleanor Levitt authored The Wonderful World of Natural Food Cookery which dismissed processed foods such as readymade dinners, cookie mixes, and cold cuts as being full of preservatives and other "chemical poisons."
Jean Hewitt authored the New York Times Natural Foods Cookbook, an influential cookbook on the use of natural foods. Hewitt suggested that before larg |
https://en.wikipedia.org/wiki/Open%20Vulnerability%20and%20Assessment%20Language | Open Vulnerability and Assessment Language (OVAL) is an international, information security, community standard to promote open and publicly available security content, and to standardize the transfer of this information across the entire spectrum of security tools and services. OVAL includes a language used to encode system details, and an assortment of content repositories held throughout the community. The language standardizes the three main steps of the assessment process:
representing configuration information of systems for testing;
analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and
reporting the results of this assessment.
The repositories are collections of publicly available and open content that utilize the language.
The OVAL community has developed three schemas written in Extensible Markup Language (XML) to serve as the framework and vocabulary of the OVAL Language. These schemas correspond to the three steps of the assessment process: an OVAL System Characteristics schema for representing system information, an OVAL Definition schema for expressing a specific machine state, and an OVAL Results schema for reporting the results of an assessment.
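For a feel of the vocabulary, the following is a hand-written skeleton of an OVAL definition document; the ids, titles, and abbreviated structure are invented for this example and are not guaranteed to validate against the real OVAL 5.x schemas:

```xml
<!-- Illustrative skeleton only: ids and text are invented for this example. -->
<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5">
  <definitions>
    <!-- A definition evaluates to true when the described state is present. -->
    <definition id="oval:example.org:def:1" version="1" class="vulnerability">
      <metadata>
        <title>Example vulnerability check</title>
        <description>True when the vulnerable package version is installed.</description>
      </metadata>
      <criteria operator="AND">
        <criterion test_ref="oval:example.org:tst:1"
                   comment="package is at a vulnerable version"/>
      </criteria>
    </definition>
  </definitions>
</oval_definitions>
```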
Content written in the OVAL Language is located in one of the many repositories found within the community. One such repository, known as the OVAL Repository, is hosted by The MITRE Corporation. It is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Each definition in the OVAL Repository determines whether a specified software vulnerability, configuration issue, program, or patch is present on a system.
The information security community contributes to the development of OVAL by participating in the creation of the OVAL Language on the OVAL Developers Forum and by writing definitions for the OVAL Repository through the OVAL Community Forum. An OVAL Board consisting of repr |
https://en.wikipedia.org/wiki/Theta%20divisor | In mathematics, the theta divisor Θ is the divisor in the sense of algebraic geometry defined on an abelian variety A over the complex numbers (and principally polarized) by the zero locus of the associated Riemann theta-function. It is therefore an algebraic subvariety of A of dimension dim A − 1.
Classical theory
Classical results of Bernhard Riemann describe Θ in another way, in the case that A is the Jacobian variety J of an algebraic curve (compact Riemann surface) C. There is, for a choice of base point P on C, a standard mapping of C to J, by means of the interpretation of J as the linear equivalence classes of divisors on C of degree 0. That is, Q on C maps to the class of Q − P. Then since J is an algebraic group, C may be added to itself k times on J, giving rise to subvarieties W_k.
If g is the genus of C, Riemann proved that Θ is a translate on J of W_{g−1}. He also described which points on W_{g−1} are non-singular: they correspond to the effective divisors D of degree g − 1 with no associated meromorphic functions other than constants. In more classical language, these D do not move in a linear system of divisors on C, in the sense that they do not dominate the polar divisor of a non-constant function.
Riemann further proved the Riemann singularity theorem, identifying the multiplicity of a point p = class(D) on W_{g−1} as the number of linearly independent meromorphic functions with pole divisor dominated by D, or equivalently as h⁰(O(D)), the number of linearly independent global sections of the holomorphic line bundle associated to D as Cartier divisor on C.
Later work
The Riemann singularity theorem was extended by George Kempf in 1973, building on work of David Mumford and Andreotti–Mayer, to a description of the singularities of points p = class(D) on W_k for 1 ≤ k ≤ g − 1. In particular he computed their multiplicities also in terms of the number of independent meromorphic functions associated to D (the Riemann–Kempf singularity theorem).
More |
https://en.wikipedia.org/wiki/Diskless%20Remote%20Boot%20in%20Linux | DRBL (Diskless Remote Boot in Linux) is an NFS/NIS server providing a diskless or systemless environment for client machines.
It could be used for
cloning machines with the built-in Clonezilla software,
providing for a network installation of Linux distributions like Fedora, Debian, etc.,
providing machines via PXE boot (or similar means) with a small operating system (e.g., DSL, Puppy Linux, FreeDOS).
Providing a DRBL server
Installation on a machine running a supported Linux distribution via installation script,
Live CD.
Installation is possible on a machine with Debian, Ubuntu, Mandriva, Red Hat Linux, Fedora, CentOS or SuSE already installed. Unlike LTSP, it uses distributed hardware resources and makes it possible for clients to fully access local hardware, thus making it feasible to use server machines with less power. It also includes Clonezilla, a partitioning and disk cloning utility similar to Symantec Ghost.
DRBL comes under the terms of the GNU GPL license, providing the user with the ability to customize it.
Features
DRBL excels in two main categories.
Disk Cloning
Clonezilla (packaged with DRBL) uses Partimage to avoid copying free space, and gzip to compress hard disk images. The stored image can then be restored to multiple machines simultaneously using multicast packets, thus greatly reducing the time it takes to image large numbers of computers. The DRBL Live CD allows you to do all of this without actually installing anything on any of the machines, by simply booting one machine (the server) from the CD and PXE-booting the rest of the machines.
Diskless node
A diskless node is an excellent way to make use of old hardware. Using old hardware as thin clients is a good solution, but has some disadvantages that a diskless node can make up for.
Streaming audio/video - A terminal server must decompress, recompress, and send video over the network to the client. A diskless node does all decompression locally, and can make use of a |
https://en.wikipedia.org/wiki/Polarizable%20vacuum | In theoretical physics, particularly fringe physics, polarizable vacuum (PV) and its associated theory refers to proposals by Harold Puthoff, Robert H. Dicke, and others to develop an analogue of general relativity to describe gravity and its relationship to electromagnetism.
Description
In essence, Dicke and Puthoff proposed that the presence of mass alters the electric permittivity and the magnetic permeability of flat spacetime, ε₀ and μ₀ respectively, by multiplying them by a scalar function κ:
ε = κε₀,  μ = κμ₀,
arguing that this will affect the lengths of rulers made of ordinary matter, so that in the presence of a gravitational field the spacetime metric of Minkowski spacetime is replaced by
ds² = −(1/κ) c²dt² + κ (dx² + dy² + dz²),
where κ is the so-called "dielectric constant of the vacuum". This is a "diagonal" metric given in terms of a Cartesian chart and having the same stratified conformally flat form as in the Watt-Misner theory of gravitation. However, according to Dicke and Puthoff, κ must satisfy a field equation which differs from the field equation of the Watt-Misner theory. In the case of a static spherically symmetric vacuum, this yields the asymptotically flat solution
κ = exp(2GM/rc²).
The resulting Lorentzian spacetime happens to agree with the analogous solution in the Watt-Misner theory, and it has the same weak-field limit, and the same far-field, as the Schwarzschild vacuum solution in general relativity, and it satisfies three of the four classical tests of relativistic gravitation (redshift, deflection of light, precession of the perihelion of Mercury) to within the limit of observational accuracy. However, as shown by Ibison (2003), it yields a different prediction for the inspiral of test particles due to gravitational radiation.
However, requiring stratified-conformally flat metrics rules out the possibility of recovering the weak-field Kerr metric, and is certainly inconsistent with the claim that PV can give a general "approximation" of the general theory of relativity. In particular, this theory exh |
https://en.wikipedia.org/wiki/Charge%20density | In electromagnetism, charge density is the amount of electric charge per unit length, surface area, or volume. Volume charge density (symbolized by the Greek letter ρ) is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C⋅m−3), at any point in a volume. Surface charge density (σ) is the quantity of charge per unit area, measured in coulombs per square meter (C⋅m−2), at any point on a surface charge distribution on a two dimensional surface. Linear charge density (λ) is the quantity of charge per unit length, measured in coulombs per meter (C⋅m−1), at any point on a line charge distribution. Charge density can be either positive or negative, since electric charge can be either positive or negative.
Like mass density, charge density can vary with position. In classical electromagnetic theory charge density is idealized as a continuous scalar function of position, like a fluid, and ρ, σ, and λ are usually regarded as continuous charge distributions, even though all real charge distributions are made up of discrete charged particles. Due to the conservation of electric charge, the charge density in any volume can only change if an electric current of charge flows into or out of the volume. This is expressed by a continuity equation which links the rate of change of charge density ρ and the current density J.
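The three densities and the continuity equation referred to above can be written out explicitly (standard definitions, collected here for convenience):

```latex
\lambda = \frac{dq}{d\ell}, \qquad
\sigma  = \frac{dq}{dA},   \qquad
\rho    = \frac{dq}{dV},   \qquad
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0 .
```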
Since all charge is carried by subatomic particles, which can be idealized as points, the concept of a continuous charge distribution is an approximation, which becomes inaccurate at small length scales. A charge distribution is ultimately composed of individual charged particles separated by regions containing no charge. For example, the charge in an electrically charged metal object is made up of conduction electrons moving randomly in the metal's crystal lattice. Static electricity is caused by surface charges consisting of ions on the surface of objects, and the space charge in a vacuum tube is composed of |
https://en.wikipedia.org/wiki/Default-free%20zone | In Internet routing, the default-free zone (DFZ) is the collection of all Internet autonomous systems (AS) that do not require a default route to route a packet to any destination. Conceptually, DFZ routers have a "complete" Border Gateway Protocol table, sometimes referred to as the Internet routing table, global routing table or global BGP table. However, internet routing changes rapidly and the widespread use of route filtering ensures that no router has a complete view of all routes. Any routing table created would look different from the perspective of different routers, even if a stable view could be achieved.
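As a toy illustration of what "default-free" means for a lookup, the sketch below contrasts a router holding a default route with one that must rely on specific prefixes (a minimal sketch; the prefixes and next hops are invented, and real BGP lookups use far more elaborate data structures):

```python
import ipaddress

def lookup(table, destination):
    """Longest-prefix match; returns the next hop, or None if no route."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in table.items() if addr in net]
    if not matches:
        return None  # a default-free router simply has no fallback route
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# An edge router can fall back on a default route (0.0.0.0/0) ...
edge_table = {ipaddress.ip_network("0.0.0.0/0"): "upstream"}
# ... while a DFZ router must carry a (near-)complete set of prefixes.
dfz_table = {
    ipaddress.ip_network("198.51.100.0/24"): "peer-A",
    ipaddress.ip_network("203.0.113.0/24"): "peer-B",
}

print(lookup(edge_table, "192.0.2.1"))  # 'upstream' via the default route
print(lookup(dfz_table, "192.0.2.1"))   # None: not in table, so unroutable
```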
Highly connected Autonomous Systems and routers
The Weekly Routing Reports used by the ISP community come from the Asia-Pacific Network Information Centre (APNIC) router in Tokyo, which is a well-connected router that has as good a view of the Internet as any other single router. For serious routing research, however, routing information will be captured at multiple well-connected sites, including high-traffic ISPs (see the "skitter core" below).
As of May 12, 2014, there were 494,105 routes seen by the APNIC router. These came from 46,795 autonomous systems, of which only 172 were transit-only and 35,787 were stub/origin-only. 6,087 autonomous systems provided some level of transit.
The idea of an "Internet core"
The term "default-free zone" is sometimes confused with an "Internet core" or Internet backbone, but there has been no true "core" since before the Border Gateway Protocol (BGP) was introduced. In pre-BGP days, when the Exterior Gateway Protocol (EGP) was the exterior routing protocol, it indeed could be assumed there was a single Internet core.
That concept, however, has been obsolete for a long time. At best, today's definition of the Internet core is statistical, with the "skitter core" being some number of AS with the greatest traffic according to the CAIDA measurements, previously made with its measuring tool called "skitter". The C |
https://en.wikipedia.org/wiki/Dust%20solution | In general relativity, a dust solution is a fluid solution, a type of exact solution of the Einstein field equation, in which the gravitational field is produced entirely by the mass, momentum, and stress density of a perfect fluid that has positive mass density but vanishing pressure. Dust solutions are an important special case of fluid solutions in general relativity.
Dust model
A pressureless perfect fluid can be interpreted as a model of a configuration of dust particles that locally move in concert and interact with each other only gravitationally, from which the name is derived. For this reason, dust models are often employed in cosmology as models of a toy universe, in which the dust particles are considered as highly idealized models of galaxies, clusters, or superclusters. In astrophysics, dust models have been employed as models of gravitational collapse.
Dust solutions can also be used to model finite rotating disks of dust grains; some examples are listed below. If superimposed somehow on a stellar model comprising a ball of fluid surrounded by vacuum, a dust solution could be used to model an accretion disk around a massive object; however, no such exact solutions that model rotating accretion disks are yet known due to the extreme mathematical difficulty of constructing them.
Mathematical definition
The stress–energy tensor of a relativistic pressureless fluid can be written in the simple form T^{ab} = ρu^a u^b.
Here
the world lines of the dust particles are the integral curves of the four-velocity u^a,
the matter density is given by the scalar function ρ.
Eigenvalues
Because the stress-energy tensor is a rank-one matrix, a short computation shows that the characteristic polynomial
of the Einstein tensor in a dust solution will have the form
Multiplying out this product, we find that the coefficients must satisfy the following three algebraically independent (and invariant) conditions:
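The displayed formulas at this point did not survive extraction; the following is a hedged reconstruction, assuming the −+++ signature with u_a u^a = −1, and writing t_k for the trace of the k-th power of the Einstein tensor:

```latex
% Einstein's equation gives G_{ab} = 8\pi\rho\, u_a u_b, a rank-one tensor,
% whose only nonzero eigenvalue belongs to the eigenvector u^a:
\chi(\lambda) = \det\!\left(G^{a}{}_{b} - \lambda\,\delta^{a}{}_{b}\right)
              = \lambda^{3}\,(\lambda + 8\pi\rho).
% Vanishing of the \lambda^{2}, \lambda^{1}, \lambda^{0} coefficients is
% equivalent to the three trace conditions
t_{2} = t_{1}^{2}, \qquad t_{3} = t_{1}^{3}, \qquad t_{4} = t_{1}^{4},
% with the density recovered from the trace: 8\pi\rho = -t_{1}.
```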
Using Newton's identities, in terms of the sums of the powers of the ro |
https://en.wikipedia.org/wiki/Low-definition%20television | Low-definition television (LDTV) refers to TV systems that have a lower screen resolution than standard-definition television systems. The term is usually used in reference to digital television, in particular when broadcasting at the same (or similar) resolution as low-definition analog television systems. Mobile DTV systems usually transmit in low definition, as do all slow-scan television systems.
Sources
The Video CD format uses a progressive scan LDTV signal (352×240 or 352×288), which is half the vertical and horizontal resolution of full-bandwidth SDTV. However, most players will internally upscale VCD material to 480/576 lines for playback, as this is both more widely compatible and gives a better overall appearance. No motion information is lost due to this process, as VCD video is not high-motion and only plays back at 25 or 30 frames per second, and the resultant display is comparable to consumer-grade VHS video playback.
For the first few years of its existence, YouTube offered only one low-definition resolution, 256×144 (144p) at 30–50 fps or less, later extending first to widescreen 426×240 and then to gradually higher resolutions: once the video service had become well established and had been acquired by Google, it had access to Google's radically improved storage space and transmission bandwidth, and could rely on a good proportion of its users having high-speed internet connections. The original offering gave an overall effect reminiscent of early online video streaming attempts using RealVideo or similar services, where 160×120 at single-figure framerates was deemed acceptable, catering to those whose network connections could not sufficiently deliver even 240p content.
Video games
Older video game consoles and home computers often generated a technically compliant analog 525-line NTSC or 625-line PAL signal, but only sent one field type rather than alternating between the two. This created a 262 or 312 line progressive scan signal (with half the vertical resol |
https://en.wikipedia.org/wiki/Poromechanics | Poromechanics is a branch of physics, and specifically of continuum mechanics and acoustics, that studies the behaviour of fluid-saturated porous media. A porous medium or a porous material is a solid (referred to as the matrix) permeated by an interconnected network of pores (voids) filled with a fluid (liquid or gas). Usually both the solid matrix and the pore network, or pore space, are assumed to be continuous, so as to form two interpenetrating continua, as in a sponge. Natural substances including rocks, soils, and biological tissues such as the heart and cancellous bone, as well as man-made materials such as foams and ceramics, can be considered as porous media. Porous media whose solid matrix is elastic and whose fluid is viscous are called poroelastic. A poroelastic medium is characterised by its porosity and permeability, as well as by the properties of its constituents (solid matrix and fluid).
The concept of a porous medium originally emerged in soil mechanics, and in particular in the works of Karl von Terzaghi, the father of soil mechanics. However a more general concept of a poroelastic medium, independent of its nature or application, is usually attributed to Maurice Anthony Biot (1905–1985), a Belgian-American engineer. In a series of papers published between 1935 and 1962 Biot developed the theory of dynamic poroelasticity (now known as Biot theory) which gives a complete and general description of the mechanical behaviour of a poroelastic medium. Biot's equations of the linear theory of poroelasticity are derived from
the equations of linear elasticity for a solid matrix,
the Navier–Stokes equations for a viscous fluid, and
Darcy's law for a flow of fluid through a porous matrix.
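For reference, the three ingredients have these standard textbook forms (quoted here for convenience; Biot's actual coupled equations add interaction terms between the solid and fluid continua, and η denotes the fluid's dynamic viscosity to avoid clashing with the shear modulus μ):

```latex
% Linear elasticity of the solid matrix (isotropic Hooke's law):
\sigma_{ij} = \lambda\,\epsilon_{kk}\,\delta_{ij} + 2\mu\,\epsilon_{ij}
% Navier--Stokes for the viscous pore fluid (incompressible form):
\rho_f\left(\partial_t \mathbf{v} + \mathbf{v}\cdot\nabla\mathbf{v}\right)
  = -\nabla p + \eta\,\nabla^{2}\mathbf{v}
% Darcy's law for flow through the porous matrix (k = permeability):
\mathbf{q} = -\frac{k}{\eta}\,\nabla p
```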
One of the key findings of the theory of poroelasticity is that in poroelastic media there exist three types of elastic waves: a shear or transverse wave, and two types of longitudinal or compressional waves, which Biot called type I and type II waves. The transverse and type I (or fast) longitudina |
https://en.wikipedia.org/wiki/Threading%20%28protein%20sequence%29 | In molecular biology, protein threading, also known as fold recognition, is a method of protein modeling which is used to model those proteins which have the same fold as proteins of known structures, but do not have homologous proteins with known structure.
It differs from the homology modeling method of structure prediction as it (protein threading) is used for proteins which do not have their homologous protein structures deposited in the Protein Data Bank (PDB), whereas homology modeling is used for those proteins which do. Threading works by using statistical knowledge of the relationship between the structures deposited in the PDB and the sequence of the protein which one wishes to model.
The prediction is made by "threading" (i.e. placing, aligning) each amino acid in the target sequence to a position in the template structure, and evaluating how well the target fits the template. After the best-fit template is selected, the structural model of the sequence is built based on the alignment with the chosen template. Protein threading is based on two basic observations: that the number of different folds in nature is fairly small (approximately 1300); and that 90% of the new structures submitted to the PDB in the past three years have similar structural folds to ones already in the PDB.
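In outline, threading scores the target sequence against each candidate template and keeps the best fit. The sketch below is a deliberately naive illustration (the burial profiles, scoring table, and gapless alignment are invented stand-ins; real methods use knowledge-based potentials and dynamic-programming alignment):

```python
# Minimal fold-recognition sketch: "thread" a target sequence onto each
# template and keep the best-scoring one. All scores are toy placeholders.
def compatibility(residue: str, environment: str) -> float:
    # Toy environment-fitness rule: hydrophobic residues prefer burial.
    hydrophobic = set("AVILMFWC")
    if environment == "buried":
        return 1.0 if residue in hydrophobic else -0.5
    return 1.0 if residue not in hydrophobic else -0.5

def thread_score(sequence: str, template_envs: list[str]) -> float:
    # Naive gapless threading: residue i sits at template position i.
    n = min(len(sequence), len(template_envs))
    return sum(compatibility(sequence[i], template_envs[i]) for i in range(n))

def recognize_fold(sequence: str, templates: dict[str, list[str]]) -> str:
    # Pick the template whose burial profile the sequence fits best.
    return max(templates, key=lambda name: thread_score(sequence, templates[name]))

# Hypothetical templates, each reduced to a per-position burial profile.
templates = {
    "all-alpha": ["buried", "exposed"] * 4,
    "all-beta":  ["buried", "buried", "exposed", "exposed"] * 2,
}
print(recognize_fold("AVIDEKLE", templates))  # -> 'all-alpha' here
```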
Classification of protein structure
The Structural Classification of Proteins (SCOP) database provides a detailed and comprehensive description of the structural and evolutionary relationships of proteins of known structure. Proteins are classified to reflect both structural and evolutionary relatedness. Many levels exist in the hierarchy, but the principal levels are family, superfamily, and fold:
Family (clear evolutionary relationship): Proteins clustered together into families are clearly evolutionarily related. Generally, this means that pairwise residue identities between the proteins are 30% and greater. However, in some cases similar functions and structures provide definitive ev |
https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Katz%20p-curvature%20conjecture | In mathematics, the Grothendieck–Katz p-curvature conjecture is a local-global principle for linear ordinary differential equations, related to differential Galois theory and in a loose sense analogous to the result in the Chebotarev density theorem considered as the polynomial case. It is a conjecture of Alexander Grothendieck from the late 1960s, and apparently not published by him in any form.
The general case remains unsolved, despite recent progress; it has been linked to geometric investigations involving algebraic foliations.
Formulation
In the simplest possible statement, the conjecture can be stated in its essentials for a vector system written as dv/dz = Av
for a vector v of size n, and an n×n matrix A of algebraic functions with algebraic number coefficients. The question is to give a criterion for when there is a full set of algebraic function solutions, meaning a fundamental matrix (i.e. n vector solutions put into a block matrix). For example, a classical question was for the hypergeometric equation: when does it have a pair of algebraic solutions, in terms of its parameters? The answer is known classically as Schwarz's list. In monodromy terms, the question is of identifying the cases of finite monodromy group.
By reformulation and passing to a larger system, the essential case is that of a matrix A of rational functions with rational number coefficients. Then a necessary condition is that for almost all prime numbers p, the system defined by reduction modulo p should also have a full set of algebraic solutions, over the finite field with p elements.
Grothendieck's conjecture is that these necessary conditions, for almost all p, should be sufficient. The connection with p-curvature is that the mod p condition stated is the same as saying the p-curvature, formed by a recurrence operation on A, is zero; so another way to say it is that p-curvature of 0 for almost all p implies enough algebraic solutions of the original equation.
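The "recurrence operation on A" can be made explicit. A hedged sketch of the standard formulation, obtained by repeatedly differentiating the system dv/dz = Av:

```latex
% Higher derivatives of solutions define matrices A_k:
A_{1} = A, \qquad A_{k+1} = A_{k}' + A_{k}A, \qquad v^{(k)} = A_{k}v .
% The p-curvature is A_p reduced mod p, and the conjecture reads:
% A_p \equiv 0 \pmod{p} for almost all primes p
% if and only if the system has a full set of algebraic solutions.
```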
Katz's formulation for the Galois group
Nichol |
https://en.wikipedia.org/wiki/Norton%20amplifier | A Norton amplifier or current differencing amplifier (CDA) is an electronic amplifier with two low impedance current inputs and one low impedance voltage output where the output voltage is proportional to the difference between the two input currents. A Norton amplifier is a current controlled voltage source (CCVS) controlled by the difference of two input currents.
The Norton amplifier can be regarded as the dual of the operational transconductance amplifier (OTA) which takes a differential voltage input and provides a high impedance current output. The OTA has a gain measured in units of transconductance (siemens) whereas the Norton amplifier has a gain measured in units of transimpedance (ohms).
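The duality can be summarized by the two defining transfer relations (notation is illustrative: R_m is the transimpedance gain, g_m the transconductance gain):

```latex
% Norton amplifier (CDA): voltage output from a current difference
V_{\mathrm{out}} = R_{m}\,(I^{+} - I^{-})
% OTA: current output from a voltage difference -- the dual relation
I_{\mathrm{out}} = g_{m}\,(V^{+} - V^{-})
```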
A commercial example of this circuit is the LM3900 quad operational amplifier and its high-speed cousin the LM359 (400MHz gain bandwidth product).
The LM3900 was introduced in the mid-1970s, and was designed to be an easy-to-use single-supply op amp with input bias currents (~30nA) comparable to those of other bipolar op amps of the time period (LM741, LM324), while having rail-to-rail output and a much higher gain bandwidth product (2.5MHz). The LM3900 was popular with designers of analog synthesizers. The LM359 was introduced in the early 1990s as a video amplifier capable of high amplification at video frequencies (10MHz).
See also
Current differencing transconductance amplifier, current difference input and differential current output
Current-feedback operational amplifier, single-ended current input and voltage output. |