https://en.wikipedia.org/wiki/Large%20Volume%20Detector | The Large Volume Detector (LVD) is a particle physics experiment situated in the Gran Sasso laboratory in Italy and is operated by the Italian Institute of Nuclear Physics (INFN). It has been in operation since June 1992, and is a member of the Supernova Early Warning System. Among other work, the detector should be able to detect neutrinos from our galaxy and possibly nearby galaxies. The LVD uses 840 scintillator counters around a large tank of hydrocarbons. The detector can detect both charged current and neutral current interactions.
In 2012, the collaboration published measurements of the speed of neutrinos in the CERN Neutrinos to Gran Sasso beam. The results were consistent with the speed of light. See measurements of neutrino speed. |
https://en.wikipedia.org/wiki/Dirichlet%27s%20principle | In mathematics, and particularly in potential theory, Dirichlet's principle is the assumption that the minimizer of a certain energy functional is a solution to Poisson's equation.
Formal statement
Dirichlet's principle states that, if the function $u(x)$ is the solution to Poisson's equation
$$\Delta u + f = 0$$
on a domain $\Omega$ of $\mathbb{R}^n$ with boundary condition
$$u = g \quad \text{on the boundary } \partial\Omega,$$
then u can be obtained as the minimizer of the Dirichlet energy
$$E[v(x)] = \int_\Omega \left( \tfrac{1}{2} |\nabla v|^2 - v f \right) \, \mathrm{d}x$$
amongst all twice differentiable functions $v$ such that $v = g$ on $\partial\Omega$ (provided that there exists at least one function making the Dirichlet's integral finite). This concept is named after the German mathematician Peter Gustav Lejeune Dirichlet.
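As a short verification, not part of the article text but standard calculus of variations, the first variation of the Dirichlet energy recovers Poisson's equation:

```latex
% First variation of E at the minimizer u, for any smooth v with v = 0 on
% the boundary (so that u + \varepsilon v still satisfies the boundary condition):
\begin{align*}
\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0} E[u+\varepsilon v]
  &= \int_\Omega \left( \nabla u \cdot \nabla v - f\,v \right) dx \\
  &= -\int_\Omega \left( \Delta u + f \right) v \, dx
     \qquad\text{(Green's first identity; the boundary term vanishes)}
\end{align*}
% Since this must vanish for every such v, it forces \Delta u + f = 0,
% i.e. the minimizer solves Poisson's equation.
```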
History
The name "Dirichlet's principle" is due to Riemann, who applied it in the study of complex analytic functions.
Riemann (and others such as Gauss and Dirichlet) knew that Dirichlet's integral is bounded below, which establishes the existence of an infimum; however, he took for granted the existence of a function that attains the minimum. Weierstrass published the first criticism of this assumption in 1870, giving an example of a functional that has a greatest lower bound which is not a minimum value. Weierstrass's example was the functional
$$J(\varphi) = \int_{-1}^{1} \left( x \frac{d\varphi}{dx} \right)^2 dx$$
where $\varphi$ is continuous on $[-1,1]$, continuously differentiable on $(-1,1)$, and subject to boundary conditions $\varphi(-1) = a$, $\varphi(1) = b$, where $a$ and $b$ are constants and $a \neq b$. Weierstrass showed that $\inf J = 0$, but no admissible function $\varphi$ can make $J(\varphi)$ equal 0. This example did not disprove Dirichlet's principle per se, since the example integral is different from Dirichlet's integral. But it did undermine the reasoning that Riemann had used, and spurred interest in proving Dirichlet's principle as well as broader advancements in the calculus of variations and ultimately functional analysis.
In 1900, Hilbert justified Riemann's use of Dirichlet's principle by developing the direct method in the calculus of variations.
See also
Dirichlet problem
Hilbert's twentieth problem
Plateau's problem
Green's first identity
Notes |
https://en.wikipedia.org/wiki/Distributed%20acoustic%20sensing | Rayleigh scattering based distributed acoustic sensing (DAS) systems use fiber optic cables to provide distributed strain sensing. In DAS, the optical fiber cable becomes the sensing element and measurements are made, and in part processed, using an attached optoelectronic device. Such a system allows acoustic frequency strain signals to be detected over large distances and in harsh environments.
Fundamentals of Rayleigh scatter based fiber optic sensing
In Rayleigh scatter based distributed fiber optic sensing, a coherent laser pulse is sent along an optic fiber, and scattering sites within the fiber cause the fiber to act as a distributed interferometer with a gauge length approximately equal to the pulse length. The intensity of the reflected light is measured as a function of time after transmission of the laser pulse. This is known as Coherent Rayleigh Optical Time Domain Reflectometry (COTDR). When the pulse has had time to travel the full length of the fiber and back, the next laser pulse can be sent along the fiber. Changes in the reflected intensity of successive pulses from the same region of fiber are caused by changes in the optical path length of that section of fiber. This type of system is very sensitive to both strain and temperature variations of the fiber and measurements can be made almost simultaneously at all sections of the fiber.
Capabilities of Rayleigh-based systems
Maximum range
The optical pulse is attenuated as it propagates along the fiber. For a single mode fiber operating at 1550 nm, a typical attenuation is 0.2 dB/km. Since the light must make a double pass along each section of fiber, this means each 1 km causes a total loss of 0.4 dB. The maximum range of the system occurs when the amplitude of the reflected pulse becomes so low it is impossible to obtain a clear signal from it. It is not possible to counteract this effect by increasing the input power because above a certain level this will induce nonlinear optical effects wh |
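To make the loss arithmetic concrete, a minimal sketch: the 0.2 dB/km attenuation figure is taken from the paragraph above, while the receiver link budget is an assumed, illustrative value.

```python
# Round-trip fiber loss and a rough maximum-range estimate for a
# Rayleigh-backscatter (COTDR) system. The 0.2 dB/km figure is from the
# text; the 25 dB link budget is an assumed, illustrative value.

ATTENUATION_DB_PER_KM = 0.2   # one-way loss, single-mode fiber at 1550 nm
LINK_BUDGET_DB = 25.0         # assumed usable dynamic range of the receiver

def round_trip_loss_db(distance_km: float) -> float:
    """Pulse travels out and back, so the per-km loss doubles."""
    return 2 * ATTENUATION_DB_PER_KM * distance_km

max_range_km = LINK_BUDGET_DB / (2 * ATTENUATION_DB_PER_KM)
print(round_trip_loss_db(10.0))  # 4.0 dB for a 10 km section
print(max_range_km)              # 62.5 km under these assumptions
```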
https://en.wikipedia.org/wiki/Crystallographic%20image%20processing | Crystallographic image processing (CIP) is traditionally understood as being a set of key steps in the determination of the atomic structure of crystalline matter from high-resolution electron microscopy (HREM) images obtained in a transmission electron microscope (TEM) that is run in the parallel illumination mode. The term was created in the research group of Sven Hovmöller at Stockholm University during the early 1980s and rapidly became a label for the "3D crystal structure from 2D transmission/projection images" approach. Since the late 1990s, analogous and complementary image processing techniques, directed towards goals that are either complementary to or entirely beyond the scope of the original inception of CIP, have been developed independently by members of the computational symmetry/geometry, scanning transmission electron microscopy, scanning probe microscopy, and applied crystallography communities.
HREM image contrasts and crystal potential reconstruction methods
Many-beam HREM images of extremely thin samples are only directly interpretable in terms of a projected crystal structure if they have been recorded under special conditions, i.e. the so-called Scherzer defocus. In that case the positions of the atom columns appear as black blobs in the image (when the spherical aberration coefficient of the objective lens is positive - as is always the case for uncorrected TEMs). Difficulties for interpretation of HREM images arise for other defocus values because the transfer properties of the objective lens alter the image contrast as a function of defocus. Hence atom columns which appear at one defocus value as dark blobs can turn into white blobs at a different defocus and vice versa. In addition to the objective lens defocus (which can easily be changed by the TEM operator), the thickness of the crystal under investigation also has a significant influence on the image contrast. These two factors often mix and yield HRE |
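The defocus dependence described above is conventionally summarized by the phase contrast transfer function. A minimal sketch under the weak-phase-object approximation, with illustrative (assumed) instrument parameters and one common sign convention:

```python
import numpy as np

# Phase contrast transfer function CTF(k) = sin(chi(k)) for a TEM objective
# lens, weak-phase-object approximation. Parameter values are illustrative
# assumptions (roughly a 200 kV instrument), not taken from the text above.
wavelength = 2.51e-12                     # m, electron wavelength at ~200 kV
cs = 1.0e-3                               # m, spherical aberration (positive)
defocus = 1.2 * np.sqrt(cs * wavelength)  # m, near the (extended) Scherzer defocus

k = np.linspace(0, 8e9, 2000)             # spatial frequency, 1/m
chi = 0.5 * np.pi * cs * wavelength**3 * k**4 - np.pi * defocus * wavelength * k**2
ctf = np.sin(chi)
# Sign reversals of the CTF as the defocus changes are why atom columns can
# appear as dark blobs at one defocus and white blobs at another.
```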
https://en.wikipedia.org/wiki/Boletus%20pinophilus | Boletus pinophilus, commonly known as the pine bolete or pinewood king bolete, is a basidiomycete fungus of the genus Boletus found throughout Europe and western Asia. Described by Italian naturalist Carlo Vittadini in 1835, B. pinophilus was for many years considered a subspecies or form of the porcini mushroom B. edulis before genetic studies confirmed its distinct status. In 2008, B. pinophilus in western North America were reclassified as a new species, B. rex-veris. B. pinophilus is edible, and may be preserved and cooked.
The fungus grows predominantly in coniferous forests on sandy soils, forming ectomycorrhizal associations in symbiosis with living trees by enveloping the tree's underground roots with sheaths of fungal tissue. Host trees include various species of pine, the European silver fir and European spruce, as well as deciduous trees such as chestnut trees, oak and beech. The fungus produces spore-bearing fruit bodies (known as "mushrooms") above ground under pine trees in summer and autumn. It has a red-brown to maroon-coloured cap and a large and bulbous stipe, covered with coarse orange-red reticulation. As with other boletes, the size of the fruiting body is variable.
Description
The fruiting body has a convex-shaped cap, at first small in relation to its stipe, expanding in volume as it matures. The skin of the cap is dry, matte and can be coloured from maroon to chocolate brown with a reddish tint. It is thicker than other porcini-like boletes and is gelatinous. These characteristics distinguish it visually from relatives such as Boletus edulis, B. reticulatus and B. aereus. The young, immature cap may have a pale pink colour and a white, powdery flush.
As with all boletes, the size of the fruiting body can vary greatly. The cap diameter can be as much as , the stem height and stem diameter . Measuring tall by wide, the bulbous stipe is often large, swollen and imposing, bearing a network pattern, much coarser in this species than other p |
https://en.wikipedia.org/wiki/AlterEgo | AlterEgo is a wearable silent speech output-input device developed by MIT Media Lab. The device is attached around the head, neck, and jawline and translates the electrical signals sent to the speech muscles during internal vocalization into words on a computer, without audible speech.
Description
The device consists of 7 small electrodes that attach at various points around the jaw-line and mouth to receive the electrical inputs to the muscles used for speech. It looks similar to a sling for the head, neck and jaw.
Background
Scientists Arnav Kapur, Shreyas Kapur, and Pattie Maes of the Fluid Interfaces group at MIT Media Lab designed the prototype and presented the work at the Conference on Intelligent User Interfaces in March 2018 in Tokyo. They reported that, when testing the accuracy of a classifier trained on data where users were instructed to "read the number to themselves, without producing a sound and moving their lips," they were able to classify the digit (between 0 and 9, i.e., ten classes) with a 92 percent accuracy rate.
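Purely as an illustration of the ten-class setup described above, a hypothetical sketch of a classifier over multi-electrode signal windows; the synthetic data, features, and model choice are all assumptions and do not reproduce the published system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical illustration only: classify internally vocalized digits (0-9)
# from 7-channel electrode windows. Random stand-in data; the published
# system's actual features and pipeline are not reproduced here.
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 500, 7, 250
X = rng.normal(size=(n_trials, n_channels * n_samples))  # flattened windows
y = rng.integers(0, 10, size=n_trials)                   # digit labels

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # ~0.1 (chance) on random data; real signals did far better
```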
See also
Silent speech interface
Imagined speech / Subvocalization
External links
MIT Alterego overview
MIT news
International Conference on Intelligent User Interfaces
Fluid Interfaces group
Transcribing the Voice in Your Head |
https://en.wikipedia.org/wiki/Non-logical%20symbol | In logic, the formal languages used to create expressions consist of symbols, which can be broadly divided into constants and variables. The constants of a language can further be divided into logical symbols and non-logical symbols (sometimes also called logical and non-logical constants).
The non-logical symbols of a language of first-order logic consist of predicates and individual constants. These include symbols that, in an interpretation, may stand for individual constants, variables, functions, or predicates. A language of first-order logic is a formal language over the alphabet consisting of its non-logical symbols and its logical symbols. The latter include logical connectives, quantifiers, and variables that stand for statements.
A non-logical symbol only has meaning or semantic content when one is assigned to it by means of an interpretation. Consequently, a sentence containing a non-logical symbol lacks meaning except under an interpretation, so a sentence is said to be true or false under an interpretation. These concepts are defined and discussed in the article on first-order logic, and in particular the section on syntax.
The logical constants, by contrast, have the same meaning in all interpretations. They include the symbols for truth-functional connectives (such as "and", "or", "not", "implies", and logical equivalence) and the symbols for the quantifiers "for all" and "there exists".
The equality symbol is sometimes treated as a non-logical symbol and sometimes treated as a symbol of logic. If it is treated as a logical symbol, then any interpretation will be required to interpret the equality sign using true equality; if interpreted as a non-logical symbol, it may be interpreted by an arbitrary equivalence relation.
Signatures
A signature is a set of non-logical constants together with additional information identifying each symbol as either a constant symbol, or a function symbol of a specific arity n (a natural number), or a relation |
https://en.wikipedia.org/wiki/Mucigel | Mucigel is a slimy substance that covers the root cap of the roots of plants. It is a highly hydrated polysaccharide, most likely a pectin, which is secreted from the outermost (epidermal) cells of the rootcap. Mucigel is formed in the Golgi bodies of such cells, and is secreted through the process of exocytosis. The layer of microorganism-rich soil surrounding the mucigel is called the rhizosphere.
Mucigel serves several functions, including:
Protection of rootcap; prevents desiccation
Lubrication of rootcap; allows root to more efficiently penetrate the soil
Creation of symbiotic environment for nitrogen fixing bacteria (i.e. diazotrophs) and fungi (which help with water absorption)
Provision of a 'diffusion bridge' between the fine root system and soil particles, which allows for a more efficient uptake of water and mineral nutrients by roots in dry soils.
Mucigel is composed of mucilage, microbial exopolysaccharides and glomalin proteins.
See also
Meristem |
https://en.wikipedia.org/wiki/Suspension%20%28dynamical%20systems%29 | Suspension is a construction passing from a map to a flow. Namely, let $X$ be a metric space, $f\colon X \to X$ be a continuous map and $r\colon X \to \mathbb{R}^{+}$ be a function (roof function or ceiling function) bounded away from 0. Consider the quotient space:
$$X_r = \{(x,t) : 0 \le t \le r(x),\ x \in X\}\,/\,(x, r(x)) \sim (f(x), 0).$$
The suspension of $f$ with roof function $r$ is the semiflow $f_t\colon X_r \to X_r$ induced by the time translation $T_t(x, s) = (x, s + t)$.
If $r \equiv 1$, then the quotient space is also called the mapping torus of $(X, f)$. |
https://en.wikipedia.org/wiki/Group%20II%20pyridoxal-dependent%20decarboxylases | In molecular biology, group II pyridoxal-dependent decarboxylases are a family of enzymes including aromatic-L-amino-acid decarboxylase (L-dopa decarboxylase or tryptophan decarboxylase), which catalyses the decarboxylation of tryptophan to tryptamine, tyrosine decarboxylase, which converts tyrosine into tyramine, and histidine decarboxylase, which catalyses the decarboxylation of histidine to histamine.
Pyridoxal-5'-phosphate-dependent amino acid decarboxylases can be divided into four groups based on amino acid sequence. Group II includes glutamate, histidine, tyrosine, and aromatic-L-amino-acid decarboxylases.
See also
Group I pyridoxal-dependent decarboxylases
Group III pyridoxal-dependent decarboxylases
Group IV pyridoxal-dependent decarboxylases |
https://en.wikipedia.org/wiki/University%20of%20Michigan%20Library | The University of Michigan Library is the academic library system of the University of Michigan. The university's 38 constituent and affiliated libraries together make it the second largest research library by number of volumes in the United States.
As of 2019–20, the University Library contained more than 14,543,814 volumes, while all campus library systems combined held more than 16,025,996 volumes. In the 2019–2020 fiscal year, the Library also held 221,979 serials and recorded over 4,239,355 annual visits.
Founded in 1838, the University Library is the university's main library and is housed in 12 buildings with more than 20 libraries, among the most significant of which are the Shapiro Undergraduate Library, Hatcher Graduate Library, Special Collections Library, and Taubman Health Sciences Library. However, several U-M libraries are independent of the University Library: the Bentley Historical Library, the William L. Clements Library, the Gerald R. Ford Library, the Kresge Business Administration Library of the Ross School of Business, and the Law Library of the University of Michigan Law School. The University Library is also separate from the libraries of the University of Michigan–Dearborn (Mardigian Library) and the University of Michigan–Flint (Frances Willson Thompson Library).
The University of Michigan was the original home of the JSTOR database, which contains about 750,000 digitized pages from the entire pre-1990 backfile of ten journals of history and economics. In December 2004, the University of Michigan announced a book digitization program in collaboration with Google (known as Michigan Digitization Project), which is both revolutionary and controversial. Books scanned by Google are included in HathiTrust, a digital library created by a partnership of major research institutions. As of March 2014, the following collections had been digitized: Art, Architecture and Engineering Library; Bentley Historical Library; Buhr Building (large portions); Dent |
https://en.wikipedia.org/wiki/Color%20Light%20Output | Color Light Output (CLO), also known as Color Brightness, is a specification that provides information on a projector’s ability to reproduce color. Color Light Output is specified in the lumen unit and measures a color projection system's ability to correctly reproduce color brightness.
Objective
The Color Light Output specification provides projector buyers the advantage of a standard, objective metric with which to make product comparisons and selections. Projector manufacturers generally provide information about resolution, white light brightness and contrast ratio as descriptors of projector performance. However, none of these specifications directly covers a projector’s color performance. The Color Light Output metric complements existing specifications to give buyers an accurate way to evaluate competing projector models more thoroughly.
Background
In 2009, the National Institute of Standards and Technology (NIST) issued a scientific paper stating that in addition to the typical white light brightness rating of display devices, there was a need for providing "an equivalent measurement that will better describe a projector's color performance when rendering full color imagery".
In 2012, the Society for Information Display (SID), a global professional organization focused on the development of the display industry, published the Color Light Output standard to provide display and projector buyers with an easy metric to evaluate color performance. The Color Light Output standard was developed after the SID conducted comprehensive research and performance evaluations and concluded that a color performance standard was scientifically valid and relevant for the display industry. Color Light Output measurement methodologies for displays, including projectors, are specified in a document entitled The International Display Measurement Standard (IDMS), which was developed in collaboration with SID's affiliated organizations: the International Committee for Display Me |
https://en.wikipedia.org/wiki/Romaleosyrphus%20villosus | The name Romaleosyrphus villosus was published by Jacques-Marie-Frangile Bigot in 1882, in reference to a species of hoverfly from Mexico.
Unfortunately, its name has been confused with that of a related species from the United States, originally described as Merapioidus villosus and published in 1879 by the same author. These two species are now both placed in the same genus, Criorhina, and only the older of the two names (the one from 1879) can remain, as Criorhina villosa. The 1882 name will therefore need to be replaced; as of 2023 this has not yet occurred, so the species is effectively nameless, apart from its original (but nomenclaturally invalid) name. |
https://en.wikipedia.org/wiki/Chloramine-T | Chloramine-T is the organic compound with the formula CH3C6H4SO2NClNa. Both the anhydrous salt and its trihydrate are known, and both are white powders. Chloramine-T is used as a reagent in organic synthesis. It is commonly used as a cyclizing agent in the synthesis of aziridines, oxadiazoles, isoxazoles and pyrazoles. It is inexpensive, has low toxicity, and acts as a mild oxidizing agent. In addition, it acts as a source of nitrogen anions and electrophilic cations. It may degrade on long-term exposure to the atmosphere, so care must be taken during its storage.
Reactions
Chloramine-T contains active (electrophilic) chlorine. Its reactivity is similar to that of sodium hypochlorite. Aqueous solutions of chloramine-T are slightly basic (pH typically 8.5). The pKa of the closely related N-chlorophenylsulfonamide C6H5SO2NClH is 9.5.
It is prepared by oxidation of toluenesulfonamide with sodium hypochlorite, with the latter being produced in situ from sodium hydroxide and chlorine (Cl2):
CH3C6H4SO2NH2 + NaOCl → CH3C6H4SO2NClNa + H2O
Uses
Reagent in amidohydroxylation
The Sharpless oxyamination converts an alkene to a vicinal aminoalcohol. A common source of the amido component of this reaction is chloramine-T. Vicinal aminoalcohols are important products in organic synthesis and recurring pharmacophores in drug discovery.
Oxidant
Chloramine-T is a strong oxidant. It oxidizes hydrogen sulfide to sulfur, and converts mustard gas into a harmless crystalline sulfimide.
It converts iodide to iodine monochloride (ICl). ICl rapidly undergoes electrophilic substitution predominantly with activated aromatic rings, such as those of the amino acid tyrosine. Thus, chloramine-T is used to incorporate iodine into peptides and proteins. Chloramine-T together with iodogen or lactoperoxidase is commonly used for labeling peptides and proteins with radioiodine isotopes.
Certifications
EN 1276 Bactericidal
EN 13713 Bactericidal
EN 14675 Virucidal
EN 14476 Virucidal Norovirus
EN 1650 Fungicidal
EN 13704 Sporicida |
https://en.wikipedia.org/wiki/Extensible%20Configuration%20Checklist%20Description%20Format | The Extensible Configuration Checklist Description Format (XCCDF) is an XML format specifying security checklists, benchmarks and configuration documentation.
XCCDF development is being pursued by NIST, the NSA, The MITRE Corporation, and the US Department of Homeland Security.
XCCDF is intended to serve as a replacement for the security hardening and analysis documentation written in prose. XCCDF is used by the Security Content Automation Protocol. |
https://en.wikipedia.org/wiki/Paralepistopsis%20amoenolens | Paralepistopsis amoenolens is an agaric fungus in the Tricholomataceae family. It is commonly known as the paralysis funnel.
Taxonomy
It was first described in 1975 by the French mycologist Georges Jean Louis Malençon from a specimen found in Morocco and classified as Clitocybe amoenolens.
In 2012, following DNA analysis, Vizzini and Ercole assigned this species to the new genus Paralepistopsis, which forms a separate clade from other Clitocybes. This change has been accepted by Index Fungorum and the Global Biodiversity Information Facility and so the correct name is currently Paralepistopsis amoenolens.
Toxicity
It was discovered to be poisonous after several people had consumed specimens all found in the alpine Maurienne valley in the Savoie department over three years. They had mistaken it for the edible common funnel cap (Infundibulicybe sp.) or Paralepista flaccida (formerly Lepista inversa).
The resulting syndrome of fungus-induced erythromelalgia lasted from 8 days to 5 months, although one person exhibited symptoms for three years.
This species contains acromelic acids, including acromelic acid A, a potent neurotoxin with the chemical formula C13H14N2O7 that is associated with paralysis and seizures.
Similar species
Paralepistopsis acromelalga is a poisonous species known from Japan, commonly called the poison dwarf bamboo mushroom. It had been discovered to be poisonous in 1918. |
https://en.wikipedia.org/wiki/George%20Green%20%28mathematician%29 | George Green (14 July 1793 – 31 May 1841) was a British mathematical physicist who wrote An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828. The essay introduced several important concepts, among them a theorem similar to the modern Green's theorem, the idea of potential functions as currently used in physics, and the concept of what are now called Green's functions. Green was the first person to create a mathematical theory of electricity and magnetism and his theory formed the foundation for the work of other scientists such as James Clerk Maxwell, William Thomson, and others. His work on potential theory ran parallel to that of Carl Friedrich Gauss.
Green's life story is remarkable in that he was almost entirely self-taught. He received only about one year of formal schooling as a child, between the ages of 8 and 9.
Early life
Green was born and lived for most of his life in the English town of Sneinton, Nottinghamshire, now part of the city of Nottingham. His father, also named George, was a baker who had built and owned a brick windmill used to grind grain.
In his youth, Green was described as having a frail constitution and a dislike for doing work in his father's bakery. He had no choice in the matter, however, and as was common for the time he likely began working daily to earn his living at the age of five.
Robert Goodacre's Academy
During this era it was common for only 25–50% of children in Nottingham to receive any schooling. The majority of schools were Sunday schools, run by the Church, and children would typically attend for one or two years only.
Recognizing the young Green's above average intellect, and being in a strong financial situation due to his successful bakery, his father enrolled him in March 1801 at Robert Goodacre's Academy in Upper Parliament Street. Robert Goodacre was a well-known science populariser and educator of the time. He published Essay on the Education of Youth, in |
https://en.wikipedia.org/wiki/Relative%20species%20abundance | Relative species abundance is a component of biodiversity and is a measure of how common or rare a species is relative to other species in a defined location or community. Relative abundance is the percent composition of an organism of a particular kind relative to the total number of organisms in the area. Relative species abundances tend to conform to specific patterns that are among the best-known and most-studied patterns in macroecology. Different populations in a community exist in relative proportions; this idea is known as relative abundance.
Introduction
Relative species abundance
Relative species abundance and species richness describe key elements of biodiversity. Relative species abundance refers to how common or rare a species is relative to other species in a given location or community.
Usually relative species abundances are described for a single trophic level. Because such species occupy the same trophic level they will potentially or actually compete for similar resources. For example, relative species abundances might describe all terrestrial birds in a forest community or all planktonic copepods in a particular marine environment.
Relative species abundances follow very similar patterns over a wide range of ecological communities. When plotted as a histogram, the number of species represented by 1, 2, 3, ..., n individuals usually fits a hollow curve, such that most species are rare (represented by a single individual in a community sample) and relatively few species are abundant (represented by a large number of individuals in a community sample). This pattern has been long recognized and can be broadly summarized with the statement that "most species are rare". For example, Charles Darwin noted in 1859 in The Origin of Species that "... rarity is the attribute of vast numbers of species in all classes...."
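A minimal sketch of building the "hollow curve" histogram just described; the abundance list is made up for illustration:

```python
from collections import Counter

# Number of species represented by n individuals, from a list of per-species
# abundance counts (illustrative values, not real survey data).
abundances = [1, 1, 1, 1, 2, 2, 3, 5, 8, 21, 110]  # individuals per species
hist = Counter(abundances)
for n in sorted(hist):
    print(f"{hist[n]} species with {n} individual(s)")
# Most species sit at the rare end (n = 1), few at the abundant end --
# the hollow-curve pattern described above.
```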
Species abundance patterns can be best visualized in the form of relative abundance distribution plots. The consistency o |
https://en.wikipedia.org/wiki/Equivalent%20definitions%20of%20mathematical%20structures | In mathematics, equivalent definitions are used in two somewhat different ways. First, within a particular mathematical theory (for example, Euclidean geometry), a notion (for example, ellipse or minimal surface) may have more than one definition. These definitions are equivalent in the context of a given mathematical structure (Euclidean space, in this case). Second, a mathematical structure may have more than one definition (for example, topological space has at least seven definitions; ordered field has at least two definitions).
In the former case, equivalence of two definitions means that a mathematical object (for example, geometric body) satisfies one definition if and only if it satisfies the other definition.
In the latter case, the meaning of equivalence (between two definitions of a structure) is more complicated, since a structure is more abstract than an object. Many different objects may implement the same structure.
Isomorphic implementations
Natural numbers may be implemented as 0 = {}, 1 = {0} = {{}}, 2 = {0, 1} = {{}, {{}}}, 3 = {0, 1, 2} = {{}, {{}}, {{}, {{}}}} and so on; or alternatively as 0 = {}, 1 = {0} = {{}}, 2 = {1} = {{{}}} and so on. These are two different but isomorphic implementations of natural numbers in set theory.
They are isomorphic as models of Peano axioms, that is, triples (N,0,S) where N is a set, 0 an element of N, and S (called the successor function) a map of N to itself (satisfying appropriate conditions). In the first implementation S(n) = n ∪ {n}; in the second implementation S(n) = {n}. As emphasized in Benacerraf's identification problem, the two implementations differ in their answer to the question whether 0 ∈ 2; however, this is not a legitimate question about natural numbers (since the relation ∈ is not stipulated by the relevant signature(s), see the next section). Similarly, different but isomorphic implementations are used for complex numbers.
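An executable illustration of the two implementations and of Benacerraf's point, using frozensets as a stand-in for pure sets (the encoding as Python objects is, of course, an assumption of the sketch):

```python
# Von Neumann vs. Zermelo encodings of the natural numbers as frozensets.
def s_vn(n):  # first implementation: S(n) = n U {n}
    return n | frozenset({n})

def s_z(n):   # second implementation: S(n) = {n}
    return frozenset({n})

zero = frozenset()
two_vn = s_vn(s_vn(zero))   # {0, 1}
two_z  = s_z(s_z(zero))     # {{{}}}

print(zero in two_vn)  # True:  0 in 2 under the first implementation
print(zero in two_z)   # False: 0 not in 2 under the second (Benacerraf's point)
```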
Deduced structures and cryptomorphisms
The successor function S on natural numbers leads to arithmetic operations, addition and multiplication, |
https://en.wikipedia.org/wiki/Displair |
Displair is a 3D interactive raster display technology developed by a Russian company of the same name. The Displair projects images onto sheets of water droplets suspended in air, giving the illusion of a hologram. Unlike other cold fog projecting technologies, the images projected by the Displair can also respond rapidly to multi-touch manipulation, as well as allowing taste and aroma to be incorporated.
History
Developer Maxim Kamanin introduced the Displair at Seliger 2010. In July of that year he chose the "Displair" name for both the product and the company as a portmanteau of the English words "display" and "air". The company subsequently obtained investment for further development of the prototype, technology licensing, and small-scale commercial production. Applications to date have included displays for in-store advertising and kiosks.
Technology
The Displair device projects still and moving images onto a "screenless" display consisting of cold stable air flow containing particles of water produced by a cavitation method. These particles are small enough not to leave traces of moisture, and their surface tension is high enough to maintain stability after contact with physical objects and wind.
Displair uses third party computerised multi-touch technologies to allow control of images with fingers or with other objects. The display can work with up to 1500 touch points simultaneously with a delay time of less than 0.2 seconds. This makes it possible to allow manipulation by more than one user, and also to identify more complex gestures than similar 3D display systems. The company is working on incorporating flavoring and taste interaction with projected images in the future.
See also
Fog display
Screenless video
Virtual retinal display |
https://en.wikipedia.org/wiki/Sexual%20dysfunction | Sexual dysfunction is difficulty experienced by an individual or partners during any stage of normal sexual activity, including physical pleasure, desire, preference, arousal, or orgasm. The World Health Organization defines sexual dysfunction as a "person's inability to participate in a sexual relationship as they would wish". This definition is broad and is subject to many interpretations. A diagnosis of sexual dysfunction under the DSM-5 requires a person to feel extreme distress and interpersonal strain for a minimum of six months (except for substance- or medication-induced sexual dysfunction). Sexual dysfunction can have a profound impact on an individual's perceived quality of sexual life. The term sexual disorder may not only refer to physical sexual dysfunction, but to paraphilias as well; this is sometimes termed disorder of sexual preference.
A thorough sexual history and assessment of general health and other sexual problems (if any) are important when assessing sexual dysfunction, because it is usually correlated with other psychiatric issues, such as mood disorders, eating and anxiety disorders, and schizophrenia. Assessing performance anxiety, guilt, stress, and worry are integral to the optimal management of sexual dysfunction. Many of the sexual dysfunctions that are defined are based on the human sexual response cycle proposed by William H. Masters and Virginia E. Johnson, and modified by Helen Singer Kaplan.
Types
Sexual dysfunction can be classified into four categories: sexual desire disorders, arousal disorders, orgasm disorders, and pain disorders. Dysfunction among men and women are studied in the fields of andrology and gynecology respectively.
Sexual desire disorders
Sexual desire disorders or decreased libido are characterized by a lack of sexual desire, libido for sexual activity, or sexual fantasies for some time. The condition ranges from a general lack of sexual desire to a lack of sexual desire for the current partner. The conditi |
https://en.wikipedia.org/wiki/Windows%20Server%202016 | Windows Server 2016 is the eighth release of the Windows Server operating system developed by Microsoft as part of the Windows NT family of operating systems. It was developed alongside Windows 10 and is the successor to the Windows 8.1-based Windows Server 2012 R2. The first early preview version (Technical Preview) became available on October 1, 2014 together with the first technical preview of System Center. Windows Server 2016 was released on September 26, 2016 at Microsoft's Ignite conference and broadly released for retail sale on October 12, 2016. It was succeeded by Windows Server 2019 and the Windows Server Semi-Annual Channel.
Features
Windows Server 2016 has a variety of new features, including
Active Directory Federation Services: It is possible to configure AD FS to authenticate users stored in non-AD directories, such as X.500 compliant Lightweight Directory Access Protocol (LDAP) directories and SQL databases.
Windows Defender: Windows Server Antimalware is installed and enabled by default without the GUI; the GUI itself is an installable Windows feature.
Remote Desktop Services: Support for OpenGL 4.4 and OpenCL 1.1, performance and stability improvements; MultiPoint Services role (see Windows MultiPoint Server)
Storage Services: Central Storage QoS Policies; Storage Replicas (storage-agnostic, block-level, volume-based, synchronous and asynchronous replication using SMB3 between servers for disaster recovery). Storage Replica replicates blocks instead of files; files can be in use. It is not multi-master, not one-to-many, and not transitive. It periodically replicates snapshots, and the replication direction can be changed.
Failover Clustering: Cluster operating system rolling upgrade, Storage Replicas
Web Application Proxy: Preauthentication for HTTP Basic application publishing, wildcard domain publishing of applications, HTTP to HTTPS redirection, Propagation of client IP address to backend applications
IIS 10: Support for HTTP/2
Windows PowerShe |
https://en.wikipedia.org/wiki/Operational%20transconductance%20amplifier | The operational transconductance amplifier (OTA) is an amplifier whose differential input voltage produces an output current. Thus, it is a voltage controlled current source (VCCS). There is usually an additional input for a current to control the amplifier's transconductance. The OTA is similar to a standard operational amplifier in that it has a high impedance differential input stage and that it may be used with negative feedback.
The first commercially available integrated circuit units were produced by RCA in 1969 (before being acquired by General Electric) in the form of the CA3080. Although most units are constructed with bipolar transistors, field effect transistor units are also produced. The OTA is not as useful by itself in the vast majority of standard op-amp functions as the ordinary op-amp because its output is a current. One of its principal uses is in implementing electronically controlled applications such as variable frequency oscillators and filters and variable gain amplifier stages which are more difficult to implement with standard op-amps.
Principal differences from standard operational amplifiers
Its output is a current, in contrast to that of a standard operational amplifier, whose output is a voltage.
It is usually used "open-loop", without negative feedback, in linear applications. This is possible because the magnitude of the resistance attached to its output controls its output voltage. Therefore, a resistance can be chosen that keeps the output from going into saturation, even with high differential input voltages.
Basic operation
In the ideal OTA, the output current is a linear function of the differential input voltage, calculated as follows:
$$I_{\mathrm{out}} = (V_{\mathrm{in}+} - V_{\mathrm{in}-}) \cdot g_m$$
where Vin+ is the voltage at the non-inverting input, Vin− is the voltage at the inverting input and gm is the transconductance of the amplifier.
The amplifier's output voltage is the product of its output current and its load resistance:
$$V_{\mathrm{out}} = I_{\mathrm{out}} \cdot R_{\mathrm{load}}$$
The voltage gain is then the output voltage divided b |
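A small numeric sketch of the ideal-OTA relations above; the transconductance, load resistance, and input voltages are assumed example values:

```python
# Ideal-OTA relations from the formulas above, with illustrative values.
g_m = 9.6e-3      # transconductance, siemens (assumed)
r_load = 10e3     # load resistance, ohms (assumed)

def ota_output_current(v_plus: float, v_minus: float) -> float:
    """I_out = g_m * (Vin+ - Vin-)."""
    return g_m * (v_plus - v_minus)

i_out = ota_output_current(0.105, 0.100)  # 5 mV differential input
v_out = i_out * r_load                    # V_out = I_out * R_load
gain = g_m * r_load                       # voltage gain = g_m * R_load
print(i_out, v_out, gain)                 # 4.8e-05 A, 0.48 V, 96.0
```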
https://en.wikipedia.org/wiki/Biomedicine | Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also can relate to many other categories in health and biological related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of HIV, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and |
https://en.wikipedia.org/wiki/Protection%20mechanism | In computer science, protection mechanisms are built into a computer architecture to support the enforcement of security policies. A simple definition of a security policy is "to set who may use what information in a computer system".
The access matrix model, first introduced in 1971, is a generalized description of operating system protection mechanisms.
The separation of protection and security is a special case of the separation of mechanism and policy.
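A minimal sketch of the access matrix idea referenced above: rows are subjects, columns are objects, and entries record permitted rights. All names and rights here are illustrative assumptions:

```python
# Access matrix as a sparse mapping from (subject, object) to a set of rights.
access_matrix = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "file2"): {"read"},
    ("bob",   "file1"): {"read"},
}

def check_access(subject: str, obj: str, right: str) -> bool:
    """Enforce the policy: allow only rights recorded in the matrix."""
    return right in access_matrix.get((subject, obj), set())

print(check_access("alice", "file1", "write"))  # True
print(check_access("bob",   "file1", "write"))  # False
```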
Notes |
https://en.wikipedia.org/wiki/Darwin%27s%20finches | Darwin's finches (also known as the Galápagos finches) are a group of about 18 species of passerine birds. They are well known for their remarkable diversity in beak form and function. They are often classified as the subfamily Geospizinae or tribe Geospizini. They belong to the tanager family and are not closely related to the true finches. The closest known relative of the Galápagos finches is the South American dull-coloured grassquit (Asemospiza obscura). They were first collected when the second voyage of the Beagle visited the Galápagos Islands, with Charles Darwin on board as a gentleman naturalist. Apart from the Cocos finch, which is from Cocos Island, the others are found only on the Galápagos Islands.
The term "Darwin's finches" was first applied by Percy Lowe in 1936, and popularised in 1947 by David Lack in his book Darwin's Finches. Lack based his analysis on the large collection of museum specimens collected by the 1905–06 Galápagos expedition of the California Academy of Sciences, to whom Lack dedicated his 1947 book. The birds vary in size from and weigh between . The smallest are the warbler-finches and the largest is the vegetarian finch. The most important differences between species are in the size and shape of their beaks, which are highly adapted to different food sources. The birds are all dull-coloured. They are thought to have evolved from a single finch species that came to the islands more than a million years ago.
Darwin's theory
During the survey voyage of HMS Beagle, Darwin was unaware of the significance of the birds of the Galápagos. He had learned how to preserve bird specimens from John Edmonstone while at the University of Edinburgh and had been keen on shooting, but he had no expertise in ornithology and by this stage of the voyage concentrated mainly on geology. In Galápagos he mostly left bird shooting to his servant Syms Covington. Nonetheless, these birds were to play an important part in the inception of Darwin's theory |
https://en.wikipedia.org/wiki/Compliance%20constants | Compliance constants are the elements of an inverted Hessian matrix. The calculation of compliance constants provides an alternative description of chemical bonds in comparison with the widely used force constants, explicitly ruling out the dependency on the coordinate system. They provide the unique description of the mechanical strength for covalent and non-covalent bonding. While force constants (as energy second derivatives) are usually given in aJ/Å² or N/cm, compliance constants are given in Å²/aJ or Å/mdyn.
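A minimal numerical sketch of the definition above: invert a force-constant (Hessian) matrix in internal coordinates and read off the diagonal compliance constants. The 2x2 matrix is made up for illustration, not data for any real molecule:

```python
import numpy as np

# Compliance matrix C as the inverse of a Hessian (force-constant) matrix H.
H = np.array([[5.0, 0.4],    # aJ/Angstrom^2 (illustrative values)
              [0.4, 6.2]])
C = np.linalg.inv(H)         # Angstrom^2/aJ
print(np.diag(C))            # diagonal entries are the compliance constants;
                             # larger compliance = mechanically weaker bond
```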
History
Recent publications reporting the detection or isolation of novel compounds with intriguing bonding characters, breaking through walls of putative chemical understanding, can still be provocative at times. The stir in such discoveries arose partly from the lack of a universally accepted bond descriptor. While bond dissociation energies (BDE) and rigid force constants have generally been regarded as primary tools for such interpretation, they are prone to flawed definitions of chemical bonds in certain scenarios, whether simple or controversial.
Such reasons prompted the necessity to seek an alternative approach to describe covalent and non-covalent interactions more rigorously. Jörg Grunenberg, a German chemist at the TU Braunschweig, and his Ph.D. student at the time, Kai Brandhorst, developed a program COMPLIANCE (freely available to the public), which harnesses compliance constants for tackling the aforementioned tasks. The authors utilize an inverted matrix of force constants, i.e., the inverted Hessian matrix, originally introduced by W. T. Taylor and K. S. Pitzer. The insight in choosing the inverted matrix comes from the realization that not all elements in the Hessian matrix are necessary—and thus redundant—for describing covalent and non-covalent interactions. Such redundancy is common for many molecules, and more importantly, it ushers in the dependence of the elements of the Hessian matrix on the choice of coordinate system. Therefore, the author claimed that |
https://en.wikipedia.org/wiki/Phosphate%20permease | Phosphate permeases are membrane transport proteins that facilitate the diffusion of phosphate into and out of a cell or organelle. Some of these families include:
TC# 2.A.1.4 - Organophosphate:Pi Antiporter (OPA) Family, (i.e., Pho-84 of Neurospora crassa; TC# 2.A.1.9.2)
TC# 2.A.20 - Inorganic Phosphate Transporter (PiT) Family
TC# 2.A.47.2 - Phosphate porters of the Divalent Anion:Na+ Symporter (DASS) Family, includes Pho87/90/91
TC# 2.A.58 - Phosphate:Na+ Symporter (PNaS) Family
TC# 2.A.94 - Phosphate Permease (Pho1) Family
See also
Major facilitator superfamily
Ion transporter superfamily
Phosphotransferase
Inorganic phosphate
permeases
Transporter Classification Database
TC# 3.A.10 - H+, Na+-translocating Pyrophosphatase (M+-PPase) Family
TC# 4.E.1 - Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family
Further reading
EMBL-EBI, InterPro. "Phosphate permease (IPR004738) < InterPro < EMBL-EBI". www.ebi.ac.uk. Retrieved 2016-03-03.
"pho-4 - Phosphate-repressible phosphate permease pho-4 - Neurospora crassa (strain ATCC 24698 / 74-OR23-1A / CBS 708.71 / DSM 1257 / FGSC 987) - pho-4 gene & protein". www.uniprot.org. Retrieved 2016-03-03.
Versaw, W. K. (1995-02-03). "A phosphate-repressible, high-affinity phosphate permease is encoded by the pho-5+ gene of Neurospora crassa". Gene 153 (1): 135–139. ISSN 0378-1119. PMID 7883177.
Ramaiah, Madhuvanthi; Jain, Ajay; Baldwin, James C.; Karthikeyan, Athikkattuvalasu S.; Raghothama, Kashchandra G. (2011-09-01). "Characterization of the phosphate starvation-induced glycerol-3-phosphate permease gene family in Arabidopsis". Plant Physiology 157 (1): 279–291. doi:10.1104/pp.111.178541. ISSN 1532-2548. PMC 3165876. PMID 21788361.
Stakheev, A. A.; Khairulina, D. R.; Ryazantsev, D. Yu; Zavriev, S. K. (2013-03-22). "Phosphate permease gene as a marker for the species-specific identification of the toxigenic fungus Fusarium cerealis". Russian Journal of Bioorganic Chemistry 39 (2): 153–16 |
https://en.wikipedia.org/wiki/Vehicle%20audio | Vehicle audio is equipment installed in a car or other vehicle to provide in-car entertainment and information for the vehicle occupants. Until the 1950s it consisted of a simple AM radio. Additions since then have included FM radio (1952), 8-track tape players, cassette players, record players, CD players, DVD players, Blu-ray players, navigation systems, Bluetooth telephone integration, and smartphone controllers like CarPlay and Android Auto. Once controlled from the dashboard with a few buttons, they can now be controlled by steering wheel controls and voice commands.
Initially implemented for listening to music and radio, vehicle audio is now part of car telematics, telecommunication, in-vehicle security, handsfree calling, navigation, and remote diagnostics systems. The same loudspeakers may also be used to minimize road and engine noise with active noise control, or they may be used to augment engine sounds, for instance making a smaller engine sound bigger.
History
Radio
In 1904, well before commercially viable technology for mobile radio was in place, American inventor and self-described "Father of Radio" Lee de Forest demonstrated a car radio at the 1904 Louisiana Purchase Exposition in St. Louis.
Around 1920, vacuum tube technology had matured to the point where the availability of radio receivers made radio broadcasting viable. A technical challenge was that the vacuum tubes in the radio receivers required 50 to 250 volt direct current, but car batteries ran at 6V. Voltage was stepped up with a vibrator that provided a pulsating DC which could be converted to a higher voltage with a transformer, rectified, and filtered to create higher-voltage DC.
In 1924, Kelly's Motors in NSW, Australia, installed its first car radio.
In 1930, the American Galvin Manufacturing Corporation marketed a Motorola-branded radio receiver for $130. It was expensive: the contemporary Ford Model A cost $540. A Plymouth sedan, "wired for Philco Transitone radio without ext |
https://en.wikipedia.org/wiki/Julia%20set | In the context of complex dynamics, a branch of mathematics, the Julia set and the Fatou set are two complementary sets (Julia "laces" and Fatou "dusts") defined from a function. Informally, the Fatou set of the function consists of values with the property that all nearby values behave similarly under repeated iteration of the function, and the Julia set consists of values such that an arbitrarily small perturbation can cause drastic changes in the sequence of iterated function values.
Thus the behavior of the function on the Fatou set is "regular", while on the Julia set its behavior is "chaotic".
The Julia set of a function $f$ is commonly denoted $\operatorname{J}(f)$, and the Fatou set is denoted $\operatorname{F}(f)$. These sets are named after the French mathematicians Gaston Julia and Pierre Fatou, whose work began the study of complex dynamics during the early 20th century.
Formal definition
Let $f(z)$ be a non-constant holomorphic function from the Riemann sphere onto itself. Such functions are precisely the non-constant complex rational functions, that is, $f(z) = p(z)/q(z)$, where $p(z)$ and $q(z)$ are complex polynomials. Assume that p and q have no common roots, and at least one has degree larger than 1. Then there is a finite number of open sets $F_1, \dots, F_r$ that are left invariant by $f(z)$ and are such that:
The union of the sets $F_i$ is dense in the plane and
$f(z)$ behaves in a regular and equal way on each of the sets $F_i$.
The last statement means that the termini of the sequences of iterations generated by the points of $F_i$ are either precisely the same set, which is then a finite cycle, or they are finite cycles of circular or annular shaped sets that are lying concentrically. In the first case the cycle is attracting, in the second case it is neutral.
These sets are the Fatou domains of $f(z)$, and their union is the Fatou set $\operatorname{F}(f)$ of $f(z)$. Each of the Fatou domains contains at least one critical point of $f(z)$, that is, a (finite) point z satisfying $f'(z) = 0$, or $z = \infty$ if the degree of the numerator $p(z)$ is at least two larger than the degree of the denominator $q(z)$, or if $f(z) = 1/g(1/z) + c$ for some c and |
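A minimal escape-time sketch for the classic quadratic family $f(z) = z^2 + c$; the grid size, iteration limit, and the particular value of c are arbitrary illustrative choices:

```python
import numpy as np

# Escape-time approximation of the filled Julia set for f(z) = z**2 + c.
c = -0.8 + 0.156j
n, max_iter, escape_radius = 400, 100, 2.0

y, x = np.ogrid[-1.5:1.5:n*1j, -1.5:1.5:n*1j]
z = x + 1j * y                            # grid of starting points
counts = np.zeros(z.shape, dtype=int)
for _ in range(max_iter):
    mask = np.abs(z) <= escape_radius     # points that have not yet escaped
    z[mask] = z[mask] ** 2 + c
    counts[mask] += 1
# Points with counts == max_iter approximate the filled Julia set; the Julia
# set itself is its boundary, where the iteration behavior is chaotic.
```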
https://en.wikipedia.org/wiki/Battery%20pack | A battery pack is a set of any number of (preferably) identical batteries or individual battery cells. They may be configured in series, parallel, or a mixture of both to deliver the desired voltage, capacity, or power density. The term battery pack is often used in reference to cordless tools, radio-controlled hobby toys, and battery electric vehicles.
Components of battery packs include the individual batteries or cells, and the interconnects which provide electrical conductivity between them. Rechargeable battery packs often contain a temperature sensor, which the battery charger uses to detect the end of charging. Interconnects are also found in batteries as they are the part which connects each cell, though batteries are most often only arranged in series strings.
When a pack contains groups of cells in parallel there are differing wiring configurations which take into consideration the electrical balance of the circuit. Battery regulators are sometimes used to keep the voltage of each individual cell below its maximum value during charging so as to allow the weaker batteries to become fully charged, bringing the whole pack back into balance. Active balancing can also be performed by battery balancer devices which can shuttle energy from strong cells to weaker ones in real time for better balance. A well-balanced pack lasts longer and delivers better performance.
For an inline package, cells are selected and stacked with solder in between them. The cells are pressed together and a current pulse generates heat to solder them together and to weld all connections internal to the cell.
Calculating state of charge
SOC, or state of charge, is the equivalent of a fuel gauge for a battery. SOC cannot be determined by a simple voltage measurement, because the terminal voltage of a battery may stay substantially constant until it is completely discharged. In some types of battery, electrolyte specific gravity may be related to state of charge but this is not m |
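Since a simple voltage reading cannot give SOC, as noted above, one standard alternative (not described in this excerpt) is coulomb counting: integrating the current flowing in and out of the pack over time. A minimal sketch with assumed capacity and current samples:

```python
# Coulomb-counting SOC estimator. Capacity and current samples are
# illustrative assumptions.
CAPACITY_AH = 2.5

def update_soc(soc: float, current_a: float, dt_h: float) -> float:
    """current_a > 0 discharges the pack; dt_h is the time step in hours."""
    soc -= (current_a * dt_h) / CAPACITY_AH
    return min(max(soc, 0.0), 1.0)

soc = 1.0  # start fully charged
for current in [1.0, 1.0, 0.5, -0.8]:   # amps; negative = charging
    soc = update_soc(soc, current, dt_h=0.25)
print(soc)  # 0.83 after the four 15-minute intervals
```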
https://en.wikipedia.org/wiki/Dehesa | A dehesa () is a multifunctional, agrosylvopastoral system (a type of agroforestry) and cultural landscape of southern and central Spain and southern Portugal; in Portugal, it is known as a montado. Its name comes from the Latin defensa (fenced), referring to land that was fenced and usually destined for pasture. Dehesas may be private or communal property (usually belonging to the municipality). Used primarily for grazing, they produce a variety of products, including non-timber forest products such as wild game, mushrooms, honey, cork, and firewood. They are also used to raise the Spanish fighting bull and the source of jamón ibérico, the Iberian pig. The main tree component is oaks, usually holm (Quercus rotundifolia) and cork (Quercus suber). Other oaks, including melojo (Quercus pyrenaica) and quejigo (Quercus faginea), may be used to form dehesa, the species utilized depending on geographical location and elevation. Dehesa is an anthropogenic system that provides not only a variety of foods, but also wildlife habitat for endangered species such as the Spanish imperial eagle.
By extension, the term can also be used for this style of rangeland management on estates.
Ecology
The dehesa is derived from the Mediterranean forest ecosystem, consisting of grassland featuring herbaceous species, used for grazing cattle, goats, and sheep, and tree species belonging to the genus Quercus (oak), such as the holm oak (Quercus rotundifolia), although other tree species such as beech and pine trees may also be present. Oaks are protected and pruned to produce acorns, which the famous black Iberian pigs feed on in the fall during the montanera. Ham produced from Iberian pigs fattened with acorns and air-dried at high elevations is known as Jamón ibérico ("presunto ibérico", or "pata negra" in Portuguese), and sells for premium prices, especially if only acorns have been used for fattening.
In a typical dehesa, oaks are managed to persist for about 250 years. If cork oaks a |
https://en.wikipedia.org/wiki/QR%20algorithm | In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
The practical QR algorithm
Formally, let $A$ be a real matrix of which we want to compute the eigenvalues, and let $A_0 := A$. At the $k$-th step (starting with $k = 0$), we compute the QR decomposition $A_k = Q_k R_k$, where $Q_k$ is an orthogonal matrix (i.e., $Q_k^{\mathsf{T}} = Q_k^{-1}$) and $R_k$ is an upper triangular matrix. We then form $A_{k+1} = R_k Q_k$. Note that
$$A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^{\mathsf{T}} A_k Q_k,$$
so all the $A_k$ are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
Under certain conditions, the matrices Ak converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error.
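The bare, unshifted iteration just described is easy to write down directly; a practical implementation would first reduce the matrix to Hessenberg form and use shifts, as discussed below. The test matrix here is an arbitrary example:

```python
import numpy as np

# Unshifted QR iteration: factor A_k = Q_k R_k, then form A_{k+1} = R_k Q_k.
def qr_iteration(A: np.ndarray, steps: int = 200) -> np.ndarray:
    Ak = A.astype(float).copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q            # similar to A_k, hence same eigenvalues
    return Ak

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric example matrix
print(np.diag(qr_iteration(A)))          # approx [3.618, 1.382]
print(np.linalg.eigvals(A))              # cross-check against LAPACK
```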
In this crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix to upper Hessenberg form (which costs $O(n^3)$ arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs $O(n^2)$ arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below each diagonal), using it as a starting poin |
https://en.wikipedia.org/wiki/E.%20T.%20Whittaker | Sir Edmund Taylor Whittaker (24 October 1873 – 24 March 1956) was a British mathematician, physicist, and historian of science. Whittaker was a leading mathematical scholar of the early 20th century who contributed widely to applied mathematics and was renowned for his research in mathematical physics and numerical analysis, including the theory of special functions, along with his contributions to astronomy, celestial mechanics, the history of physics, and digital signal processing.
Among the most influential publications in Whittaker's bibliography, he authored several popular reference works in mathematics, physics, and the history of science, including A Course of Modern Analysis (better known as Whittaker and Watson), Analytical Dynamics of Particles and Rigid Bodies, and A History of the Theories of Aether and Electricity. Whittaker is also remembered for his role in the relativity priority dispute, as he credited Henri Poincaré and Hendrik Lorentz with developing special relativity in the second volume of his History, a dispute which has lasted several decades, though scientific consensus has remained with Einstein. Whittaker served as the Royal Astronomer of Ireland early in his career, a position he held from 1906 through 1912, before moving on to the chair of mathematics at the University of Edinburgh for the next three decades and, towards the end of his career, received the Copley Medal and was knighted. The School of Mathematics of the University of Edinburgh holds The Whittaker Colloquium, a yearly lecture, in his honour and the Edinburgh Mathematical Society promotes an outstanding young Scottish mathematician once every four years with the Sir Edmund Whittaker Memorial Prize, also given in his honour.
Life
Early life and education
Edmund Taylor Whittaker was born in Southport, in Lancashire, the son of Selina Septima (née Taylor) and John Whittaker. He was described as an "extremely delicate child", necessitating his mother to home school him unt |
https://en.wikipedia.org/wiki/Replisome | The replisome is a complex molecular machine that carries out replication of DNA. The replisome first unwinds double stranded DNA into two single strands. For each of the resulting single strands, a new complementary sequence of DNA is synthesized. The total result is formation of two new double stranded DNA sequences that are exact copies of the original double stranded DNA sequence.
In terms of structure, the replisome is composed of two replicative polymerase complexes, one of which synthesizes the leading strand, while the other synthesizes the lagging strand. The replisome is composed of a number of proteins including helicase, RFC, PCNA, gyrase/topoisomerase, SSB/RPA, primase, DNA polymerase III, RNAse H, and ligase.
Overview of prokaryotic DNA replication process
For prokaryotes, each dividing nucleoid (region containing genetic material which is not a nucleus) requires two replisomes for bidirectional replication. The two replisomes remain at a fixed, midcell location in the cell, attached to the membrane, and the template DNA is fed through the stationary pair; replication continues at both forks until the termination site is replicated, at which point the two replisomes separate from the DNA.
Overview of eukaryotic DNA replication process
For eukaryotes, numerous replication bubbles form at origins of replication throughout the chromosome. As with prokaryotes, two replisomes are required, one at each replication fork located at the terminus of the replication bubble. Because of significant differences in chromosome size, and the associated complexities of highly condensed chromosomes, various aspects of the DNA replication process in eukaryotes, including the terminal phases, are less well-characterised than for prokaryotes.
Challenges of DNA replication
The replisome is a system in which various factors work together to solve the structural and che |
https://en.wikipedia.org/wiki/Fundamental%20lemma%20of%20sieve%20theory | In number theory, the fundamental lemma of sieve theory is any of several results that systematize the process of applying sieve methods to particular problems. Diamond & Halberstam attribute the terminology Fundamental Lemma to Jonas Kubilius.
Common notation
We use these notations:
$\mathcal{A}$ is a set of $X$ positive integers, and $\mathcal{A}_d$ is its subset of integers divisible by $d$
$w(d)$ and $R_d$ are functions of $\mathcal{A}$ and of $d$ that estimate the number of elements of $\mathcal{A}$ that are divisible by $d$, according to the formula
$$\left| \mathcal{A}_d \right| = \frac{w(d)}{d} X + R_d .$$
Thus $w(d)/d$ represents an approximate density of members divisible by $d$, and $R_d$ represents an error or remainder term.
$P$ is a set of primes, and $P(z)$ is the product of those primes that are $\le z$
$S(\mathcal{A}, P, z)$ is the number of elements of $\mathcal{A}$ not divisible by any prime in $P$ that is $\le z$
$\kappa$ is a constant, called the sifting density, that appears in the assumptions below. It is a weighted average of the number of residue classes sieved out by each prime.
Fundamental lemma of the combinatorial sieve
This formulation is from Tenenbaum. Other formulations are in Halberstam & Richert, in Greaves, and in Friedlander & Iwaniec.
We make the assumptions:
$w$ is a multiplicative function.
The sifting density $\kappa$ satisfies, for some constant $C$ and any real numbers $\eta$ and $\xi$ with $2 \le \eta \le \xi$:
$$\prod_{\eta \le p \le \xi,\; p \in P} \left(1 - \frac{w(p)}{p}\right)^{-1} < \left(\frac{\ln \xi}{\ln \eta}\right)^{\kappa} \left(1 + \frac{C}{\ln \eta}\right).$$
There is a parameter $u \ge 1$ that is at our disposal. We have uniformly in $\mathcal{A}$, $X$, $z$, and $u$ that
$$S(\mathcal{A}, P, z) = X \prod_{p \le z,\; p \in P} \left(1 - \frac{w(p)}{p}\right) \left\{ 1 + O\!\left(u^{-u/2}\right) \right\} + O\!\left( \sum_{d \le z^u,\; d \mid P(z)} |R_d| \right).$$
In applications we pick $u$ to get the best error term. In the sieve it is related to the number of levels of the inclusion–exclusion principle.
Fundamental lemma of the Selberg sieve
This formulation is from Halberstam & Richert. Another formulation is in Diamond & Halberstam.
We make the assumptions:
$w$ is a multiplicative function.
The sifting density $\kappa$ satisfies, for some constant $C$ and any real numbers $\eta$ and $\xi$ with $2 \le \eta \le \xi$:
$$\prod_{\eta \le p \le \xi,\; p \in P} \left(1 - \frac{w(p)}{p}\right)^{-1} < \left(\frac{\ln \xi}{\ln \eta}\right)^{\kappa} \left(1 + \frac{C}{\ln \eta}\right).$$
$w(p)/p \le 1 - c$ for some small fixed $c > 0$ and all $p$ in $P$.
$|R_d| \le w(d)$ for all squarefree $d$ whose prime factors are in $P$.
The fundamental lemma has almost the same form as for the combinatorial sieve. Write $u = \ln X / \ln z$. The conclusion is:
$$S(\mathcal{A}, P, z) = X \prod_{p \le z,\; p \in P} \left(1 - \frac{w(p)}{p}\right) \left\{ 1 + O\!\left(e^{-u/2}\right) \right\}.$$
Note that $u$ is no longer an independent parameter at our |
https://en.wikipedia.org/wiki/Courant%E2%80%93Snyder%20parameters | In accelerator physics, the Courant–Snyder parameters (frequently referred to as Twiss parameters or CS parameters) are a set of quantities used to describe the distribution of positions and velocities of the particles in a beam. When the positions along a single dimension and velocities (or momenta) along that dimension of every particle in a beam are plotted on a phase space diagram, an ellipse enclosing the particles can be given by the equation:
$$\gamma x^2 + 2 \alpha x x' + \beta x'^2 = \epsilon,$$
where $x$ is the position axis and $x'$ is the velocity axis. In this formulation, $\alpha$, $\beta$, and $\gamma$ are the Courant–Snyder parameters for the beam along the given axis, and $\epsilon$ is the emittance. Three sets of parameters can be calculated for a beam, one for each orthogonal direction, x, y, and z.
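For concreteness, the parameters can be estimated from a sampled particle distribution using the standard RMS (second-moment) statistical definitions; the following Python sketch assumes those definitions, and the function and variable names are ours:

import numpy as np

def twiss_parameters(x, xp):
    """Estimate emittance and Courant–Snyder parameters from particle
    coordinates via second central moments (the RMS definitions):
      eps = sqrt(<x^2><x'^2> - <x x'>^2),
      beta = <x^2>/eps, alpha = -<x x'>/eps, gamma = <x'^2>/eps,
    so that beta*gamma - alpha^2 = 1."""
    x = x - np.mean(x)
    xp = xp - np.mean(xp)
    sxx, sxpxp, sxxp = np.mean(x * x), np.mean(xp * xp), np.mean(x * xp)
    eps = np.sqrt(sxx * sxpxp - sxxp ** 2)
    return eps, sxx / eps, -sxxp / eps, sxpxp / eps  # eps, beta, alpha, gamma

rng = np.random.default_rng(0)
x = rng.normal(size=10000)
xp = 0.5 * x + rng.normal(size=10000)    # correlated, so alpha is nonzero
eps, beta, alpha, gamma = twiss_parameters(x, xp)
print(beta * gamma - alpha ** 2)         # ~1, by construction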
History
The use of these parameters to describe the phase space properties of particle beams was popularized in the accelerator physics community by Ernest Courant and Hartland Snyder in their 1953 paper, "Theory of the Alternating-Gradient Synchrotron". They are also widely referred to in accelerator physics literature as "Twiss parameters" after British astronomer Richard Q. Twiss, although it is unclear how his name became associated with the formulation.
Phase space area description
When simulating the motion of particles through an accelerator or beam transport line, it is often desirable to describe the overall properties of an ensemble of particles, rather than track the motion of each particle individually. By Liouville's Theorem it can be shown that the density occupied on a position and momentum phase space plot is constant when the beam is only affected by conservative forces. The area occupied by the beam on this plot is known as the beam emittance, although there are a number of competing definitions for the exact mathematical definition of this property.
Coordinates
In accelerator physics, coordinate positions are usually defined with respect to an idealized reference particle, which follows the ideal design trajectory fo |
https://en.wikipedia.org/wiki/Matrox | Matrox Graphics, Inc. is a producer of video card components and equipment for personal computers and workstations. Based in Dorval, Quebec, Canada, it was founded in 1976 by Lorne Trottier and Branko Matić. The name is derived from "Ma" in Matić and "Tro" in Trottier.
Company
Matrox Graphics, Inc., the entity most recognized by the public, which has been making graphics cards since 1978.
Matrox Video Products Group, which produces video-editing products for professional video production and broadcast markets. A division of Matrox Graphics, Inc.
Former divisions
Matrox Electronic Systems Ltd., the former parent company. Sold to Zebra Technologies as part of the divestiture of Matrox Imaging on June 6, 2022 and succeeded by Matrox Graphics, Inc.
Matrox Imaging, which produces frame grabbers, smart cameras and image processing/analysis software.
Matrox Networks, which produced corporate-grade networking equipment. Date of closure unknown.
History
Matrox's first graphics card product was the ALT-256 for S-100 bus computers, released in 1978. The ALT-256 produced a 256 by 256 pixel monochrome display using an 8 kilobyte (64 kilobit) frame buffer consisting of 16 TMS4027 DRAM chips (4 kilobits each). An expanded version followed, the ALT-512, both available for Intel SBC bus machines as well. Through the 1980s, Matrox's cards followed changes in the hardware side of the market, to Multibus and then the variety of PC standards.
During the 1990s, the Matrox Millennium series of cards attracted buyers willing to pay for a higher quality and sharper display. In 1994, Matrox introduced the Matrox Impression, an add-on card that worked in conjunction with a Millennium card to provide 3D acceleration. The Impression was aimed primarily at the CAD market. A later version of the Millennium included features similar to the Impression but by this time the series was lagging behind emerging vendors like 3dfx Interactive.
Matrox made several attempts to increase its share |
https://en.wikipedia.org/wiki/Mario%20Livio | Mario Livio (born June 19, 1945) is an Israeli-American astrophysicist and an author of works that popularize science and mathematics. For 24 years (1991–2015) he was an astrophysicist at the Space Telescope Science Institute, which operates the Hubble Space Telescope. He has published more than 400 scientific articles on topics including cosmology, supernova explosions, black holes, extrasolar planets, and the emergence of life in the universe. His book on the irrational number phi, The Golden Ratio: The Story of Phi, the World's Most Astonishing Number (2002), won the Peano Prize and the International Pythagoras Prize for popular books on mathematics.
Scientific career
Livio earned a Bachelor of Science degree in physics and mathematics at the Hebrew University of Jerusalem, a Master of Science degree in theoretical particle physics at the Weizmann Institute, and a Ph.D. in theoretical astrophysics at Tel Aviv University. He was a professor of physics at the Technion – Israel Institute of Technology from 1981 to 1991, before moving to the Space Telescope Science Institute.
Livio has focused much of his research on supernova explosions and their use in determining the rate of expansion of the universe. He has also studied so-called dark energy, black holes, and the formation of planetary systems around young stars. He has contributed to hundreds of papers in peer-reviewed journals on astrophysics. Among his prominent contributions, he has authored and co-authored important papers on topics related to accretion onto compact objects (white dwarfs, neutron stars, and black holes). In 1980, he published one of the very first multi-dimensional numerical simulations of the collapse of a massive star and a supernova explosion. He was one of the pioneers in the study of common envelope evolution of binary stars, and he applied the results to the shaping of planetary nebulae as well as to the progenitors of Type Ia supernovae. Together with D. Eichler, T. Piran, and D. S |
https://en.wikipedia.org/wiki/LINE1 | LINE1 (also L1 and LINE-1) is a family of related class I transposable elements in the DNA of some organisms, classified with the long interspersed nuclear elements (LINEs). L1 transposons comprise approximately 17% of the human genome. These active L1s can interrupt the genome through insertions, deletions, rearrangements, and copy number variations. L1 activity has contributed to the instability and evolution of genomes and is tightly regulated in the germline by DNA methylation, histone modifications, and piRNA. L1s can further impact genome variation through mispairing and unequal crossing over during meiosis due to its repetitive DNA sequences.
L1 gene products are also required by many non-autonomous Alu and SVA SINE retrotransposons. Mutations induced by L1 and its non-autonomous counterparts have been found to cause a variety of heritable and somatic diseases.
In 2011, human L1 was reportedly discovered in the genome of the bacterium that causes gonorrhea, evidently having arrived there by horizontal gene transfer.
Structure
A typical L1 element is approximately 6,000 base pairs (bp) long and consists of two non-overlapping open reading frames (ORFs) which are flanked by untranslated regions (UTRs) and target site duplications. In humans, ORF2 is thought to be translated by an unconventional termination/reinitiation mechanism, while mouse L1s contain an internal ribosome entry site (IRES) upstream of each ORF.
5' UTR
The 5' UTRs of mouse L1s contain a variable number of GC-rich tandemly repeated monomers of around 200 bp, followed by a short non-monomeric region. Human 5’ UTRs are ~900 bp in length and do not contain repeated motifs. All families of human L1s harbor in their most 5’ extremity a binding motif for the transcription factor YY1. Younger families also have two binding sites for SOX-family transcription factors, and both YY1 and SOX sites were shown to be required for human L1 transcription initiation and activation. Both mouse and human 5’ UTRs also con |
https://en.wikipedia.org/wiki/Bioelectronics | Bioelectronics is a field of research in the convergence of biology and electronics.
Definitions
At the first C.E.C. Workshop, in Brussels in November 1991, bioelectronics was defined as 'the use of biological materials and biological architectures for information processing systems and new devices'. Bioelectronics, specifically bio-molecular electronics, were described as 'the research and development of bio-inspired (i.e. self-assembly) inorganic and organic materials and of bio-inspired (i.e. massive parallelism) hardware architectures for the implementation of new information processing systems, sensors and actuators, and for molecular manufacturing down to the atomic scale'.
The National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, defined bioelectronics in a 2009 report as "the discipline resulting from the convergence of biology and electronics".
Sources for information about the field include the Institute of Electrical and Electronics Engineers (IEEE) with its Elsevier journal Biosensors and Bioelectronics, published since 1990. The journal describes the scope of bioelectronics as seeking to: "... exploit biology in conjunction with electronics in a wider context encompassing, for example, biological fuel cells, bionics and biomaterials for information processing, information storage, electronic components and actuators. A key aspect is the interface between biological materials and micro and nano-electronics."
History
The first known study of bioelectronics took place in the 18th century when scientist Luigi Galvani applied a voltage to a pair of detached frog legs. The legs moved, sparking the genesis of bioelectronics. Electronics technology has been applied to biology and medicine since the pacemaker was invented and with the medical imaging industry. In 2009, a survey of publications using the term in title or abstract suggested that the center of activity was in Europe (43 percent), followed |
https://en.wikipedia.org/wiki/Arbroath%20smokie | The Arbroath smokie is a type of smoked haddock, and is a speciality of the town of Arbroath in Angus, Scotland.
History
The Arbroath smokie is said to have originated in the small fishing village of Auchmithie, three miles northeast of Arbroath. Local legend has it that a store caught fire one night, and barrels of haddock preserved in salt were caught in the blaze. The following morning, the people found that the fire had cooked the haddock inside the barrels, and inspection revealed it to be quite tasty. It is much more likely that the smokie-making process was brought by villagers of Scandinavian descent, as it is similar to smoking methods still employed in areas of Scandinavia.
Towards the end of the 19th century, as Arbroath's fishing industry died, the Town Council offered the fisherfolk from Auchmithie land in an area of the town known as the fit o' the toon. It also offered them use of the modern harbour. Much of the Auchmithie population then relocated, bringing the Arbroath Smokie recipe with them. Today, 15 local businesses produce Arbroath smokies, selling them in major supermarkets in the UK and online.
In 2004, the European Commission registered the designation "Arbroath smokies" as a Protected Geographical Indication under the EU's Protected Food Name Scheme, acknowledging its unique status.
Preparation
Arbroath smokies are prepared using traditional methods dating back to the late 1800s.
The fish are first salted overnight. They are then tied in pairs using hemp twine, and left overnight to dry. Once they have been salted, tied and dried, they are hung over a triangular length of wood to smoke. This "kiln stick" fits between the two tied smokies, one fish on either side. The sticks are then used to hang the dried fish in a special barrel containing a hardwood fire.
When the fish are hung over the fire, the top of the barrel is covered with a lid and sealed around the edges with wet jute sacks (the water prevents the jute sacks from catching fire). All |
https://en.wikipedia.org/wiki/Boaz%20Barak | Boaz Barak (בועז ברק, born 1974) is an Israeli-American professor of computer science at Harvard University.
Early life and education
He graduated in 1999 with a B.Sc. in mathematics and computer science from Tel Aviv University. In 2004, he received his Ph.D. from the Weizmann Institute of Science with thesis Non-Black-Box Techniques in Cryptography under the supervision of Oded Goldreich. Barak was at the Institute for Advanced Study for two years from 2003 to 2005. He was an assistant professor in the computer science department of Princeton University from 2005 to 2010 and an associate professor from 2010 to 2011. From 2010 to 2016, he was a researcher at Microsoft's New England research laboratory. Since 2016, he has been the Gordon McKay Professor of Computer Science in the Harvard John A. Paulson School of Engineering and Applied Sciences. He is a citizen of both Israel and the United States.
Career
He co-authored, with Sanjeev Arora, Computational Complexity: A Modern Approach, published by Cambridge University Press in 2009. Barak also wrote extensive notes with David Steurer on the sum of squares algorithm and occasionally blogs on the Windows on Theory blog. In 2013, he, Robert J. Goldston, and Alexander Glaser worked to design a "zero-knowledge" system to verify that warheads designated for disarmament are actually what they purport to be. By directing high-energy neutrons into the warhead under investigation, and comparing the distribution passing through to the distribution that passed through a known warhead, inspectors can determine whether a warhead being disarmed is genuine or a ruse designed to evade treaty requirements, without leaking nuclear secrets. For this work, he was selected for Foreign Policy's Top 100 Global Thinkers issue for 2014.
In 2014 Barak was an invited speaker at the International Congress of Mathematics at Seoul. With Mark Braverman, Xi Chen, and Anup Rao, he won the 2016 SIAM Outstanding Paper Prize for the paper “How to Compr |
https://en.wikipedia.org/wiki/Propagation%20of%20uncertainty | In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function.
The uncertainty u can be expressed in a number of ways.
It may be defined by the absolute error $\Delta x$. Uncertainties can also be defined by the relative error $\Delta x / x$, which is usually written as a percentage.
Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, $\sigma$, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval $x \pm \sigma$.
However, the most general way of characterizing uncertainty is by specifying its probability distribution.
If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation $\sigma$ from the central value $x$, which means that the region $x \pm \sigma$ will cover the true value in roughly 68% of cases.
If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.
In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and infer resulting quantity probability distribution/statistics, a |
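As a concrete first-order illustration of the linear approach: the variance of a differentiable function $f$ of uncertain inputs is approximately $g^T V g$, where $g$ is the gradient of $f$ at the central values and $V$ is the covariance matrix of the inputs. The following Python sketch (function and variable names ours) applies this standard linear approximation:

import numpy as np

def propagate(grad, cov):
    """First-order propagation: for f(x) with gradient g at the central
    values and covariance matrix V of the inputs, Var(f) ~ g^T V g."""
    g = np.asarray(grad, dtype=float)
    V = np.asarray(cov, dtype=float)
    return float(g @ V @ g)

# Example: f = a*b with a = 2 +/- 0.1, b = 3 +/- 0.2, uncorrelated inputs.
a, b, sa, sb = 2.0, 3.0, 0.1, 0.2
var_f = propagate([b, a], [[sa**2, 0.0], [0.0, sb**2]])  # df/da = b, df/db = a
print(np.sqrt(var_f))  # 0.5 = sqrt((3*0.1)^2 + (2*0.2)^2)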
https://en.wikipedia.org/wiki/Sage%20Manifolds | SageManifolds (following styling of SageMath) is an extension fully integrated into SageMath, to be used as a package for differential geometry and tensor calculus. The official page for the project is sagemanifolds.obspm.fr. It can be used on CoCalc.
SageManifolds deals with differentiable manifolds of arbitrary dimension. The basic objects are tensor fields and not tensor components in a given vector frame or coordinate chart. In other words, various charts and frames can be introduced on the manifold and a given tensor field can have representations in each of them.
An important class of treated manifolds is that of pseudo-Riemannian manifolds, among which Riemannian manifolds and Lorentzian manifolds, with applications to General Relativity. In particular, SageManifolds implements the computation of the Riemann curvature tensor and associated objects (Ricci tensor, Weyl tensor). SageManifolds can also deal with generic affine connections, not necessarily Levi-Civita ones.
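As a minimal sketch of this kind of computation (to be run inside Sage; the example and names are ours, not taken from the project documentation), one can equip the 2-sphere with the round metric and ask for its curvature:

M = Manifold(2, 'S^2', structure='Riemannian')      # smooth Riemannian manifold
X = M.chart(r'th:(0,pi):\theta ph:(0,2*pi):\phi')   # spherical coordinate chart
th, ph = X[:]                                       # the coordinate symbols
g = M.metric()                                      # the metric tensor field g
g[0, 0] = 1                                         # g_{theta theta}
g[1, 1] = sin(th)**2                                # g_{phi phi}
Riem = g.riemann()                  # Riemann curvature tensor (Levi-Civita)
Ric = g.ricci()                     # Ricci tensor
print(g.ricci_scalar().display())   # scalar curvature: 2 for the unit sphere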
Functionalities
More documentation is on doc.sagemath.org/html/en/reference/manifolds/.
Free & Open Software
Like SageMath itself, SageManifolds is free and open-source software based on the Python programming language. It is released under the GNU General Public License, more specifically GPL v2+ (meaning that a user may elect to use a version of the GPL later than version 2).
Development
Much of the source is on tickets at trac.sagemath.org.
There are GitHub repositories at github.com/sagemanifolds/SageManifolds.
Other links are provided at sagemanifolds.obspm.fr/contact.html. |
https://en.wikipedia.org/wiki/Ternary%20search | A ternary search algorithm is a technique in computer science for finding the minimum or maximum of a unimodal function.
The function
Assume we are looking for a maximum of $f(x)$ and that we know the maximum lies somewhere between $l$ and $r$. For the algorithm to be applicable, there must be some value $x^*$ such that
for all $a, b$ with $l \le a < b \le x^*$, we have $f(a) < f(b)$, and
for all $a, b$ with $x^* \le a < b \le r$, we have $f(a) > f(b)$.
Algorithm
Let $f(x)$ be a unimodal function on some interval $[l, r]$. Take any two points $m_1$ and $m_2$ in this segment: $l < m_1 < m_2 < r$. Then there are three possibilities:
if $f(m_1) < f(m_2)$, then the required maximum cannot be located on the left side, in $[l, m_1]$, so it makes sense to continue the search only in the interval $[m_1, r]$;
if $f(m_1) > f(m_2)$, the situation is symmetric to the previous one: the required maximum cannot be on the right side, in $[m_2, r]$, so go to the segment $[l, m_2]$;
if $f(m_1) = f(m_2)$, then the search should be conducted in $[m_1, m_2]$, but this case can be attributed to either of the previous two (in order to simplify the code). Sooner or later the length of the segment will become less than a predetermined constant, and the process can be stopped.
A common choice of the points $m_1$ and $m_2$ divides the interval into thirds:
$$m_1 = l + \frac{r - l}{3}, \qquad m_2 = r - \frac{r - l}{3}.$$
Run time order
Each step discards one third of the search interval, so after $k$ steps the interval has length $(2/3)^k (r - l)$; reaching a fixed absolute precision $\varepsilon$ therefore takes $\Theta(\log((r - l)/\varepsilon))$ iterations.
Recursive algorithm
def ternary_search(f, left, right, absolute_precision) -> float:
    """Left and right are the current bounds;
    the maximum is between them.
    """
    if abs(right - left) < absolute_precision:
        return (left + right) / 2
    left_third = (2*left + right) / 3
    right_third = (left + 2*right) / 3
    if f(left_third) < f(right_third):
        return ternary_search(f, left_third, right, absolute_precision)
    else:
        return ternary_search(f, left, right_third, absolute_precision)
Iterative algorithm
def ternary_search(f, left, right, absolute_precision) -> float:
    """Find maximum of unimodal function f() within [left, right].
    To find the minimum, reverse the if/else statement or reverse the comparison.
    """
    while abs(right - left) >= absolute_precision:
        left_third = left + (right - left) / 3
        right_third = right - (right - left) / 3
        if f(left_third) < f(right_third):
            left = left_third    # the maximum cannot lie left of left_third
        else:
            right = right_third  # the maximum cannot lie right of right_third
    # left and right are the current bounds; the maximum is between them
    return (left + right) / 2 |
https://en.wikipedia.org/wiki/Rouch%C3%A9%E2%80%93Capelli%20theorem | In linear algebra, the Rouché–Capelli theorem determines the number of solutions for a system of linear equations, given the rank of its augmented matrix and coefficient matrix. The theorem is variously known as the:
Rouché–Capelli theorem in English speaking countries, Italy and Brazil;
Kronecker–Capelli theorem in Austria, Poland, Croatia, Romania, Serbia and Russia;
Rouché–Fontené theorem in France;
Rouché–Frobenius theorem in Spain and many countries in Latin America;
Frobenius theorem in the Czech Republic and in Slovakia.
Formal statement
A system of linear equations with n variables has a solution if and only if the rank of its coefficient matrix A is equal to the rank of its augmented matrix [A|b]. If there are solutions, they form an affine subspace of dimension n − rank(A). In particular:
if n = rank(A), the solution is unique,
otherwise there are infinitely many solutions.
Example
Consider the system of equations
x + y + 2z = 3,
x + y + z = 1,
2x + 2y + 2z = 2.
The coefficient matrix is
$$A = \begin{bmatrix} 1 & 1 & 2 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix},$$
and the augmented matrix is
$$[A|b] = \begin{bmatrix} 1 & 1 & 2 & 3 \\ 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 \end{bmatrix}.$$
Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are infinitely many solutions.
In contrast, consider the system
x + y + 2z = 3,
x + y + z = 1,
2x + 2y + 2z = 5.
The coefficient matrix is
$$A = \begin{bmatrix} 1 & 1 & 2 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix},$$
and the augmented matrix is
$$[A|b] = \begin{bmatrix} 1 & 1 & 2 & 3 \\ 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 5 \end{bmatrix}.$$
In this example the coefficient matrix has rank 2, while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent columns has made the system of equations inconsistent.
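The theorem translates directly into a rank test; a small Python sketch with NumPy (function name ours) classifies the two systems above:

import numpy as np

def classify_system(A, b):
    """Apply the Rouché–Capelli theorem: compare rank(A) with rank([A|b])."""
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    rA, rAug = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)
    n = A.shape[1]                       # number of unknowns
    if rA < rAug:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A = [[1, 1, 2], [1, 1, 1], [2, 2, 2]]
print(classify_system(A, [3, 1, 2]))   # infinitely many solutions
print(classify_system(A, [3, 1, 5]))   # no solution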
See also
Cramer's rule
Gaussian elimination |
https://en.wikipedia.org/wiki/Clinical%20death | Clinical death is the medical term for cessation of blood circulation and breathing, the two criteria necessary to sustain the lives of human beings and of many other organisms. It occurs when the heart stops beating in a regular rhythm, a condition called cardiac arrest. The term is also sometimes used in resuscitation research.
Stopped blood circulation has historically proven irreversible in most cases. Prior to the invention of cardiopulmonary resuscitation (CPR), defibrillation, epinephrine injection, and other treatments in the 20th century, the absence of blood circulation (and vital functions related to blood circulation) was historically considered the official definition of death. With the advent of these strategies, cardiac arrest came to be called clinical death rather than simply death, to reflect the possibility of post-arrest resuscitation.
At the onset of clinical death, consciousness is lost within several seconds, and in dogs, measurable brain activity stops within 20 to 40 seconds. Irregular gasping may occur during this early time period, and is sometimes mistaken by rescuers as a sign that CPR is not necessary. During clinical death, all tissues and organs in the body steadily accumulate a type of injury called ischemic injury.
Limits of reversal
Most tissues and organs of the body can survive clinical death for considerable periods. Blood circulation can be stopped in the entire body below the heart for at least 30 minutes, with injury to the spinal cord being a limiting factor. Detached limbs may be successfully reattached after 6 hours of no blood circulation at warm temperatures. Bone, tendon, and skin can survive as long as 8 to 12 hours.
The brain, however, appears to accumulate ischemic injury faster than any other organ. Without special treatment after circulation is restarted, full recovery of the brain after more than 3 minutes of clinical death at normal body temperature is rare. Usually brain damage or lat |
https://en.wikipedia.org/wiki/Phylogenesis | Phylogenesis (from Greek φῦλον phylon "tribe" + γένεσις genesis "origin") is the biological process by which a taxon (of any rank) appears. The science that studies these processes is called phylogenetics.
This term may be confused with phylogenetics, the application of molecular analytical methods (i.e. molecular biology and genomics) in the explanation of phylogeny and its research.
Phylogenetic relationships are discovered through phylogenetic inference methods that evaluate observed heritable traits, such as DNA sequences or overall morpho-anatomical, ethological, and other characteristics.
Phylogeny
The result of these analyses is a phylogeny (also known as a phylogenetic tree) – a diagrammatic hypothesis about the history of the evolutionary relationships of a group of organisms. Phylogenetic analyses have become central to understanding biodiversity, evolution, ecological genetics and genomes.
Cladistics
Cladistics (Greek , klados, i.e. "branch") is an approach to biological classification in which organisms are categorized based on shared, derived characteristics that can be traced to a group's most recent common ancestor and are not present in more distant ancestors. Therefore, members of a group are assumed to share a common history and are considered to be closely related.
The cladistic method interprets each character state transformation implied by the distribution of shared character states among taxa (or other terminals) as a potential piece of evidence for grouping. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships.
Although traditionally such cladograms were generated largely on the basis of morphological characteristics calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sop |
https://en.wikipedia.org/wiki/Cyclopropyl%20cyanide | Cyclopropyl cyanide is an organic compound consisting of a nitrile group as a substituent on a cyclopropane ring. It is the smallest cyclic compound containing a nitrile.
Structure
The structure of cyclopropyl cyanide has been determined by a variety of experiments, including microwave spectroscopy, rotational spectroscopy and photodissociation. Cyclopropyl cyanide was first studied for its rotational spectra in 1958, by Friend and Dailey. An additional experiment involving cyclopropyl cyanide was the determination of the molecular dipole moment through spectroscopy experiments, by Carvalho in 1967.
Production
Cyclopropyl cyanide is prepared by the reaction of 4-chlorobutyronitrile with a strong base, such as sodium amide in liquid ammonia.
Reactions
Cyclopropyl cyanide, when heated to 660–760 K under pressures of 2–89 torr, isomerizes to cis- and trans-crotonitrile and allyl cyanide, with some methacrylonitrile also formed. This isomerization reaction is homogeneous and first order. It proceeds by a biradical mechanism, which involves the formation of carbon radicals as the three-carbon ring opens up; the radicals then react to yield carbon–carbon double bonds. |
https://en.wikipedia.org/wiki/Lee%20Stiff | Lee Vernon Stiff (February 4, 1949 – March 19, 2021) was an American mathematics education researcher; a professor in the Department of Science, Technology, Engineering, and Mathematics Education and the Associate Dean for Faculty and Academic Affairs in the College of Education at North Carolina State University (NCSU); and the author of several mathematics textbooks.
Stiff's father was "a factory worker with only a third-grade education". Stiff studied mathematics at the University of North Carolina at Chapel Hill, graduating in 1971, and went on to earn a master's degree from Duke University in 1974 and a doctorate in mathematics education from North Carolina State University in 1978. After teaching mathematics at the middle school and high school levels, and then holding a faculty position at the University of North Carolina at Charlotte beginning in 1978, he returned to NCSU in 1983.
From 2000 to 2002 Stiff was president of the National Council of Teachers of Mathematics (NCTM). Under his leadership, the NCTM pushed for a greater emphasis on basic computational skills in elementary and secondary school mathematics education, and for an appropriate emphasis on conceptual understanding. Stiff rejected simple solutions to complex issues, saying that "Back to basics is moving backward. Number-crunching alone is no longer enough." Instead, Stiff has recommended better training and incentives for mathematics teachers, a teaching style that incorporates a variety of ways of looking at the same material, and an attitude that all students can learn mathematics regardless of their background.
In 1995 he was a Fulbright scholar in Ghana. In 2010 the NC State College of Education gave him their Distinguished Alumni Award. In 2015 he received the Benjamin Banneker Lifetime Achievement Award, in 2017 he was given the TODOS Iris M. Carl Leadership and Equity Award, and in 2019 he was honored with the NCTM Lifetime Achievement |
https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel%20set%20theory | In set theory, Zermelo–Fraenkel set theory, named after mathematicians Ernst Zermelo and Abraham Fraenkel, is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox. Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC, where C stands for "choice", and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded.
Informally, Zermelo–Fraenkel set theory is intended to formalize a single primitive notion, that of a hereditary well-founded set, so that all entities in the universe of discourse are such sets. Thus the axioms of Zermelo–Fraenkel set theory refer only to pure sets and prevent its models from containing urelements (elements of sets that are not themselves sets). Furthermore, proper classes (collections of mathematical objects defined by a property shared by their members where the collections are too big to be sets) can only be treated indirectly. Specifically, Zermelo–Fraenkel set theory does not allow for the existence of a universal set (a set containing all sets) nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory (NBG) is a commonly used conservative extension of Zermelo–Fraenkel set theory that does allow explicit treatment of proper classes.
There are many equivalent formulations of the axioms of Zermelo–Fraenkel set theory. Most of the axioms state the existence of particular sets defined from other sets. For example, the axiom of pairing says that given any two sets and there is a new set containing exactly and . Other axioms describe properties of set membership. A goal of the axioms is that each axiom should be true if interprete |
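For instance, the axiom of pairing mentioned above is often rendered in the first-order language of set theory as follows (one common formulation, given here for illustration):

\forall a \, \forall b \, \exists c \, \forall x \, \bigl( x \in c \iff ( x = a \lor x = b ) \bigr)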
https://en.wikipedia.org/wiki/Binary%20expression%20tree | A binary expression tree is a specific kind of a binary tree used to represent expressions. Two common types of expressions that a binary expression tree can represent are algebraic and boolean. These trees can represent expressions that contain both unary and binary operators.
Like any binary tree, each node of a binary expression tree has zero, one, or two children. This restricted structure simplifies the processing of expression trees.
Construction of an expression tree
Example
The input in postfix notation is: a b + c d e + * *
Since the first two symbols are operands, one-node trees are created and pointers to them are pushed onto a stack. For convenience the stack will grow from left to right.
The next symbol is a '+'. It pops the two pointers to the trees, a new tree is formed, and a pointer to it is pushed onto the stack.
Next, c, d, and e are read. A one-node tree is created for each and a pointer to the corresponding tree is pushed onto the stack.
Continuing, a '+' is read, and it merges the last two trees.
Now, a '*' is read. The last two tree pointers are popped and a new tree is formed with a '*' as the root.
Finally, the last symbol is read. The two trees are merged and a pointer to the final tree remains on the stack.
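The stack-based construction just described is straightforward to implement; the following Python sketch (class and function names are ours) builds the tree for the article's example and prints it back in fully parenthesized infix form:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def build_expression_tree(postfix: str) -> Node:
    """Build a binary expression tree from postfix input, e.g. 'a b + c d e + * *'."""
    stack = []
    for symbol in postfix.replace(" ", ""):
        if symbol.isalnum():          # operand: push a one-node tree
            stack.append(Node(symbol))
        else:                         # binary operator: pop two subtrees
            right = stack.pop()       # first pop is the right child
            left = stack.pop()
            stack.append(Node(symbol, left, right))
    return stack.pop()                # pointer to the final tree

def infix(node: Node) -> str:
    """In-order traversal, fully parenthesized."""
    if node.left is None and node.right is None:
        return node.value
    return f"({infix(node.left)} {node.value} {infix(node.right)})"

print(infix(build_expression_tree("a b + c d e + * *")))  # ((a + b) * (c * (d + e)))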
Algebraic expressions
Algebraic expression trees represent expressions that contain numbers, variables, and unary and binary operators. Some of the common operators are × (multiplication), ÷ (division), + (addition), − (subtraction), ^ (exponentiation), and - (negation). The operators are contained in the internal nodes of the tree, with the numbers and variables in the leaf nodes. The nodes of binary operators have two child nodes, and the unary operators have one child node.
Boolean expressions
Boolean expressions are represented very similarly to algebraic expressions, the only difference being the specific values and operators used. Boolean expressions use true and false as constant values, and the operators inclu |
https://en.wikipedia.org/wiki/Tunebot | Tunebot is a music search engine developed by the Interactive Audio Lab at Northwestern University. Users can search the database by humming or singing a melody into a microphone, playing the melody on a virtual keyboard, or by typing some of the lyrics. This allows users to identify a song they can only partially remember, such as a melody stuck in their head.
Searching techniques
Tunebot is a query by humming system. It compares a sung query to a database of musical themes by using the intervals between each note. This allows a user to sing in a different key than the target recording and still produce a match. The intervals are also left unquantized to allow for tunings other than the standard A = 440 Hz, since few people have perfect pitch.
In addition to note intervals, Tunebot compares a query with potential targets by using rhythmic ratios between notes. Since ratios between note lengths are used, the tempo of the performance does not affect the rhythmic similarity measure.
Queries and targets are then matched by a weighted string alignment algorithm between the note intervals and rhythmic ratios.
Database
The database consists of unaccompanied melodies sung by contributors (a cappella). Contributors log into the website and sing their examples to the system. Each of these recordings is associated with a corresponding song on Amazon. A sung query is compared to these examples. A cappella sung examples are used as search keys because it is much easier to compare one unaccompanied vocal (the sung query) to another (an example search key) than it is to compare an unaccompanied vocal to a full band recording, which may contain guitar, drums, other singers, sound effects, etc.
Distinguishing features
Tunebot learns from user input, and it improves its results as each user submits more queries. Since no human can sing perfectly in tune every time they sing, the search engine must take that into account. By choosing a song from a list of ranked results, users tell Tunebot whic |
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity | In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem:
Let $a$ and $b$ be integers with greatest common divisor $d$. Then there exist integers $x$ and $y$ such that $ax + by = d$. Moreover, the integers of the form $az + bt$ are exactly the multiples of $d$.
Here the greatest common divisor of $0$ and $0$ is taken to be $0$. The integers $x$ and $y$ are called Bézout coefficients for $(a, b)$; they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers, one of the two pairs such that $|x| \le |b/d|$ and $|y| \le |a/d|$; equality occurs only if one of $a$ and $b$ is a multiple of the other.
As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as $3 = 15 \times (-9) + 69 \times 2$, with Bézout coefficients −9 and 2.
Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity.
A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains.
Structure of solutions
If $a$ and $b$ are not both zero and one pair of Bézout coefficients $(x, y)$ has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form
$$\left( x - k \frac{b}{d},\; y + k \frac{a}{d} \right),$$
where $k$ is an arbitrary integer, $d$ is the greatest common divisor of $a$ and $b$, and the fractions simplify to integers.
If $a$ and $b$ are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy
$$|x| \le \left| \frac{b}{d} \right| \quad \text{and} \quad |y| \le \left| \frac{a}{d} \right|,$$
and equality may occur only if one of $a$ and $b$ divides the other.
This relies on a property of Euclidean division: given two non-zero integers $c$ and $d$, if $d$ does not divide $c$, there is exactly one pair $(q, r)$ such that $c = dq + r$ and $0 < r < |d|$, and another one such that $c = dq + r$ and $-|d| < r < 0$.
The two pairs of small Bézout's coefficients are obtained from the given one $(x, y)$ by choosing for $k$ in the above formula either of the two integers next to $\frac{x}{b/d}$.
The extended Euclidean algorithm always produces one of these two minimal pairs.
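A minimal iterative implementation of the extended Euclidean algorithm in Python (ours, for illustration) returns one pair of Bézout coefficients along with the gcd:

def extended_gcd(a: int, b: int):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r   # remainder sequence
        old_x, x = x, old_x - q * x   # coefficient of a
        old_y, y = y, old_y - q * y   # coefficient of b
    return old_r, old_x, old_y

g, x, y = extended_gcd(15, 69)
print(g, x, y)   # 3 -9 2, i.e. 15*(-9) + 69*2 == 3, as in the example above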
Example
Let $a = 12$ and $b = 42$; then $\gcd(12, 42) = 6$. Then the following Bézout's identities a |
https://en.wikipedia.org/wiki/A%20House%20on%20Water | A House on Water is a book that explores the social and psychological impacts of temporary marriage and religious concubinage in Iran, researched and coordinated by Kameel Ahmady, a British-Iranian anthropologist and social researcher. The book is based on a research project that Ahmady and his team conducted between 2017 and 2018 in three major cities of Iran: Tehran, Isfahan, and Mashhad. The book aims to provide a historical overview of temporary marriage in Iran and the world and to examine its prevalence among different social groups and its consequences for those who choose this type of marriage.
Background of the book
While Ahmady was researching female genital mutilation (FGM) and child marriage in 2016, he noticed a connection between the age of people and their lived experiences. He believes that when he was studying child marriage and FGM, he realized that most of those who married in childhood were likely to have temporary or religious concubinage marriages and that there was a link between the two phenomena. Therefore, Ahmady and his colleagues decided to investigate the hows and whys of temporary marriage in Iran after completing their research on child marriage.
The Content and Results of the Book
A House on Water is a book that presents the findings of a research project that Ahmady conducted in 2016 on temporary marriage and religious concubinage in Iran. He and his colleagues used different methods to collect data from people in Tehran, Mashhad, and Isfahan. They discovered that this phenomenon was driven by the desire for pleasure and the ease of child marriage, which had harmful consequences for women's reputations and men's view of permanent marriage. Ahmady criticizes Iran's laws on religious concubinage, which allow early marriage and cause problems for young girls and boys, but have been overlooked and have contributed to child marriage.
Ahmady and his team propose some ways to make temporary marriage involve less social and personal harm |
https://en.wikipedia.org/wiki/Eleusis%20%28card%20game%29 | Eleusis is a shedding-type card game where one player chooses a secret rule to determine which cards can be played on top of others, and the other players attempt to determine the rule using inductive logic.
The game was invented by Robert Abbott in 1956, and was first published by Martin Gardner in his Mathematical Games column in Scientific American magazine in June 1959. A revised version appeared in Gardner's July 1977 column.
Eleusis is sometimes considered an analogy to the problems of scientific method. It can be compared with the card game Mao, which also has secret rules that can be learned inductively. The games of Penultima and commercially produced Zendo also feature players attempting to discover inductively a secret rule or rules thought of by a "Master" or "spectators" who declare plays legal or illegal on the basis of the rules.
Rules
The game is played by creating a row of cards in sequence. At the start of the game the dealer (known as "God") invents a secret constraint for how these cards must progress: for example, "each card played must be higher than the last, unless the last card was a face card, in which case any numeral card may be played".
Two decks of cards are shuffled and 14 cards dealt to each player except the dealer. One card is dealt face-up to start the row and a random player chosen to start.
On a player's turn they must add one or more cards from their hand to the row, in sequence. The dealer judges this play: if the entire play fits the dealer's rule, the cards are left in place as part of the row. Otherwise, they are removed from the row and "sidelined", placed below the card that they attempted to follow, and the player is dealt a number of penalty cards equal to twice the number of cards they attempted to play that turn. If the play had multiple cards and only some were incorrect, the entire play is declared invalid, without the dealer specifying the invalid cards.
One player may elect to be a "prophet". A player |
https://en.wikipedia.org/wiki/Contraharmonic%20mean | In mathematics, a contraharmonic mean is a function complementary to the harmonic mean. The contraharmonic mean is a special case of the Lehmer mean, $L_p$, where p = 2.
Definition
The contraharmonic mean of a set of positive numbers is defined as the arithmetic mean of the squares of the numbers divided by the arithmetic mean of the numbers:
$$C(x_1, \ldots, x_n) = \frac{\tfrac{1}{n}\left(x_1^2 + \cdots + x_n^2\right)}{\tfrac{1}{n}\left(x_1 + \cdots + x_n\right)} = \frac{x_1^2 + \cdots + x_n^2}{x_1 + \cdots + x_n}.$$
Properties
It is easy to show that this satisfies the characteristic properties of a mean of some list of values $x$:
$$\min(x_1, \ldots, x_n) \le C(x_1, \ldots, x_n) \le \max(x_1, \ldots, x_n),$$
$$C(t x_1, \ldots, t x_n) = t \cdot C(x_1, \ldots, x_n) \quad \text{for } t > 0.$$
The first property implies the fixed point property, that for all $k > 0$, $C(k, k, \ldots, k) = k$.
The contraharmonic mean is higher in value than the arithmetic mean and also higher than the root mean square:
$$\min(x) \le H(x) \le G(x) \le L(x) \le A(x) \le R(x) \le C(x) \le \max(x),$$
where $x$ is a list of values, $H$ is the harmonic mean, $G$ is geometric mean, $L$ is the logarithmic mean, $A$ is the arithmetic mean, $R$ is the root mean square and $C$ is the contraharmonic mean. Unless all values of $x$ are the same, the ≤ signs above can be replaced by <.
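The chain of inequalities is easy to check numerically; a small Python sketch (using the standard-library statistics module) for the list (1, 2, 6):

import statistics

def contraharmonic(xs):
    # arithmetic mean of the squares divided by the arithmetic mean
    return sum(x * x for x in xs) / sum(xs)

xs = [1.0, 2.0, 6.0]
H = statistics.harmonic_mean(xs)               # 1.8
G = statistics.geometric_mean(xs)              # ~2.289
A = statistics.fmean(xs)                       # 3.0
R = (sum(x * x for x in xs) / len(xs)) ** 0.5  # root mean square, ~3.697
C = contraharmonic(xs)                         # 41/9 ~ 4.556
print(H <= G <= A <= R <= C)                   # True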
The name contraharmonic may be due to the fact that when taking the mean of only two variables, the contraharmonic mean is as high above the arithmetic mean as the arithmetic mean is above the harmonic mean (i.e., the arithmetic mean of the two variables is equal to the arithmetic mean of their harmonic and contraharmonic means).
Two-variable formulae
From the formulas for the arithmetic mean and harmonic mean of two variables we have:
$$A(a, b) = \frac{a + b}{2}, \qquad H(a, b) = \frac{2ab}{a + b}, \qquad C(a, b) = \frac{a^2 + b^2}{a + b} = 2 A(a, b) - H(a, b).$$
Notice that for two variables the average of the harmonic and contraharmonic means is exactly equal to the arithmetic mean:
$$\frac{H(a, b) + C(a, b)}{2} = A(a, b).$$
As a gets closer to 0 then H(a, b) also gets closer to 0. The harmonic mean is very sensitive to low values. On the other hand, the contraharmonic mean is sensitive to larger values, so as a approaches 0 then C(a, b) approaches b (so their average remains A(a, b)).
There are two other notable relationships between 2-variable means. First, the geometric mean of the arithmetic and harmonic means is equal to the geometric mean of the two values:
$$G(A(a, b), H(a, b)) = \sqrt{\frac{a + b}{2} \cdot \frac{2ab}{a + b}} = \sqrt{ab} = G(a, b).$$
The second relationship is that the geometric mean of the ar |
https://en.wikipedia.org/wiki/Milivoje%20Kostic | Milivoje Kostic (also, Milivoje M. Kostic; in Serbian Cyrillic: Миливоје Костић; born 20 March 1952 in Bioska, Užice municipality, Yugoslavia), is a Serbian-American thermodynamicist and professor emeritus of mechanical engineering at Northern Illinois University, Licensed Professional Engineer (PE) in Illinois, and Editor-in-Chief of the Thermodynamics section of the journal Entropy. He is an expert in energy fundamentals and applications, including nanotechnology, with emphasis on efficiency, efficient energy use and energy conservation, and environment and sustainability.
Biography
Milivoje Kostic was born and raised in Serbia (Yugoslavia at the time). He completed his "Dipl-Ing" (Diploma Engineer) degree in Mechanical Engineering at the University of Belgrade in 1975, with the distinction of having the highest GPA in the mechanical engineering program history at the time. Then he worked as a researcher in thermal engineering and combustion at Vinca Institute for Nuclear Sciences, which then hosted the headquarters of the International Center for Heat and Mass Transfer (ICHMT), and later taught at the University of Belgrade. In the meantime, he spent three summers as an exchange visitor in England, West Germany, and the former Soviet Union. Kostic came to the University of Illinois at Chicago in 1981 as a Fulbright grantee, where he received his Ph.D. in mechanical engineering in 1984. He subsequently worked several years in industry before emigrating to the United States in 1986. After working for 26 years at Northern Illinois University, he retired in 2014 to focus on his fundamental research, and became Professor Emeritus in 2015.
Professional work
Since 2015, Kostic has been the Section Editor-in-Chief of the Thermodynamics Section of the journal Entropy, published by MDPI, having previously been a Guest Editor of two special issues on Entropy and the Second Law of Thermodynamics.
Kostic has also worked in industry and has authored a number of patents and pr |
https://en.wikipedia.org/wiki/Generic%20cell%20rate%20algorithm | The generic cell rate algorithm (GCRA) is a leaky bucket-type scheduling algorithm for the network scheduler that is used in Asynchronous Transfer Mode (ATM) networks. It is used to measure the timing of cells on virtual channels (VCs) and/or virtual paths (VPs) against bandwidth and jitter limits contained in a traffic contract for the VC or VP to which the cells belong. Cells that do not conform to the limits given by the traffic contract may then be re-timed (delayed) in traffic shaping, or may be dropped (discarded) or reduced in priority (demoted) in traffic policing. Nonconforming cells that are reduced in priority may then be dropped, in preference to higher priority cells, by downstream components in the network that are experiencing congestion. Alternatively they may reach their destination (VC or VP termination) if there is enough capacity for them, despite them being excess cells as far as the contract is concerned: see priority control.
The GCRA is given as the reference for checking the traffic on connections in the network, i.e. usage/network parameter control (UPC/NPC) at user–network interfaces (UNI) or inter-network interfaces or network-network interfaces (INI/NNI). It is also given as the reference for the timing of cells transmitted (ATM PDU Data_Requests) onto an ATM network by a network interface card (NIC) in a host, i.e. on the user side of the UNI. This ensures that cells are not then discarded by UPC/NCP in the network, i.e. on the network side of the UNI. However, as the GCRA is only given as a reference, the network providers and users may use any other algorithm that gives the same result.
Description of the GCRA
The GCRA is described by the ATM Forum in its User-Network Interface (UNI) and by the ITU-T in recommendation I.371 Traffic control and congestion control in B-ISDN . Both sources describe the GCRA in two equivalent ways: as a virtual scheduling algorithm and as a continuous state leaky bucket algorithm (figure 1).
Lea |
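In the virtual-scheduling form, a cell arriving at time t_a conforms if it arrives no earlier than TAT − τ, where T is the emission interval and τ the limit; the theoretical arrival time TAT then advances by T, measured from max(t_a, TAT). The following Python sketch of that description is ours (class and parameter names are assumptions, not from the standards documents):

class VirtualScheduler:
    """GCRA in its virtual-scheduling form: T is the emission interval
    (1/rate) and tau the limit (jitter tolerance). A cell arriving at
    time ta conforms if ta >= TAT - tau; TAT then advances by T."""
    def __init__(self, T: float, tau: float):
        self.T, self.tau, self.tat = T, tau, None
    def arrival(self, ta: float) -> bool:
        if self.tat is None:            # first cell: initialize TAT
            self.tat = ta
        if ta < self.tat - self.tau:    # too early: non-conforming
            return False                # (TAT is left unchanged)
        self.tat = max(ta, self.tat) + self.T
        return True

g = VirtualScheduler(T=10.0, tau=2.0)
print([g.arrival(t) for t in (0, 10, 18, 19, 30)])  # [True, True, True, False, True]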
https://en.wikipedia.org/wiki/AEgIS%20experiment | AEgIS (Antimatter Experiment: gravity, Interferometry, Spectroscopy), AD-6, is an experiment at the Antiproton Decelerator facility at CERN. Its primary goal is to measure directly the effect of Earth's gravitational field on antihydrogen atoms with significant precision. Indirect bounds that assume the validity of, for example, the universality of free fall, the Weak Equivalence Principle or CPT symmetry also in the case of antimatter constrain an anomalous gravitational behavior to a level where only precision measurements can provide answers. Vice versa, antimatter experiments with sufficient precision are essential to validate these fundamental assumptions. AEgIS was originally proposed in 2007. Construction of the main apparatus was completed in 2012. Since 2014, two laser systems with tunable wavelengths (few picometer precision) and synchronized to the nanosecond for specific atomic excitation have been successfully commissioned.
AEgIS experimental setup and physics
AEgIS will attempt to determine if gravity affects antimatter in the same way it affects normal matter by testing its effect on an antihydrogen beam. The proposed experimental setup uses the Moiré deflectometer to measure the vertical displacement of a beam of cold antihydrogen atoms traveling in Earth's gravitational field.
In the first phase of the experiment (running until 2018), antiprotons from the Antiproton Decelerator (AD) with a kinetic energy of 5.3 MeV had to pass through a series of aluminum foils which acted as so-called degraders, slowing down a fraction of the fast antiprotons to a few keV. The slow antiprotons were then further cooled by merging them with extra cold trapped electrons (electron cooling) and finally trapped inside a Malmberg–Penning trap. An intense radioactive β+ source (22Na) was used to produce positrons, which were accumulated in a Surko-type storage trap at low pressure (3×10−8 mbar). These positrons were implanted into a nano-structured porous silicon target in |
https://en.wikipedia.org/wiki/Diving%20physics | Diving physics, or the physics of underwater diving, comprises the basic aspects of physics which describe the effects of the underwater environment on the underwater diver and their equipment, and the effects of blending, compressing, and storing breathing gas mixtures, and supplying them for use at ambient pressure. These effects are mostly consequences of immersion in water, the hydrostatic pressure of depth and the effects of pressure and temperature on breathing gases. An understanding of the physics is useful when considering the physiological effects of diving, breathing gas planning and management, diver buoyancy control and trim, and the hazards and risks of diving.
Changes in density of breathing gas affect the ability of the diver to breathe effectively, and variations in partial pressure of breathing gas constituents have profound effects on the health and ability to function underwater of the diver.
Aspects of physics with particular relevance to diving
The main laws of physics that describe the influence of the underwater diving environment on the diver and diving equipment include:
Buoyancy
Archimedes' principle (Buoyancy) - Ignoring the minor effect of surface tension, an object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. Thus, when in water, the weight of the volume of water displaced as compared to the weight of the diver's body and the diver's equipment, determine whether the diver floats or sinks. Buoyancy control, and being able to maintain neutral buoyancy in particular, is an important safety skill. The diver needs to understand buoyancy to effectively and safely operate drysuits, buoyancy compensators, diving weighting systems and lifting bags.
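As a worked illustration of Archimedes' principle (the numbers and names below are ours, chosen for illustration only), the net force on an immersed diver is the weight of the displaced water minus the diver's weight:

RHO_SEAWATER = 1025.0   # kg/m^3, a typical value assumed for illustration
G = 9.81                # m/s^2

def net_force(total_mass_kg, displaced_volume_m3, rho_water=RHO_SEAWATER):
    """Archimedes: buoyant force = weight of the displaced fluid.
    Positive result -> the diver floats; negative -> the diver sinks."""
    buoyancy = rho_water * displaced_volume_m3 * G
    weight = total_mass_kg * G
    return buoyancy - weight

# A 95 kg diver plus kit displacing 90 litres is negatively buoyant:
print(net_force(95.0, 0.090))   # ~ -27 N, so the diver sinks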
Pressure
The concept of pressure as force distributed over area, and the variation of pressure with immersed depth are central to the understanding of the physiology of diving, particularly the physiology of decompression an |
https://en.wikipedia.org/wiki/Maxwell%20bridge | A Maxwell bridge is a modification to a Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and inductance or resistance and capacitance. When the calibrated components are a parallel resistor and capacitor, the bridge is known as a Maxwell bridge. It is named for James C. Maxwell, who first described it in 1873.
It uses the principle that the positive phase angle of an inductive impedance can be compensated by the negative phase angle of a capacitive impedance placed in the opposite arm when the circuit is at resonance; i.e., there is no potential difference across the detector (an AC voltmeter or ammeter) and hence no current flows through it. The unknown inductance then becomes known in terms of this capacitance.
With reference to the picture, in a typical application R1 and R4 are known fixed entities, and R2 and C2 are known variable entities. R2 and C2 are adjusted until the bridge is balanced.
R3 and L3 can then be calculated based on the values of the other components:

    R3 = R1 · R4 / R2
    L3 = R1 · R4 · C2
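For illustration, a minimal Python evaluation of these balance equations for assumed component values:

    import math

    R1, R4 = 1000.0, 1000.0   # ohms, fixed
    R2, C2 = 470.0, 100e-9    # ohms and farads, as adjusted at balance (assumed)

    R3 = R1 * R4 / R2         # series resistance of the unknown inductor
    L3 = R1 * R4 * C2         # unknown inductance, henries

    f = 1000.0                # Hz, measurement frequency (assumed)
    Q = 2 * math.pi * f * L3 / R3
    print(f"R3 = {R3:.0f} ohm, L3 = {L3*1e3:.0f} mH, Q at {f:.0f} Hz = {Q:.2f}")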
To avoid the difficulties associated with determining the precise value of a variable capacitance, a fixed-value capacitor is sometimes installed and more than one resistor is made variable. The bridge cannot be used for the measurement of high Q values, and it is also unsuited to coils with Q values below one because of a balance convergence problem. Its use is therefore limited to the measurement of low Q values, from 1 to 10.
The frequency of the AC current used to assess the unknown inductor should match the frequency of the circuit the inductor will be used in, because the impedance, and therefore the assigned inductance of the component, varies with frequency. For ideal inductors this relationship is linear, so that the inductance value at an arbitrary frequency can be calculated from the inductance value measured at some reference frequency. Unfortunately, for real components, this relationship is not linear, and using a derived or calculated v |
https://en.wikipedia.org/wiki/Grand%20Kingdom | Grand Kingdom is a tactical role-playing video game developed by Monochrome Corporation for the PlayStation 4 and PlayStation Vita. It was published by Spike Chunsoft in Japan in 2015, and by NIS America in the West in 2016. Following a mercenary group in the employ of different nations formed in the wake of a collapsed empire, the player engages in turn-based combat while navigating paths on maps similar to a board game. The game originally featured competitive asynchronous online multiplayer, in which chosen teams of characters fought for a chosen nation, but this ended when the servers were shut down, by 2019 in the West and 2022 in Japan.
Grand Kingdom began development in 2011 under director Tomohiko Deguchi, a former Vanillaware staff member, using design and aesthetic concepts similar to those of Grand Knights History (2011). The asynchronous multiplayer was developed to prove its viability in Japan, allowing a casual time investment from players. Manga artist Chizu Hashii, who was asked to avoid moe design traits, designed the characters. The music was composed by a team from Basiscape, including Mitsuhiro Kaneda and Masaharu Iwata; it was one of Iwata's last original projects for Basiscape before he left in 2017.
The game was announced in June 2015, when development was around 65% complete. Single-player downloadable content was released in Japan featuring scenarios around the nations and new player units between 2015 and 2016, all of which were bundled into the Western release. Reception was generally positive, with praise going to its gameplay and art design, though its audio saw some mixed response, and critics were generally indifferent to its narrative.
Gameplay
Grand Kingdom is a tactical role-playing video game (RPG) in which players take on the role of a mercenary commander, who forms squads of fighters to complete missions for the four warring nations of Resonail. The game opens with the player naming the player character and mercenary group. The game is divided between sing |
https://en.wikipedia.org/wiki/Hole%20drilling%20method | The hole drilling method is a method for measuring residual stresses in a material. Residual stress occurs in a material in the absence of external loads, and it interacts with the applied loading to affect the overall strength, fatigue, and corrosion performance of the material. Residual stresses are measured through experiments; the hole drilling method is one of the most widely used methods for residual stress measurement.
The hole drilling method can measure macroscopic residual stresses near the material surface. The principle is based on drilling a small hole into the material. When material containing residual stress is removed, the remaining material reaches a new equilibrium state, with associated deformations around the drilled hole. These deformations are related to the residual stress in the volume of material that was removed through drilling. The deformations around the hole are measured during the experiment using strain gauges or optical methods, and the original residual stress in the material is calculated from them. The hole drilling method is popular for its simplicity and is suitable for a wide range of applications and materials.
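For illustration, a minimal Python sketch of the evaluation step for a three-gauge rosette (gauges at 0°, 45° and 90°), using the classic uniform-stress formulas; the combined calibration constants A and B are placeholders, which in practice come from standardized tables or finite-element calibration, and sign conventions vary between standards:

    import math

    def principal_stresses(e1, e2, e3, A, B):
        """Relieved strains e1..e3 -> (sigma_max, sigma_min, beta)."""
        p = (e1 + e3) / (4.0 * A)     # isotropic part
        q = math.sqrt((e3 - e1)**2 + (e1 + e3 - 2.0*e2)**2) / (4.0 * B)
        beta = 0.5 * math.atan2(e1 + e3 - 2.0*e2, e3 - e1)  # direction of sigma_max
        return p + q, p - q, beta

    # Placeholder constants (strain per Pa) and strain readings, illustrative only:
    s_max, s_min, beta = principal_stresses(-80e-6, -60e-6, -20e-6,
                                            A=1.0e-12, B=2.0e-12)
    print(f"sigma_max = {s_max/1e6:.1f} MPa, sigma_min = {s_min/1e6:.1f} MPa")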
Key advantages of the hole drilling method include rapid preparation, versatility of the technique for different materials, and reliability. Conversely, the hole drilling method is limited in depth of analysis and specimen geometry, and is at least semi-destructive.
History and development
The idea of measuring residual stress by drilling a hole and registering the change of the hole diameter was first proposed by Mathar in 1934. In 1966 Rendler and Vigness introduced a systematic and repeatable hole drilling procedure for measuring residual stress. In the following period the method was further developed in terms of drilling techniques, measurement of the relieved deformations, and the residual stress evaluation itself. A very important milesto |
https://en.wikipedia.org/wiki/Enhanced%20biological%20phosphorus%20removal | Enhanced biological phosphorus removal (EBPR) is a sewage treatment configuration applied to activated sludge systems for the removal of phosphate.
The common element in EBPR implementations is the presence of an anaerobic tank (nitrate and oxygen are absent) prior to the aeration tank. Under these conditions a group of heterotrophic bacteria, called polyphosphate-accumulating organisms (PAOs), is selectively enriched in the bacterial community within the activated sludge. In the subsequent aerobic phase, these bacteria can accumulate large quantities of polyphosphate within their cells, and the removal of phosphorus is said to be enhanced.
Generally speaking, all bacteria contain a fraction (1–2%) of phosphorus in their biomass due to its presence in cellular components, such as membrane phospholipids and DNA. Therefore, as bacteria in a wastewater treatment plant consume nutrients in the wastewater, they grow and phosphorus is incorporated into the bacterial biomass. When PAOs grow, they not only consume phosphorus for cellular components but also accumulate large quantities of polyphosphate within their cells. Thus, the phosphorus fraction of phosphorus-accumulating biomass is 5–7%. In mixed bacterial cultures the phosphorus content will be at most 3–4% of total organic mass. If additional chemical precipitation takes place, for example to reach discharge limits, the phosphorus content can be higher, but that is not affected by EBPR. This biomass is then separated from the treated (purified) water at the end of the process, and the phosphorus is thus removed. If PAOs are selectively enriched by the EBPR configuration, considerably more phosphorus is removed than in conventional activated sludge systems, where phosphorus removal is relatively poor.
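For illustration, a minimal Python sketch of the resulting mass balance; the sludge production figure and phosphorus fractions are round illustrative numbers, not design values:

    sludge_wasted_kg_per_day = 1000.0   # dry solids removed from the process (assumed)

    for label, p_fraction in [("conventional (~2% P)", 0.02),
                              ("EBPR-enriched (~6% P)", 0.06)]:
        removed = sludge_wasted_kg_per_day * p_fraction
        print(f"{label}: {removed:.0f} kg P removed per day")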
See also
List of waste-water treatment technologies |
https://en.wikipedia.org/wiki/Nigel%20Kalton | Nigel John Kalton (June 20, 1946 – August 31, 2010) was a British-American mathematician, known for his contributions to functional analysis.
Career
Kalton was born in Bromley and educated at Dulwich College, where he excelled at both mathematics and chess. After studying mathematics at Trinity College, Cambridge, he received his PhD from Cambridge University in 1970; his thesis was awarded the Rayleigh Prize for research excellence. He then held positions at Lehigh University in Pennsylvania, Warwick, Swansea, the University of Illinois, and Michigan State University, before becoming full professor at the University of Missouri, Columbia, in 1979.
He received the Stefan Banach Medal from the Polish Academy of Sciences in 2005. A conference in honour of his 60th birthday was held at Miami University in Ohio in 2006. He died in Columbia, Missouri, aged 64. |
https://en.wikipedia.org/wiki/Ion%20Hobana | Ion Hobana (25 January 1931, Sânnicolau Mare – 22 February 2011, Bucharest) was a Romanian science fiction writer, literary critic and ufologist. Ion Hobana is a literary pseudonym, the writer's real name being Aurelian Manta Roşie.
Bibliography
Science fiction
Ultimul val (novel; Editura Tineretului, 1957)
Caleidoscop (novel; Editura Tineretului, 1958)
Oameni şi stele (novel; Editura Tineretului, 1963)
Viitorul a început ieri - retrospectiva anticipaţiei franceze (Editura Tineretului, 1966)
Imaginile posibilului: filmul ştiinţifico-fantastic (Meridiane, 1968)
Sfârşitul vacanţei (novel; Editura Tineretului, 1969)
Vârsta de aur a anticipaţiei româneşti (1969) - Writers' Union Prize, 1972
Douăzeci de mii de pagini în căutarea lui Jules Verne (Univers, 1979) - Writers' Union Prize
Science fiction. Autori, cărţi, idei I (Editura Eminescu, 1983) - Writers' Union Prize
Literatura de anticipaţie. Autori, cărţi, idei II (1986)
Un fel de spaţiu (short stories; Albatros, 1988)
Călătorie întreruptă (novel; Cartea Românească, 1989)
Jules Verne în România? (Editura Fundaţiei Culturale Române, 1993)
Un englez neliniştit: H.G. Wells şi universul SF (Fahrenheit, 1996)
Ufology
OZN - o sfidare pentru raţiunea umană (Editura Enciclopedică Română, 1971), with Julien Weverbergh
Ufo's in Oost en West (Deventer, 1972, 2 volumes)
Triumful visătorilor (Nemira, 1991), with Julien Weverbergh
Enigme pe cerul istoriei (Abeona, 1993) |
https://en.wikipedia.org/wiki/Servage%20Hosting | Servage GmbH is a German web hosting provider headquartered in Flensburg, Germany. It does business as Servage Hosting and is a subsidiary of the Swedish company Servage AB (publ). The corporate name Servage is a blend of "serve" and "age"; as such, the name reflects the company's branding toward modern service.
Early years
The company was incorporated in 2004 by Steffan Sondermark Fallesen and was sold to the publicly listed Swedish telecommunication firm Tele5 Voice Services AB for $3.5 million in cash in May 2007; Fallesen left the company in October 2009. In 2008 Tele5 Voice Services AB changed its name to Servage AB, as chairman of the board Per Bergström felt that Servage was a better suited brand name for an international corporation.
During its first three years the company operated only an English version, but German, Polish, Danish and Swedish versions have since been offered. The company is now among the 20 largest German web hosts.
Criticism
In recent years, Servage has been criticized for overselling its services in order to keep up with competition. Large numbers of Servage customers have also reported that their sites were repeatedly hacked, resulting in malicious code being inserted into their web pages. Servage claims that the issue is resolved by its new server operating system, ServageOS.
Servage has also used the term "WebDrive", which is a registered trademark of South River Technologies.
Network
Servage is operating network AS 29671 which interconnects to Versatel, TeliaSonera, Tiscali and Cogent Communications.
In an attempt to become fully Open Source powered Servage changed its entire network infrastructure to the Open Source routing platform GNU Zebra in 2007. Prior to the change Servage had been using Cisco routers.
As of 28 April 2010, mass hacking of Servage-hosted sites was identified at http://blog.unmaskparasites.com/2010/04/28/hackers-abuse-servage-hosting-to-pois |
https://en.wikipedia.org/wiki/Indigen | In general usage the word indigen is treated as a variant of the word indigene, meaning a native.
Usage in botany
However, it was used in a strictly botanical sense for the first time in 1918 by Liberty Hyde Bailey (1858–1954), an American horticulturist, botanist and cofounder of the American Society for Horticultural Science, who described it as a plant "of known habitat". Later, in 1923, Bailey formally defined the indigen as:
Botanical definition
"... a species of which we know the nativity, - one that is somewhere recorded as indigenous." The term was coined to contrast with cultigen, which he defined in the 1923 paper as: "... the species, or its equivalent, that has appeared under domestication, – the plant is cultigenous."
See also
Cultigen
Alien (biology)
Native
Naturalization (biology) |
https://en.wikipedia.org/wiki/Diederich%20Hinrichsen | Diederich Hinrichsen (born 17 February 1939) is a German mathematician who, together with Hans W. Knobloch, established the field of dynamical systems theory and control theory in Germany.
Life and work
Diederich Hinrichsen was born in 1939, and studied mathematics, physics, literature, philosophy, and economics from 1958 to 1965 in Hamburg.
In 1966 he got his PhD at the University of Erlangen under the supervision of Heinz Bauer. His main research area at that time was abstract potential theory, with a special focus on extensions of the Cauchy-Weil theorem to the Choquet boundary. After research visits in Paris and Hamburg, he went to Havana where he helped to re-establish mathematics in Cuba. After an appointment to Bielefeld, he became professor of mathematics at the University of Bremen.
Hinrichsen was the founding director of the Research Center for Dynamical Systems, concentrating on finite- and infinite-dimensional linear systems, stochastic dynamical systems, nonlinear dynamics and stability analysis.
He focused on algebraic systems theory, parameterization problems in control and linear algebra, infinite-dimensional systems, and stability analysis, developing a comprehensive theory of linear systems. In a different direction, with Anthony J. Pritchard (University of Warwick), he worked on concepts of stability radii and spectral value sets, building up a robustness theory covering deterministic and stochastic aspects of dynamical systems.
After retiring in Germany, he became a professor at the Carlos III University of Madrid.
Selected publications
1982. Feedback Control of Linear and Nonlinear Systems, with Alberto Isidori. Heidelberg : Springer.
1990. Control of Uncertain Systems. Progress in Systems & Control Theory, with Bengt Martensson. Boston : Birkhäuser.
1999. Advances in Mathematical Systems Theory. In Honor of Diederich Hinrichsen, Boston : Birkhäuser
2005. Mathematical Systems Theory, with A. J. Pritchard. Heidelberg : Springer |
https://en.wikipedia.org/wiki/Blandford%E2%80%93Znajek%20process | The Blandford–Znajek process is a mechanism for the extraction of energy from a rotating black hole, introduced by Roger Blandford and Roman Znajek in 1977. This mechanism is the preferred description of how astrophysical jets are formed around spinning supermassive black holes. It is one of the mechanisms that power quasars, or rapidly accreting supermassive black holes. It was later demonstrated that the power output of the accretion disk is significantly larger than the power extracted directly from the hole through its ergosphere; hence the presence or absence of a poloidal magnetic field around the black hole is not decisive for its overall power output. It has also been suggested that the mechanism plays a crucial role as a central engine for gamma-ray bursts.
Physics of the mechanism
As in the Penrose process, the ergosphere plays an important role in the Blandford–Znajek process. In order to extract energy and angular momentum from the black hole, the electromagnetic field around the hole must be modified by magnetospheric currents. In order to drive such currents, the electric field needs to not be screened, and consequently the vacuum field created within the ergosphere by distant sources must have an unscreened component. The most favored way to provide this is an e± pair cascade in a strong electric and radiation field. As the ergosphere causes the magnetosphere inside it to rotate, the outgoing flux of angular momentum results in extraction of energy from the black hole.
The Blandford–Znajek process requires an accretion disc with a strong poloidal magnetic field around a spinning black hole. The magnetic field extracts spin energy, and the power can be estimated as the energy density at the speed of light cylinder times area:
P ≈ B² r_s⁴ ω² / (4c)
where B is the magnetic field strength at the horizon, r_s is the Schwarzschild radius, and ω is the angular velocity of the black hole.
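For a feel for the numbers, a rough Python evaluation of this estimate in Gaussian cgs units; the mass, field strength and near-maximal spin below are assumptions for illustration, not measurements:

    G = 6.674e-8      # cm^3 g^-1 s^-2
    c = 3.0e10        # cm/s
    M_sun = 1.989e33  # g

    M = 1e8 * M_sun   # assumed black-hole mass
    B = 1e4           # gauss, assumed field threading the horizon
    r_s = 2 * G * M / c**2
    omega = c / r_s   # angular velocity for near-maximal spin (order of magnitude)

    P = B**2 * r_s**4 * omega**2 / (4 * c)   # erg/s
    print(f"r_s = {r_s:.2e} cm, P ~ {P:.1e} erg/s")
    # -> ~6e44 erg/s, comparable to quasar luminosities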
See also
Penrose process, another mechanism to extract energy from a black hole
Hawking radia |
https://en.wikipedia.org/wiki/Probability%20Theory%20and%20Related%20Fields | Probability Theory and Related Fields is a peer-reviewed mathematics journal published by Springer.
Established in 1962, it was originally named Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, with the English title replacing the German one starting from volume 71 (1986). The journal publishes articles on probability.
The journal is indexed by Mathematical Reviews and Zentralblatt MATH.
Its 2019 MCQ was 2.29, and its 2019 impact factor was 2.125.
The current editors-in-chief are Fabio Toninelli (Technical University of Vienna) and Bálint Tóth (University of Bristol and Alfréd Rényi Institute of Mathematics).
The journal's CiteScore is 3.8 and its SCImago Journal Rank is 3.198, both from 2020. It is currently ranked 11th in the field of Probability & Statistics with Applications according to Google Scholar.
Past Editors-in-chief
1961–1971: Leopold Schmetterer (Vienna)
1971–1985: Klaus Krickeberg (Bielefeld)
1985–1991: Hermann Rost (Heidelberg)
1991–1994: Olav Kallenberg (Auburn, AL)
1994–2000: Erwin Bolthausen (Zurich)
2000–2005: Geoffrey Grimmett (Cambridge)
2005–2010: Jean-François Le Gall (Paris) and Jean Bertoin (Paris)
2010–2015: Gérard Ben Arous (New York) and Amir Dembo (Stanford)
2015–2020: Michel Ledoux (Toulouse) and Fabio Martinelli (Rome)
2021–2024: Fabio Toninelli (Vienna) and Bálint Tóth (Budapest and Bristol) |
https://en.wikipedia.org/wiki/Linear%20dichroism | Linear dichroism (LD) or diattenuation is the difference between absorption of light polarized parallel and polarized perpendicular to an orientation axis. It is the property of a material whose transmittance depends on the orientation of linearly polarized light incident upon it. As a technique, it is primarily used to study the functionality and structure of molecules. LD measurements are based on the interaction between matter and light and thus are a form of electromagnetic spectroscopy.
This effect has been applied across the EM spectrum, where different wavelengths of light can probe a host of chemical systems. The predominant use of LD currently is in the study of bio-macromolecules (e.g. DNA) as well as synthetic polymers.
Basic information
Linear polarization
LD uses linearly polarized light, which is light that has been polarized in one direction only. This produces a wave, the electric field vector, which oscillates in only one plane, giving rise to a classic sinusoidal wave shape as the light travels through space. By using light parallel and perpendicular to the orientation direction it is possible to measure how much more energy is absorbed in one dimension of the molecule relative to the other, providing information to the experimentalist.
As light interacts with the molecule being investigated, should the molecule start absorbing the light, electron density inside the molecule will be shifted as the electron becomes photoexcited. This movement of charge is known as an electronic transition, the direction of which is called the electric transition polarisation. It is this property that LD measures.
The LD of an oriented molecule can be calculated using the following equation:-
LD = A║ - A┴
where A║ is the absorbance parallel to the orientation axis and A┴ is the absorbance perpendicular to the orientation axis.
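For illustration, a minimal Python sketch computing LD from paired absorbance measurements (all values illustrative):

    measurements = {
        260: (0.51, 0.44),  # wavelength in nm: (A parallel, A perpendicular)
        280: (0.30, 0.29),
    }
    for wl, (a_par, a_perp) in sorted(measurements.items()):
        ld = a_par - a_perp  # positive LD: transition polarized nearer the orientation axis
        print(f"{wl} nm: LD = {ld:+.3f}")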
Note that light of any wavelength can be used to generate an LD signal.
The LD signal generated therefore has t |
https://en.wikipedia.org/wiki/Timeline%20of%20entomology%20%E2%80%93%201850%E2%80%931900 | 1850
Edmond de Sélys Longchamps . 6:1–408.
Victor Ivanovitsch Motschulsky . I. Insecta Carabica. Russian beetles, Carabidae, Moscow: Gautier, published.
1851
Johann Fischer von Waldheim and Eduard Friedrich Eversmann publish vol. 5 of Fischer von Waldheim's seminal work on Russian Lepidoptera.
Louis Agassiz.On the classification of insects from embryological data. Washington, published.
Francis Walker. Insecta Britannica Diptera 3 vols. London 1851-1856. The characters and synoptical tables of the order by Alexander Henry Haliday made this a seminal work of Dipterology.
Hans Hermann Behr emigrates from Germany to California.
1852
Achille Guenée . Paris, 1852–1857, published.
1853
Leopold Heinrich Fischer publishes his monograph on Orthoptera. Lipsiae (Leipzig): G. Engelmann, 1853. With 18 lithographed plates, of which one is partly coloured, this is a seminal work on Orthoptera.
Frederick Smith Catalogue of Hymenopterous Insects (7 parts, 1853–1859)
1854
Jean Théodore Lacordaire, . 9 vols published at Paris, 1854–1869 (completed by Félicien Chapuis, vols. 10–12, 1872–1876).
Carl Ludwig Koch , etc. Nurnburg commenced – completed 1857.
Ignaz Rudolph Schiner , 1–4 Verh. Zool. Bot. Ver. Wien. 4–8 263pp.(1854–1858) commenced.
Émile Blanchard (1819–1900) writes , a work on pest species. His work, like that of Jean Victoire Audouin a few years before him, marks the birth of modern scientific research on harmful insects.
Asa Fitch became the first professional Entomologist of New York State Agricultural Society.
1855
Camillo Rondani 1–5. Parma: Stochi 1146 pp. commenced (completed 1862)
Eduard Friedrich Eversmann first volume (completed 1859)
Henry Tibbats Stainton, Philipp Christoph Zeller, John William Douglas and Heinrich Frey commence The Natural History of the Tineina (13 volumes, 2000 pages), a monumental monographic work and one of the most significant lepidopterological works of the century.
1856
|
https://en.wikipedia.org/wiki/Compellent%20Technologies | Compellent Technologies, Inc., was an American manufacturer of enterprise computer data storage systems that provided block-level storage resources to small and medium sized IT infrastructures. The company was founded in 2002 and headquartered in Eden Prairie, Minnesota. Compellent's flagship product, Storage Center, is a storage area network (SAN) system that combines a standards-based hardware platform and a suite of virtualized storage management applications, including automated tiered storage through a proprietary process called "DataProgression", thin provisioning and replication. The company developed software and products aimed at mid-size enterprises and sold through a channel network of independent providers and resellers. Dell acquired the company in February 2011, after which it was briefly a subsidiary known as Dell Compellent.
History
Compellent Technologies was founded in 2002 by Phil Soran, John Guider, and Larry Aszmann. The three had network storage and virtualization backgrounds.
The company had its initial public offering on October 15, 2007, became profitable for the first time in Q3 2008, and remained profitable in consecutive quarters thereafter. On February 11, 2010, it announced that Q4 2009 revenues had increased 35 percent over Q4 2008, the company's 17th consecutive quarter of revenue growth, with full-year revenue of $125.3 million.
On December 13, 2010, Compellent announced it agreed to be acquired by Dell for approximately $960 million. The purchase was completed in February 2011 and the product line sold as Dell Compellent.
In the following years, Dell slowly phased out the Compellent brand name, naming the products simply Dell SCxxxx (for example, Dell SC9000).
Products
Storage Center
Compellent’s storage area network (SAN) system, called "Storage Center", combines several virtualized storage-management applications with hardware. The product tracks metadata, information about each block of data stored on the Compellent system, including the |
https://en.wikipedia.org/wiki/IBM%208100 | The IBM 8100 Information System, announced Oct. 3, 1978, was at one time IBM’s principal distributed processing engine, providing local processing capability under two incompatible operating systems (DPPX and DPCX) and was a follow-on to the IBM 3790.
The 8100, when used with the Distributed Processing Programming Executive (DPPX), was intended to provide turnkey distributed processing capabilities in a centrally controlled and managed network.
It never saw much success—one anonymous source, according to PC Magazine, called it a "boat anchor"—and became moribund when host-based networks went out of fashion.
This, coupled with IBM's recognition that they had too many hardware and software systems with similar processing power and function, led to the announcement in March 1986 that the 8100 line would not be expanded and that a new System/370-compatible processor line, the ES/9370, would be provided to replace it.
In March 1987, IBM announced that it intended to provide in 1989 a version of DPPX/SP that would run on the new ES/9370. A formal announcement followed in March 1988 of DPPX/370, a version of DPPX that executed on the ES/9370 family of processors. DPPX/370 was made available to customers in December 1988.
DPCX (Distributed Processing Control eXecutive) was mainly to support a word processing system, Distributed Office Support Facility (DOSF).
Architecture
The 8100 was a 32-bit processor, but its instruction set reveals its lineage as the culmination of a line of so-called Universal Controller processors, internally designated UC0 (8-bit), UC.5 (16-bit) and UC1 (32-bit). Each processor carried along the instruction set and architecture of the smaller processors, allowing programs written for a smaller processor to run on a larger one without change.
The 8100 had another interesting distinction in being one of the first commercially available systems to have a network with characteristics of what we now call local area networks, in particular the mechanism of packet |
https://en.wikipedia.org/wiki/Equals%20Pi | Equals Pi is a painting created by American artist Jean-Michel Basquiat in 1982. The painting was published in GQ magazine in 1983 and W magazine in 2018.
History
Equals Pi was executed by Jean-Michel Basquiat in 1982, which is considered his most coveted year. The robin egg blue painting contains Basquiat's signature crown motif and a head alongside his characteristic scrawled text with phrases such as "AMORITE," "TEN YEN" and "DUNCE." The title refers to the mathematical equations incorporated on the right side of the work. The cone refers to the pointed dunce caps depicted in the work.
The painting was acquired in 1982 by Anne Dayton, then the advertising manager of Artforum magazine. She purchased it for $7,000 from Basquiat's exhibition at the Fun Gallery in the East Village. At the time the painting was called Still Pi; however, when the work appeared in the March 1983 issue of GQ magazine, it was titled Knowledge of the Cone, a phrase written at the top of the painting.
According to reports in August 2021, the luxury jewelry brand Tiffany & Co. had recently acquired the painting privately from the Sabbadini family, for a price in the range of $15 million to $20 million. The painting, which is in the brand's signature blue color, is displayed in the Tiffany & Co. Landmark store on Fifth Avenue in New York City. Although initial reports claimed that the painting had never been seen before, it had previously been offered at auction twice and had appeared in magazines. The work was first offered at a Sotheby's sale in London in June 1990, where it went unsold. In December 1996, the Sabbadinis, a Milan-based clan behind the eponymous jewelry house, purchased it during a Sotheby's London auction for $253,000. Mother and daughter Stefania and Micól Sabbadini posed in front of the painting in their living room for a 2018 feature in W magazine. Stephen Torton, a former assistant of Basquiat's, posted an Instagram statement saying, “I designed and built stretchers, painted ba |
https://en.wikipedia.org/wiki/Cordyceps | Cordyceps is a genus of ascomycete fungi (sac fungi) that includes about 600 worldwide species. Diverse variants of cordyceps have had more than 1,500 years of use in Chinese medicine. Most Cordyceps species are endoparasitoids, parasitic mainly on insects and other arthropods (they are thus entomopathogenic fungi); a few are parasitic on other fungi.
The generic name Cordyceps is derived from the ancient Greek κορδύλη kordýlē, meaning "club", and the Latin -ceps, meaning "-headed". The genus has a worldwide distribution, with most of the approximately 600 known species being from Asia (notably Nepal, China, Japan, Bhutan, Korea, Vietnam, and Thailand).
Taxonomy
There are two recognized subgenera:
Cordyceps subgen. Cordyceps Fr. 1818
Cordyceps subgen. Cordylia Tul. & C. Tul. 1865
Cordyceps sensu stricto are the teleomorphs of a number of anamorphic, entomopathogenic fungus "genera" such as Beauveria (Cordyceps bassiana), Septofusidium, and Lecanicillium.
Splits
Cordyceps subgen. Epichloe was at one time a subgenus, but is now regarded as a separate genus, Epichloë.
Cordyceps subgen. Ophiocordyceps was at one time a subgenus defined by morphology. Nuclear DNA sampling done in 2007 showed that its members, including "C. sinensis" and "C. unilateralis", as well as some others not placed in the subgenus, were distantly related to most of the remaining species then placed in Cordyceps (e.g. the type species C. militaris). As a result, it became its own genus, absorbing new members.
The 2007 study also peeled off Metacordyceps (anamorph Metarhizium, Pochonia) and Elaphocordyceps. A number of species remain unclearly assigned and provisionally retained in Cordyceps sensu lato.
Biology
When Cordyceps attacks a host, the mycelium invades and eventually replaces the host tissue, while the elongated fruit body (ascocarp) may be cylindrical, branched, or of complex shape. The ascocarp bears many small, flask-shaped perithecia containing asci. These, in turn, contain |
https://en.wikipedia.org/wiki/Dextran%201 | Dextran 1 is a hapten inhibitor that greatly reduces the risk for anaphylactic reactions when administering dextran.
Mechanism
Dextran 1 is composed of a small fraction (1 kilodalton) of the entire dextran complex. This is enough to bind anti-dextran antibodies but insufficient to result in the formation of immune complexes and the resultant immune responses. Dextran 1 thereby binds antibodies against dextran without triggering the immune response, leaving fewer antibodies available to bind the entire dextran complex and reducing the risk of an immune response upon subsequent administration of dextran. |
https://en.wikipedia.org/wiki/Senescence-associated%20beta-galactosidase | Senescence-associated beta-galactosidase (SA-β-gal or SABG) is a hypothetical hydrolase enzyme that catalyzes the hydrolysis of β-galactosides into monosaccharides only in senescent cells. Senescence-associated beta-galactosidase, along with p16Ink4A, is regarded to be a biomarker of cellular senescence.
Its existence was proposed in 1995 by Dimri et al. following the observation that when beta-galactosidase assays were carried out at pH 6.0, only cells in senescence state develop staining. They proposed a cytochemical assay based on production of a blue-dyed precipitate that results from the cleavage of the chromogenic substrate X-Gal, which stains blue when cleaved by galactosidase. Since then, even more specific quantitative assays were developed for its detection at pH 6.0.
Today this phenomenon is explained by the overexpression and accumulation of the endogenous lysosomal beta-galactosidase specifically in senescent cells. Its expression is not required for senescence. However, it remains as the most widely used biomarker for senescent and aging cells, because it is easy to detect and reliable both in situ and in vitro. |
https://en.wikipedia.org/wiki/Timelike%20homotopy | On a Lorentzian manifold, certain curves are distinguished as timelike. A timelike homotopy between two timelike curves is a homotopy such that each intermediate curve is timelike. No closed timelike curve (CTC) on a Lorentzian manifold is timelike homotopic to a point (that is, null timelike homotopic); such a manifold is therefore said to be multiply connected by timelike curves (or timelike multiply connected). A manifold such as the 3-sphere can be simply connected (by any type of curve), and at the same time be timelike multiply connected. Equivalence classes of timelike homotopic curves define their own fundamental group, as noted by Smith (1967). A smooth topological feature which prevents a CTC from being deformed to a point may be called a timelike topological feature. |
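In standard notation, the definition sketched above can be restated as follows (a paraphrase of the text, not a quoted source): given timelike curves gamma_0 and gamma_1 in M, a timelike homotopy is a continuous map

    H : [0,1] \times [0,1] \to M, \qquad
    H(t,0) = \gamma_0(t), \quad H(t,1) = \gamma_1(t),

    \text{such that for each fixed } s \in [0,1] \text{ the intermediate curve } t \mapsto H(t,s) \text{ is timelike.}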
https://en.wikipedia.org/wiki/In%20My%20Genes | In My Genes is a Kenyan 2009 documentary film directed, written, produced and edited by Lupita Nyong'o in her directing debut.
Synopsis
How does one live as a pale person in a dominantly black society? What does one feel being one of the most visible persons and, probably, one of the most ignored? Agnes, an albino woman in Kenya, feels it daily. Ever since she was born, she has had to deal with the prejudices that surround albinos. In My Genes bears witness to the lives of eight people who suffer discrimination due to a simple genetic anomaly.
Awards
Festival de Cine Africano de México 2008 |
https://en.wikipedia.org/wiki/International%20Human%20Epigenomics%20Consortium | The International Human Epigenomics Consortium (IHEC) was launched in 2010 to coordinate global efforts in the field of epigenomics. IHEC aims to generate at least 1,000 reference baseline human epigenomes from different types of normal and disease-related human cell types. |
https://en.wikipedia.org/wiki/Associative%20array | In computer science, an associative array, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain. It supports 'lookup', 'remove', and 'insert' operations.
The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays.
The two major solutions to the dictionary problem are hash tables and search trees.
It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures.
Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern.
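For illustration, a minimal Python sketch of memoization built on the language's native associative array (the dict); all names are illustrative:

    def memoize(fn):
        cache = {}                       # associative array: argument tuple -> result
        def wrapped(*args):
            if args not in cache:        # 'lookup' operation
                cache[args] = fn(*args)  # 'insert' operation
            return cache[args]
        return wrapped

    @memoize
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(90))  # fast, because intermediate results are cached by key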
The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors.
Operations
In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association.
The operations that are usually defined for an associative array are:
Insert or put: add a new pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value.
Remove or delete: remove a pair from the collection, unmapping a given key from its value. The argument to this operation is the key.
Lookup, find, or get: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the va |
https://en.wikipedia.org/wiki/Delayed%20onset%20of%20lactation | Delayed onset of lactation (DOL) describes the absence of copious milk secretion (onset of lactation) within the first 72 hours following childbirth. It affects around 20–40% of lactating women; the prevalence differs among distinct populations.
The onset of lactation (OL), also referred to as stage II lactogenesis or secretory activation, is one of the three stages of the milk production process. OL is the stage when plentiful production of milk is initiated following the delivery of a full-term infant. It is stimulated by an abrupt withdrawal of progesterone and elevation of prolactin levels after the complete expulsion of the placenta. The other two stages of milk production are stage I lactogenesis and stage III lactogenesis. Stage I lactogenesis refers to the initiation of the mammary glands' synthetic capacity, indicated by the onset of colostrum production during pregnancy. Stage III lactogenesis refers to the continuous supply of mature milk from day nine postpartum until weaning.
Late-onset of lactogenesis II can be provoked by a variety of pathophysiological, psychological, external and mixed causes. The delay of the process is associated with a range of complications such as excessive neonatal weight loss and early cessation of breastfeeding, which can lead to undesirable outcomes for the infant and the mother. These problems can be addressed by different interventions targeting the underlying cause of the delay.
Diagnosis
Women who experience delayed OL report the absence of typical onset signs, including breast swelling, breast heaviness and the sense of breast milk "coming in", within the first 72 hours postpartum; nevertheless, some reports suggest that the sensation of milk "coming in (to the breasts)" results from a milk production overshoot instead. Clinically, obstetricians may look for biomarkers to determine the onset of lactation. Some common biomarkers for the determination of secretory activation include:
A drop in progesteron |
https://en.wikipedia.org/wiki/Hannah%20Holliday%20Stewart | Hannah Holliday Stewart (January 25, 1924 – February 23, 2010) was an American abstract sculptor who was a prominent member of the Houston art scene and exhibited across the United States, including at the Smithsonian Institution. She was part of a generation of second-wave feminist artists who incorporated ancient myths and goddess imagery into their work, depicting the woman as a dominant player in a new societal order. Twenty years before her death, Stewart moved to Albuquerque, New Mexico to live in seclusion. She continued to produce sculptures, primarily in bronze, until her death in 2010.
Stewart was born in Birmingham, Alabama. She grew up in a family of wealthy socialites, but quickly shed this identity upon leaving home. After getting her graduate degree from the Cranbrook Academy of Art, she moved to Houston. There she received a public art commission for a sculpture in Hermann Park, an unlikely honor for an abstract female sculptor at the time and the first of many monumental works she would produce throughout her career.
The artist was inspired by both mythology and science. In a handwritten statement discovered in her personal files after her death she wrote:
(An) early interest in natural forces has sustained me throughout my life as a sculptor. My goal is to render visible the hidden realities of pent-up contained energy. The direct fields of reference are Sacred Geometry, Astronomy, Myth & Physics ... Each Sculpture is an energy form, the movement arrested in space, a form sustaining an energy. My work is a response to these patterns and delineations and communicates with viewers through the universality of symbolism and form.
The sculptor's work is characterized by dynamic ascending lines and rough planes. The titles of her abstract works and the subject matter of her figurative works often refer to mythical and historic female heroes, from Egyptian queens to Greek goddesses.
Stewart's oeuvre explores ideas of the sacred feminine, intersecting |
https://en.wikipedia.org/wiki/Eliza%20Catherine%20Jelly | Eliza Catherine Jelly (28 September 1829 - 3 November 1914) was an English bryozoologist. She was one of the first women to work and publish in the field of bryozoology. Her 1889 text The Synonymic Catalogue of the Recent Marine Bryozoa is still used as reference material.
Early life
Eliza Catherine Jelly was born in Bath, Somerset, the daughter of Harry Jelly, an Anglican clergyman, and Eliza Jelly (née Cave), who came from a family of builders in Bath. Her father Harry, orphaned as an infant, was a naturalist and had long been interested in paleontology, and frequently went searching for fossils, plants, and insects. He is recorded as having donated fossils from Wiltshire to the Bath Literary and Philosophical Institute in 1826. He later took a fossil-collecting trip to Jamaica and donated these specimens to the Geological Society of London in September 1839.
The Jelly family lived in Bath and Bristol until Eliza was about 13 years old. The family later moved to Devon where Eliza resided until 1860 when her mother died. After her death, Jelly lived in the household of Colonel William Stewart at Eldon Villa in Bristol, as a governess and a 'lady's companion'. After Stewart died in 1865 and left Eliza £400, she moved to the Wirral Peninsula in Cheshire.
Career
Jelly's first and only scientific publication, a list of both land and freshwater mollusks of Bristol, was published while she was living at Eldon Villa. It was published under the name E.C. Jellie, using a spelling of her surname her brother had adopted but which Eliza later reverted.
Between 1870 and 1880, Jelly sent a series of letters to the botanist Edward Adolphus Holmes, five of which are preserved in the archives of the Linnean Society of London. In 1870 she discussed the moss Dicranella fallax (Wilson, 1870) that she had found in "a deep[-]ish ditch, down close to the water & hidden by grass". Robert Braithwaite, the bryologist, referred to her discovery of D. fallax, as well as another moss |
https://en.wikipedia.org/wiki/Gulsum%20Asfendiyarova | Gulsum Asfendiyarova (November 12, 1880, Tashkent – November 1937, Tashkent) was the first Kazakh woman medical doctor, an organizer of the health care system in the Turkestan region, and a teacher.
Life and career
She was the third daughter of Seitzhafar Asfendiyarov, a great-grandson of Aishuak Khan (who ruled the Younger Horde), and Gulyandam (maiden surname Kasymova). Her father served as a military interpreter under the Turkestan Governor-General and retired in 1916 with the rank of major general. Gulsum received her primary education at home, like her brothers and sisters. In 1890, at the age of ten, she entered the Tashkent women's gymnasium, from which she successfully graduated in 1899.
In 1897, the first Women's Medical Institute in Europe was opened in St. Petersburg, where women could receive higher medical education. Some officials of the Turkestan region hastened to identify their daughters here. However, due to the remoteness of the capital, training was quite expensive and not everyone could afford it.
In 1902, when two professional doctors, both graduates of the Institute, returned to Turkestan, the Council under the Governor-General established 10 scholarships assigned to girls from the Turkestan region who entered this Institute. Alongside the daughters of Russian officials, two Kazakh girls, Zeyneb Abdurakhmanova and Gulsum Asfendiyarova, were able to receive a scholarship in the same year. After graduating from the Institute in 1908 and returning to work at home, they became the first female doctors among the indigenous inhabitants of the newly created province.
After working for several years, Zeyneb Abdurakhmanova married and left. By contrast, the fate and career of G. Asfendiyarova remained closely connected with Turkestan, and the title of "first" has traditionally been assigned to her.
After graduating from the Institute, Asfendiyarova filed a petition addressed to Emperor Nikolay II:
Having the desire to enter the service of Your Majesty as a doctor in the Turkest |
https://en.wikipedia.org/wiki/Duke%20University%20Institute%20for%20Genome%20Sciences%20and%20Policy | The Duke Institute for Genome Sciences and Policy (IGSP) is an institution established at Duke University to address the many issues in science and policy that the Genome Revolution and recent advances in genome science are expected to create. It is located in the CIEMAS building at Duke University and houses some well-known researchers in the genomics field, including Huntington F. Willard, the director of the IGSP. |
https://en.wikipedia.org/wiki/RNAi%20Global%20Initiative | The RNAi Global Initiative is an alliance of international biomedical researchers that has been established to increase and accelerate the utility of genome-wide RNAi libraries.
Genome-wide RNAi screening has the potential to fundamentally change biological research by increasing scientists' ability to understand disease mechanisms and facilitating faster drug discovery and development. The RNAi Global Initiative provides a forum for member institutions to share research protocols, establish experimental standards, and develop mechanisms for exchanging and comparing screening data.
This ongoing interaction between the RNAi Global Initiative members is expected to help researchers optimize high-throughput human-genome-wide RNAi screening and accelerate drug discovery. Membership is open to non-profit biomedical research institutions across the globe.
The RNAi Global Initiative was established and is being coordinated under the auspices of the Dharmacon Product line of GE Healthcare, whose Research and Development scientists actively contribute to the Initiative.
MIARE
Through collaboration and the meaningful exchange of information and data, the RNAi Global Initiative intends to draw a comprehensive roadmap of human gene function and use this as a foundation to revolutionize the way medicine and healthcare are delivered.
To this end, members of the RNAi Global Initiative are actively engaged in promoting the concept and implementation of minimum information standards to facilitate data sharing within the extended RNAi community. Building on established standards such as MIAME (Minimum Information About a Microarray Experiment), the RNAi Global Initiative has contributed work towards a community-wide effort known as the Minimum Information About an RNAi Experiment (MIARE). These reporting guidelines were developed in part by a large inter-laboratory benchmarking study and in part by workshops and discussions amongst the RNAi Global Initiative members.
Member In |
https://en.wikipedia.org/wiki/WAP%20gateway | A WAP gateway sits between mobile devices using the Wireless Application Protocol (WAP) and the World Wide Web, passing pages from one to the other much like a proxy. This translates pages into a form suitable for the mobiles, for instance using the Wireless Markup Language (WML). This process is hidden from the phone, so it may access the page in the same way as a browser accesses HTML, using a URL (for example, http://example.com/foo.wml), provided the mobile phone operator has not specifically prevented this. WAP gateway software encodes and decodes requests and responses between the smartphone's microbrowser and the internet. It decodes the encoded WAP requests from the microbrowser and sends the corresponding HTTP requests to the internet or to a local application server. It also encodes the WML and HDML data returning from the web for transmission to the microbrowser in the handset. |
https://en.wikipedia.org/wiki/Abyssal%20zone | The abyssal zone or abyssopelagic zone is a layer of the pelagic zone of the ocean. The word abyss comes from the Greek word ἄβυσσος (ábyssos), meaning "bottomless". At depths of 4,000 to 6,000 metres, this zone remains in perpetual darkness. It covers 83% of the total area of the ocean and 60% of Earth's surface. The abyssal zone has temperatures of around 2 to 3 °C through the large majority of its mass. The water pressure can reach up to about 60 MPa (roughly 600 times atmospheric pressure).
Due to the absence of light, there are no plants producing oxygen, which instead comes primarily from cold, oxygen-rich surface water that sank from the polar regions long ago. The water along the seafloor of this zone is actually devoid of oxygen, resulting in a death trap for organisms unable to quickly return to the oxygen-enriched water above or survive in the low-oxygen environment. This region also contains a much higher concentration of nutrient salts, such as nitrogen, phosphorus, and silica, due to the large amount of dead organic material that drifts down from the ocean zones above and decomposes.
The area below the abyssal zone is the sparsely inhabited hadal zone. The zone above is the bathyal zone.
Trenches
The deep trenches or fissures that plunge thousands of meters below the ocean floor (for example, the mid-oceanic trenches such as the Mariana Trench in the Pacific) are almost unexplored. Previously, only the bathyscaphe Trieste, the remotely operated submarine Kaikō and the Nereus had been able to descend to these depths. However, as of March 25, 2012, one vehicle, the Deepsea Challenger, was able to descend to a depth of 10,898.4 meters (35,756 ft).
Ecosystem
The relative sparsity of primary producers means that the majority of organisms living in the abyssal zone depend on the marine snow that falls from oceanic layers above. The biomass of the abyssal zone actually increases near the seafloor as most of the decomposing material and decomposers rest on the seabed.
The composition of the abyssal plain depends on the depth of the sea floor. Above 4000 meters the seafloor |
https://en.wikipedia.org/wiki/McDonald%20criteria | The McDonald criteria are diagnostic criteria for multiple sclerosis (MS). These criteria are named after neurologist W. Ian McDonald who directed an international panel in association with the National Multiple Sclerosis Society (NMSS) of America and recommended revised diagnostic criteria for MS in April 2001. These new criteria intended to replace the Poser criteria and the older Schumacher criteria. They have undergone revisions in 2005, 2010 and 2017.
They maintain the Poser requirement to demonstrate "dissemination of lesions in space and time" (DIS and DIT) but they discourage the previously used Poser terms such as "clinically definite" and "probable MS", and propose as diagnostic either "MS", "possible MS", or "not MS".
The McDonald criteria maintained a scheme for diagnosing MS based solely on clinical grounds but also proposed, for the first time, that when clinical evidence is lacking, magnetic resonance imaging (MRI) findings can serve as surrogates for dissemination in space (DIS) and/or time (DIT) to diagnose MS. The criteria try to prove the existence of demyelinating lesions, by imaging or by their effects, showing that they occur in different areas of the nervous system (DIS) and that they accumulate over time (DIT). The McDonald criteria facilitate the diagnosis of MS in patients who present with their first demyelinating attack and significantly increase the sensitivity for diagnosing MS without compromising the specificity.
The McDonald criteria for the diagnosis of multiple sclerosis were revised first in 2005 to clarify exactly what is meant by an "attack", "dissemination" and a "positive MRI", etc. Later they were revised again in 2017.
The McDonald criteria are the standard clinical case definition for MS, and the 2010 version is regarded as the gold standard test for MS diagnosis.
Diagnostic Criteria
They discourage the previously used terms such as "clinically definite" and "probable MS", and propose diagnostic variants like "MS", "possibl |
https://en.wikipedia.org/wiki/Nd%3AYAG%20laser | Nd:YAG (neodymium-doped yttrium aluminum garnet; Nd:Y3Al5O12) is a crystal that is used as a lasing medium for solid-state lasers. The dopant, triply ionized neodymium, Nd(III), typically replaces a small fraction (1%) of the yttrium ions in the host crystal structure of the yttrium aluminum garnet (YAG), since the two ions are of similar size. It is the neodymium ion which provides the lasing activity in the crystal, in the same fashion as red chromium ion in ruby lasers.
Laser operation of Nd:YAG was first demonstrated by J.E. Geusic et al. at Bell Laboratories in 1964.
Technology
Nd:YAG lasers are optically pumped using a flashtube or laser diodes. These are one of the most common types of laser, and are used for many different applications.
Nd:YAG lasers typically emit light with a wavelength of 1064 nm, in the infrared. However, there are also transitions near 946, 1120, 1320, and 1440 nm. Nd:YAG lasers operate in both pulsed and continuous mode. Pulsed Nd:YAG lasers are typically operated in the so-called Q-switching mode: An optical switch is inserted in the laser cavity waiting for a maximum population inversion in the neodymium ions before it opens. Then the light wave can run through the cavity, depopulating the excited laser medium at maximum population inversion. In this Q-switched mode, output powers of 250 megawatts and pulse durations of 10 to 25 nanoseconds have been achieved. The high-intensity pulses may be efficiently frequency doubled to generate laser light at 532 nm, or higher harmonics at 355, 266 and 213 nm.
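The harmonic wavelengths quoted above follow from dividing the 1064 nm fundamental; a quick Python check (illustrative only):

    fundamental_nm = 1064.0
    for n in (2, 3, 4, 5):
        print(f"harmonic {n}: {fundamental_nm / n:.1f} nm")
    # -> 532.0, 354.7, 266.0 and 212.8 nm, matching the rounded values in the text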
Nd:YAG absorbs mostly in the bands between 730–760 nm and 790–820 nm. At low current densities krypton flashlamps have higher output in those bands than do the more common xenon lamps, which produce more light at around 900 nm. The former are therefore more efficient for pumping Nd:YAG lasers.
The amount of the neodymium dopant in the material varies according to its use. For continuous wave output, the doping is significantly lowe |
https://en.wikipedia.org/wiki/Mosaic%20evolution | Mosaic evolution (or modular evolution) is the concept, mainly from palaeontology, that evolutionary change takes place in some body parts or systems without simultaneous changes in other parts. Another definition is the "evolution of characters at various rates both within and between species". Its place in evolutionary theory comes under long-term trends or macroevolution.
Background
In the neodarwinist theory of evolution, as postulated by Stephen Jay Gould, there is room for differing development, when a life form matures earlier or later, in shape and size. This is due to allomorphism. Organs develop at differing rhythms, as a creature grows and matures. Thus a "heterochronic clock" has three variants: 1) time, as a straight line; 2) general size, as a curved line; 3) shape, as another curved line.
When a creature is advanced in size, it may develop at a smaller rate. Alternatively, it may maintain its original size or, if delayed, it may end up as a larger creature. Size alone, however, is insufficient to understand the heterochronic mechanism.
Size must be combined with shape, so a creature may retain paedomorphic features if advanced in shape, or present a recapitulatory appearance when retarded in shape. These names are not very indicative, as past theories of development were very confusing.
A creature in its ontogeny may combine heterochronic features in six vectors, although Gould considers that there is some binding with growth and sexual maturation. A creature may, for example, present some neotenic features and retarded development, resulting in new features derived from an original creature only by regulatory genes. Most novel human features (compared to closely related apes) were of this nature, not implying major change in structural genes, as was classically considered.
Taxonomic range
It is not claimed that this pattern is universal, but there is now a wide range of examples from many different taxa, including:
Hominid evolution: the early evolution of hominins, in which bipedal locomotion was established well before the expansion of the brain.
https://en.wikipedia.org/wiki/AN/UYK-8 | The AN/UYK-8 was a UNIVAC computer.
Development
In April 1967, UNIVAC received a contract from the U.S. Navy for design, development, testing and delivery of the AN/UYK-8 microelectronics computer for use with the AN/TYA-20.
The AN/UYK-8 was built to replace the CP-808 (a Marine Corps air-cooled AN/USQ-20 variant) in the Beach Relay Link-11 communication system, and the AN/TYQ-3 in an AN/TYA-20.
Technical
It used the same 30-bit words and instruction set as the AN/USQ-17 and AN/USQ-20 Naval Tactical Data System (NTDS) computers, built with "first generation integrated circuits". This made it about one quarter of the volume of the AN/USQ-20. It had two processors instead of just one.
Instructions were represented as 30-bit words, in the following format (a decoding sketch follows the list):
f (6 bits): function code
j (3 bits): jump condition designator
k (3 bits): partial word designator
b (3 bits): which of the seven index (B) registers to use (B0 = none used)
s (2 bits): which 5-bit S register to use: S0, S1, S2, S3 (P(17-13))
y (13 bits): operand address in memory
memory address = Bb + Ss + y, an 18-bit address (262,144 words)
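To make the field layout concrete, here is a minimal Python sketch that unpacks a 30-bit instruction word and forms the effective address per the formula above; the assumption that f occupies the most significant bits (with y in the least significant) is illustrative, since the source does not give bit positions:

```python
def decode_instruction(word: int) -> dict:
    """Split a 30-bit instruction word into its six fields.
    Field order (f high ... y low) is assumed for illustration."""
    assert 0 <= word < (1 << 30)
    return {
        "f": (word >> 24) & 0o77,   # 6-bit function code
        "j": (word >> 21) & 0o7,    # 3-bit jump condition designator
        "k": (word >> 18) & 0o7,    # 3-bit partial word designator
        "b": (word >> 15) & 0o7,    # 3-bit index register selector (0 = none)
        "s": (word >> 13) & 0o3,    # 2-bit S register selector
        "y": word & 0x1FFF,         # 13-bit operand address
    }

def effective_address(fields: dict, b_regs: list, s_regs: list) -> int:
    """memory address = Bb + Ss + y, masked to 18 bits (262,144 words),
    following the formula as stated above."""
    base = b_regs[fields["b"]] if fields["b"] != 0 else 0
    return (base + s_regs[fields["s"]] + fields["y"]) & 0x3FFFF
```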
Numbers were represented as full 30-bit words; this allowed for five 6-bit alphanumeric characters per word.
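A short sketch of that five-character packing, assuming the first character occupies the most significant 6 bits (an assumption; the source specifies neither the ordering nor the character code):

```python
def pack_chars(codes: list) -> int:
    """Pack five 6-bit character codes into one 30-bit word,
    first character in the most significant position (assumed)."""
    assert len(codes) == 5 and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << 6) | c
    return word

def unpack_chars(word: int) -> list:
    """Recover the five 6-bit codes from a 30-bit word."""
    return [(word >> shift) & 0o77 for shift in (24, 18, 12, 6, 0)]
```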
The main memory was increased to 262,144 words (256K words) of magnetic core memory.
The available processor registers were:
one 30-bit arithmetic (A) register.
a contiguous 30-bit Q register (a total of 60 bits for the result of a multiplication or the dividend in a division; see the sketch after this list).
seven 30-bit index (B) registers.
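A minimal sketch of how the A and Q registers pair up for 60-bit results, assuming the high half of a product lands in A (an assumed convention; the source states only the combined width):

```python
MASK30 = (1 << 30) - 1  # 30-bit word mask

def multiply(a: int, q: int) -> tuple:
    """Multiply two 30-bit operands; the 60-bit product is returned as
    the (A, Q) register pair, high half in A (assumed convention)."""
    product = (a & MASK30) * (q & MASK30)
    return ((product >> 30) & MASK30, product & MASK30)
```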
See also
List of UNIVAC products
History of computing hardware |
https://en.wikipedia.org/wiki/Geosynchronous%20satellite | A geosynchronous satellite is a satellite in geosynchronous orbit, with an orbital period the same as the Earth's rotation period. Such a satellite returns to the same position in the sky after each sidereal day, and over the course of a day traces out a path in the sky that is typically some form of analemma. A special case of geosynchronous satellite is the geostationary satellite, which has a geostationary orbit – a circular geosynchronous orbit directly above the Earth's equator. Another type of geosynchronous orbit used by satellites is the Tundra elliptical orbit.
Geostationary satellites have the unique property of remaining permanently fixed in exactly the same position in the sky as viewed from any fixed location on Earth, meaning that ground-based antennas do not need to track them but can remain fixed in one direction. Such satellites are often used for communication purposes; a geosynchronous network is a communication network based on communication with or through geosynchronous satellites.
Definition
The term geosynchronous refers to the satellite's orbital period being matched to the rotation of the Earth ("geo-"). Along with this orbital-period requirement, to be geostationary as well, the satellite must be placed in an orbit directly above the equator. These two requirements make the satellite appear in an unchanging area of visibility when viewed from the Earth's surface, enabling continuous operation from one point on the ground. The special case of a geostationary orbit is the most common type of orbit for communications satellites.
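For intuition, the required orbital radius follows from Kepler's third law with the sidereal day as the period; a minimal Python sketch using standard textbook constants (the values are not taken from this article):

```python
import math

MU_EARTH = 3.986004418e14       # Earth's gravitational parameter GM, m^3/s^2
SIDEREAL_DAY = 86164.0905       # Earth's rotation period, s
EQUATORIAL_RADIUS = 6.378137e6  # Earth's equatorial radius, m

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu*(T/2pi)^2)^(1/3)
a = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1 / 3)

print(f"orbital radius: {a / 1e3:,.0f} km")                        # ~42,164 km
print(f"altitude:       {(a - EQUATORIAL_RADIUS) / 1e3:,.0f} km")  # ~35,786 km
```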
If a geosynchronous satellite's orbit is not exactly aligned with the Earth's equator, the orbit is known as an inclined orbit. It will appear (when viewed by someone on the ground) to oscillate daily around a fixed point. As the angle between the orbit and the equator decreases, the magnitude of this oscillation becomes smaller; when the orbit lies entirely over the equator, the satellite remains fixed relative to the Earth's surface.
https://en.wikipedia.org/wiki/Ungermann-Bass | Ungermann-Bass, also known as UB and UB Networks, was a computer networking company in the 1980s to 1990s. Located in Santa Clara, California, UB was the first large networking company independent of any computer manufacturer. Along with competitors 3Com and Sytek, UB was responsible for starting the networking business in Silicon Valley in 1979. UB was founded by Ralph Ungermann and Charlie Bass. John Davidson, vice president of engineering, was one of the creators of NCP, the transport protocol of the ARPANET before TCP.
UB specialized in large enterprise networks connecting computer systems and devices from multiple vendors, which was unusual in the 1980s. At that time most network equipment came from computer manufacturers and usually used only protocols compatible with that one manufacturer's computer systems, such as IBM's SNA or DEC's DECnet. Many UB products initially used the XNS protocol suite, including the flagship Net/One, and later transitioned to TCP/IP as it became an industry standard in the late 1980s.
Before it became the industry standard, the Internet protocol suite TCP/IP was initially a "check box" item needed to qualify on prospective enterprise sales. As a network technology supplier to both Apple Inc. and Microsoft, in 1987-88 UB helped Apple implement their initial MacTCP offering and also helped Microsoft with a Winsock compatible software/hardware bundle for the Microsoft Windows platform. With the success of these offerings and of the Internet protocol TCP/IP, both Apple and Microsoft subsequently brought the Internet technology in-house and integrated it into their core products.
UB marketed a broadband (in the original technical sense) version of Ethernet known as 10BROAD36 in the mid-1980s. It was generally seen as hard to install. UB was one of the first network manufacturers to sell equipment that implemented Ethernet over twisted-pair wiring. UB's AccessOne product line initially used the pre-standard StarLAN and later moved to 10BASE-T when it became standardized.
https://en.wikipedia.org/wiki/Silver%27s%20dichotomy | In descriptive set theory, a branch of mathematics, Silver's dichotomy (also known as Silver's theorem) is a statement about equivalence relations, named after Jack Silver.
Statement and history
A relation is said to be coanalytic if its complement is an analytic set. Silver's dichotomy is a statement about the equivalence classes of a coanalytic equivalence relation: any coanalytic equivalence relation either has countably many equivalence classes, or else there is a perfect set of reals that are pairwise inequivalent. In the latter case, there must be uncountably many equivalence classes of the relation.
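In symbols, the dichotomy is commonly stated as follows, writing Δ(A) for the identity relation on A and ≤_B for Borel reducibility (a standard formulation, not quoted from this article):

```latex
% Silver's dichotomy: for a coanalytic equivalence relation E
% on a Polish space, exactly one of the two reducibilities holds.
% (Requires amsmath and amssymb.)
\[
  E \;\leq_B\; \Delta(\mathbb{N})
  \qquad\text{or}\qquad
  \Delta\!\left(2^{\mathbb{N}}\right) \;\leq_B\; E .
\]
% The first case says E has at most countably many classes; the second
% yields a perfect set of pairwise E-inequivalent reals.
```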
The first published proof of Silver's dichotomy was by Jack Silver, appearing in 1980 in order to answer a question posed by Harvey Friedman. One application of Silver's dichotomy in recursive set theory is that, since equality restricted to a coanalytic set is itself coanalytic, there is no Borel equivalence relation lying strictly between Δ(ℕ) and Δ(2^ℕ) under Borel reducibility. Some later results motivated by Silver's dichotomy founded a new field known as invariant descriptive set theory, which studies definable equivalence relations. Silver's dichotomy also admits several weaker recursive versions, which have been compared in strength with subsystems of second-order arithmetic from reverse mathematics; the full dichotomy itself is provably equivalent to a strong subsystem over a suitable base theory.