https://en.wikipedia.org/wiki/HAZUS
|
Hazus is a geographic information system-based natural hazard analysis tool developed and freely distributed by the Federal Emergency Management Agency (FEMA).
In 1997 FEMA released its first edition of a commercial off-the-shelf loss and risk assessment software package built on GIS technology. This product was termed HAZUS97. The current version is Hazus-MH 4.0 (where MH stands for 'Multi-Hazard') and was released in 2017. Currently, Hazus can model multiple types of hazards: flooding, hurricanes, coastal surge, tsunamis, and earthquakes. The model estimates the risk in three steps. First, it calculates the exposure for a selected area. Second, it characterizes the level or intensity of the hazard affecting the exposed area. Lastly, it uses the exposed area and the hazard to calculate the potential losses in terms of economic losses, structural damage, etc.
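The three-step estimate described above (exposure, hazard intensity, potential loss) can be sketched as a toy calculation. This is purely illustrative: the function and damage curve below are invented for this sketch and are not part of the actual Hazus methodology.

```python
def estimate_losses(building_costs, hazard_intensity, damage_curve):
    """Toy three-step loss estimate in the spirit of the Hazus workflow.

    building_costs: list of replacement costs for the selected area
    hazard_intensity: scalar hazard level (e.g. flood depth in metres)
    damage_curve: maps intensity to a damage ratio in [0, 1]
    """
    # Step 1: calculate the exposure for the selected area
    exposure = sum(building_costs)
    # Step 2: characterize the hazard level affecting the exposed area
    damage_ratio = damage_curve(hazard_intensity)
    # Step 3: combine exposure and hazard into a potential economic loss
    return exposure * damage_ratio

# Hypothetical damage curve: damage grows with flood depth, capped at total loss
curve = lambda depth_m: min(1.0, 0.2 * depth_m)
print(estimate_losses([250_000, 400_000], 2.0, curve))  # 260000.0
```

Real Hazus runs replace each step with calibrated inventories, hazard maps, and empirically fitted damage functions.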
Although it was developed with the continental United States in focus, the Hazus toolset has been adopted by emergency management organizations worldwide, including in Singapore, Canada, Australia, and Pakistan.
Description
Hazus provides a US nationally applicable, standardized methodology that contains models for estimating potential losses from earthquakes, floods and hurricanes. Hazus uses Geographic Information Systems (GIS) technology to estimate the physical, economic and social impacts of disasters. It graphically illustrates the limits of identified high-risk locations due to earthquakes, hurricanes and floods. Users can then visualize the spatial relationships between populations and other more permanently fixed geographic assets or resources for the specific hazard being modeled, a crucial function in the pre-disaster planning process.
Hazus is used for mitigation and recovery, as well as preparedness and response. Government planners, GIS specialists and emergency managers use Hazus to determine losses and the most beneficial mitigation approaches to take to minimize them. Hazus can be used in the assessment step in the mitigation p
|
https://en.wikipedia.org/wiki/Protein-fragment%20complementation%20assay
|
Within the field of molecular biology, a protein-fragment complementation assay, or PCA, is a method for the identification and quantification of protein–protein interactions. In the PCA, the proteins of interest ("bait" and "prey") are each covalently linked to fragments of a third protein (e.g. DHFR, which acts as a "reporter"). Interaction between the bait and the prey proteins brings the fragments of the reporter protein in close proximity to allow them to form a functional reporter protein whose activity can be measured. This principle can be applied to many different reporter proteins and is also the basis for the yeast two-hybrid system, an archetypical PCA assay.
Split protein assays
Any protein that can be split into two parts and reconstituted non-covalently to form a functional protein may be used in a PCA. The two fragments, however, have low affinity for each other and must be brought together by other interacting proteins fused to them (often called "bait" and "prey", since the bait protein can be used to identify a prey protein, see figure). The protein that produces a detectable readout is called the "reporter". Usually enzymes that confer resistance to nutrient deprivation or antibiotics, such as dihydrofolate reductase or beta-lactamase respectively, or proteins that give colorimetric or fluorescent signals, are used as reporters. When fluorescent proteins are reconstituted, the PCA is called a bimolecular fluorescence complementation assay. The following proteins have been used in split-protein PCAs:
Beta-lactamase
Dihydrofolate reductase (DHFR)
Focal adhesion kinase (FAK)
Gal4, a yeast transcription factor (as in the classical yeast two-hybrid system)
GFP (split-GFP), e.g. EGFP (enhanced green fluorescent protein)
Horseradish peroxidase
Infrared fluorescent protein IFP1.4, an engineered chromophore-binding domain (CBD) of a bacteriophytochrome from Deinococcus radiodurans
LacZ (beta-galactosidase)
Luciferase, including ReBiL (recombinase enhanc
|
https://en.wikipedia.org/wiki/Ciona%20intestinalis
|
Ciona intestinalis (sometimes known by the common name of vase tunicate) is an ascidian (sea squirt), a tunicate with a very soft tunic. Its Latin name literally means "pillar of intestines", referring to the fact that its body is a soft, translucent, column-like structure resembling a mass of intestines sprouting from a rock. It is a globally distributed cosmopolitan species. Since Linnaeus described the species, Ciona intestinalis has been used as a model invertebrate chordate in developmental biology and genomics. Studies conducted between 2005 and 2010 have shown that there are at least two, possibly four, sister species. More recently it has been shown that one of these species has already been described as Ciona robusta. By anthropogenic means, the species has invaded various parts of the world and is known as an invasive species.
Although Linnaeus first categorised this species as a kind of mollusk, Alexander Kovalevsky found a tadpole-like larval stage during development that shows similarity to vertebrates. Recent molecular phylogenetic studies as well as phylogenomic studies support that sea squirts are the closest invertebrate relatives of vertebrates. Its full genome has been sequenced using a specimen from Half Moon Bay in California, US, showing a very small genome size, less than 1/20 of the human genome, but having a gene corresponding to almost every family of genes in vertebrates.
Description
Ciona intestinalis is a solitary tunicate with a cylindrical, soft, gelatinous body, up to long. The body colour and colour at the distal end of siphons are major external characters distinguishing sister species within the species complex.
The body of Ciona is bag-like and covered by a tunic, which is a secretion of the epidermal cells. The body is attached by a permanent base located at the posterior end, while the opposite extremity has two openings, the buccal and atrial siphons. Water is drawn into the ascidian through the buccal (oral) siphon and l
|
https://en.wikipedia.org/wiki/Flat%20Display%20Mounting%20Interface
|
The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel monitors, televisions, and other displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and televisions.
As well as being used for mounting monitors, the standards can be used to attach a small PC to the monitor mount.
The first standard in this family was introduced in 1997 and was originally called the Flat Panel Monitor Physical Mounting Interface (FPMPMI); it corresponds to part D of the current standard.
Variants
Most sizes of VESA mount have four screw holes arranged in a square on the mount, with matching tapped holes on the device. The horizontal and vertical distances between the screw centres are labelled 'A' and 'B' respectively. The original layout was a 100 mm × 100 mm square; a 75 mm × 75 mm square was defined for smaller displays. Later, variants were added for even smaller screens.
The FDMI was extended in 2006 with additional screw patterns that are more appropriate for larger TV screens. Thus the standard now specifies seven sizes, each with more than one variant. These are referenced as parts B to F of the standard or with official abbreviations, usually prefixed by the word "VESA".
Unofficially, the variants are sometimes referenced as just "VESA" followed by the pattern size in mm, which is slightly ambiguous for the names "VESA 50" (four possibilities), "VESA 75" (two possibilities) and "VESA 200" (three possibilities). However, if "VESA 100" is accepted as meaning the original variant ("VESA MIS-D, 100"), then all but "VESA MIS-E" and "VESA MIS-F, 200" have at least one unique dimension that can be used in this way, as can be seen from the tables below.
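The naming ambiguity described above can be made concrete with a small lookup table. The entries below are an illustrative subset covering the widely cited part D–F patterns; the full standard defines more variants (including parts B and C) and associated weight limits, so treat these values as a sketch rather than the authoritative table.

```python
# Illustrative subset of FDMI hole patterns (hole spacing in mm).
# Consult the VESA standard for the complete set of variants and screw/weight specs.
FDMI_PATTERNS = {
    "MIS-D, 75":  {"width": 75,  "height": 75,  "screw": "M4"},
    "MIS-D, 100": {"width": 100, "height": 100, "screw": "M4"},
    "MIS-E":      {"width": 200, "height": 100, "screw": "M4"},
    "MIS-F, 400": {"width": 400, "height": 400, "screw": "M6/M8"},
}

def patterns_matching(size_mm):
    """Return names whose width or height equals size_mm.

    Demonstrates why the informal 'VESA <size>' shorthand can be
    ambiguous: several official variants may share one dimension.
    """
    return [name for name, p in FDMI_PATTERNS.items()
            if size_mm in (p["width"], p["height"])]

print(patterns_matching(100))  # ['MIS-D, 100', 'MIS-E']
```

Even in this reduced table, "VESA 100" matches two official variants, mirroring the ambiguity the text describes.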
Notes
If a screen is heavier or larger than specified in table 1, it should use a larger variant from the table, for instance, a 30-in L
|
https://en.wikipedia.org/wiki/Frozen%20section%20procedure
|
The frozen section procedure is a pathological laboratory procedure to perform rapid microscopic analysis of a specimen. It is used most often in oncological surgery. The technical name for this procedure is cryosection. The microtome device that cold cuts thin blocks of frozen tissue is called a cryotome.
Slides produced by frozen section are of lower quality than those produced by formalin-fixed, paraffin-embedded tissue processing. While a diagnosis can be rendered in many cases, fixed tissue processing is preferred in many conditions for a more accurate diagnosis.
The intraoperative consultation is the name given to the whole intervention by the pathologist, which includes not only frozen section but also gross evaluation of the specimen, examination of cytology preparations taken on the specimen (e.g. touch imprints), and aliquoting of the specimen for special studies (e.g. molecular pathology techniques, flow cytometry). The report given by the pathologist is often limited to a "benign" or "malignant" diagnosis, and communicated to the surgeon operating via intercom. When operating on a previously confirmed malignancy, the main purpose of the pathologist is to inform the surgeon if the resection margin is clear of residual cancer, or if residual cancer is present at the resection margin. The method of processing is usually done with the bread loafing technique. But margin controlled surgery (CCPDMA) can be performed using a variety of tissue cutting and mounting methods, including Mohs surgery.
History
The frozen section procedure as practiced today in medical laboratories is based on the description by Dr Louis B. Wilson in 1905. Wilson developed the technique from earlier reports at the request of Dr William Mayo, surgeon and one of the founders of the Mayo Clinic. Earlier reports by Dr Thomas S. Cullen at Johns Hopkins Hospital in Baltimore also involved frozen section, but only after formalin fixation, and pathologist Dr William Welch, also at Hopkins, expe
|
https://en.wikipedia.org/wiki/List%20of%20first-order%20theories
|
In first-order logic, a first-order theory is given by a set of axioms in some
language. This entry lists some of the more common examples used in model theory and some of their properties.
Preliminaries
For every natural mathematical structure there is a signature σ listing the constants, functions, and relations of the theory together with their arities, so that the object is naturally a σ-structure. Given a signature σ there is a unique first-order language Lσ that can be used to capture the first-order expressible facts about the σ-structure.
There are two common ways to specify theories:
List or describe a set of sentences in the language Lσ, called the axioms of the theory.
Give a set of σ-structures, and define a theory to be the set of sentences in Lσ holding in all these models. For example, the "theory of finite fields" consists of all sentences in the language of fields that are true in all finite fields.
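As a concrete illustration of the first way of specifying a theory — listing axioms in the language Lσ — the theory of groups can be given in the signature σ = (·, ⁻¹, e); the notation below is a standard textbook example, not drawn from this article:

```latex
% Theory of groups in the signature $\sigma = (\cdot,\ {}^{-1},\ e)$:
\forall x\,\forall y\,\forall z\;\; (x \cdot y) \cdot z = x \cdot (y \cdot z)
\qquad\text{(associativity)}
\\
\forall x\;\; x \cdot e = x \ \wedge\ e \cdot x = x
\qquad\text{(identity)}
\\
\forall x\;\; x \cdot x^{-1} = e \ \wedge\ x^{-1} \cdot x = e
\qquad\text{(inverses)}
```

Any σ-structure satisfying these three sentences is a group, and the set of all Lσ-sentences they entail is the (first-order) theory of groups.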
An Lσ theory may:
be consistent: no proof of contradiction exists;
be satisfiable: there exists a σ-structure for which the sentences of the theory are all true (by the completeness theorem, satisfiability is equivalent to consistency);
be complete: for any statement, either it or its negation is provable;
have quantifier elimination;
eliminate imaginaries;
be finitely axiomatizable;
be decidable: There is an algorithm to decide which statements are provable;
be recursively axiomatizable;
be model complete or sub-model complete;
be κ-categorical: All models of cardinality κ are isomorphic;
be stable or unstable;
be ω-stable (same as totally transcendental for countable theories);
be superstable;
have an atomic model;
have a prime model;
have a saturated model.
Pure identity theories
The signature of the pure identity theory is empty, with no functions, constants, or relations.
Pure identity theory has no (non-logical) axioms. It is decidable.
One of the few interesting properties that can be stated in the language of pure identity theory
|
https://en.wikipedia.org/wiki/MI1
|
MI1 or British Military Intelligence, Section 1 was a department of the British Directorate of Military Intelligence, part of the War Office. It was set up during World War I. It contained "C&C", which was responsible for code breaking.
Its subsections in World War I were:
MI1a: Distribution of reports, intelligence records.
MI1b: Interception and cryptanalysis.
MI1c: The Secret Service/SIS.
MI1d: Communications security.
MI1e: Wireless telegraphy.
MI1f: Personnel and finance.
MI1g: Security, deception and counter intelligence.
In 1919 MI1b and the Royal Navy's (NID25) "Room 40" were closed down and merged into the inter-service Government Code and Cypher School (GC&CS), which subsequently developed into the Government Communications Headquarters (GCHQ) at Cheltenham.
From 1915, MI1(b) was headed by Malcolm Vivian Hay. Oliver Strachey was in MI1 during World War I. He transferred to GC&CS and served there during World War II. John Tiltman was seconded to MI1 shortly before it merged with Room 40.
Notes
|
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20effect
|
The Eötvös effect is the change in measured Earth's gravity caused by the change in centrifugal acceleration resulting from eastbound or westbound velocity. When moving eastbound, the object's angular velocity is increased (its motion adds to Earth's rotation), and thus the centrifugal force also increases, causing a perceived reduction in gravitational force.
Discovery
In the early 1900s, a German team from the Geodetic Institute of Potsdam carried out gravity measurements on moving ships in the Atlantic, Indian, and Pacific oceans. While studying their results, the Hungarian nobleman and physicist Baron Roland von Eötvös (Loránd Eötvös) noticed that the readings were lower when the boat moved eastwards, higher when it moved westward. He identified this as primarily a consequence of Earth's rotation. In 1908, new measurements were made in the Black Sea on two ships, one moving eastward and one westward. The results substantiated Eötvös' claim.
Formulation
Geodesists use the following formula to correct for velocity relative to Earth during a gravimetric run:

a_r = 2Ωu cos(ϕ) + (u² + v²)/R

Here,
a_r is the relative acceleration,
Ω is the rotation rate of the Earth,
u is the velocity in longitudinal direction (east–west),
ϕ is the latitude where the measurements are taken,
v is the velocity in latitudinal direction (north–south),
R is the radius of the Earth.
The first term in the formula, 2Ωu cos(ϕ), corresponds to the Eötvös effect. The second term is a refinement that under normal circumstances is much smaller than the Eötvös effect.
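The magnitudes can be checked numerically. The sketch below evaluates the correction formula for a ship steaming east at 10 m/s at 45° latitude; the function name and example figures are ours, while the constants are standard values.

```python
import math

OMEGA = 7.2921e-5      # Earth's rotation rate, rad/s
R_EARTH = 6_371_000.0  # mean Earth radius, m

def eotvos_correction(u, v, lat_deg):
    """a_r = 2*Omega*u*cos(phi) + (u**2 + v**2)/R, in m/s^2.

    u: eastward velocity (m/s), v: northward velocity (m/s),
    lat_deg: latitude in degrees.
    """
    phi = math.radians(lat_deg)
    return 2 * OMEGA * u * math.cos(phi) + (u**2 + v**2) / R_EARTH

# Ship moving east at 10 m/s (~19 knots) at 45 degrees latitude:
a = eotvos_correction(10.0, 0.0, 45.0)
print(f"{a:.3e} m/s^2")  # ~1.047e-03 m/s^2
```

The Eötvös term (2Ωu cos ϕ ≈ 1.03 × 10⁻³ m/s²) dominates; the (u² + v²)/R refinement contributes only about 1.6 × 10⁻⁵ m/s² here, consistent with the remark that it is normally much smaller.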
Physical explanation
The most common design for a gravimeter for field work is a spring-based design; a spring that suspends an internal weight. The suspending force provided by the spring counteracts the gravitational force. A well-manufactured spring has the property that the amount of force that the spring exerts is proportional to the extension of the spring from its equilibrium position (Hooke's law). The stronger the effective gravity at a particular location, the mor
|
https://en.wikipedia.org/wiki/Engineering%20Research%20Center%20for%20Wireless%20Integrated%20Microsystems
|
The NSF Engineering Research Center for Wireless Integrated Microsystems (ERC WIMS) was formed in 2000 in Michigan — through the collaboration of the University of Michigan (UM), Michigan State University (MSU), and Michigan Technological University.
The center is funded by the National Science Foundation. Additional contributions came from the state of Michigan, the three partnering core universities, other federal agencies, and a consortium of about twenty companies.
Purpose
The center researches innovations for wireless integrated microsystems. The ERC WIMS works on merging micropower circuits, wireless interfaces, biomedical and environmental sensors and subsystems, and advanced packaging to create microsystems intended to have a pervasive impact on society during the next two decades.
The partnership combined UM's programs in sensors and microsystems with MSU's leadership in materials, especially in diamond and in carbon nanotubes, and Michigan Tech's expertise in packaging, micromilling, and hot embossing.
See also
External links
NSF Engineering Research Center for Wireless Integrated Microsystems at the University of Michigan
Engineering research institutes
Science and technology in Michigan
Michigan State University
Michigan Technological University
University of Michigan
Economy of Metro Detroit
Wireless network organizations
Research institutes established in 2000
2000 establishments in Michigan
|
https://en.wikipedia.org/wiki/Bifidobacterium%20animalis
|
Bifidobacterium animalis is a gram-positive, anaerobic, rod-shaped bacterium of the Bifidobacterium genus which can be found in the large intestines of most mammals, including humans.
Bifidobacterium animalis and Bifidobacterium lactis were previously described as two distinct species. Presently, both are considered B. animalis with the subspecies Bifidobacterium animalis subsp. animalis and Bifidobacterium animalis subsp. lactis.
Both old names B. animalis and B. lactis are still used on product labels, as this species is frequently used as a probiotic. In most cases, which subspecies is used in the product is not clear.
Trade names
Several companies have attempted to trademark particular strains, and as a marketing technique, have invented scientific-sounding names for the strains.
Danone (Dannon in the United States) markets the subspecies strain as Bifidus Digestivum (UK), Bifidus Regularis (US and Mexico), Bifidobacterium Lactis or B.L. Regularis (Canada), DanRegularis (Brazil), Bifidus Actiregularis (Argentina, Austria, Belgium, Bulgaria, Chile, Czech Republic, France, Germany, Greece, Hungary, Israel, Italy, Kazakhstan, Netherlands, Portugal, Romania, Russia, South Africa, Spain and the UK), and Bifidus Essensis in the Middle East (and formerly in Hungary, Bulgaria, Romania and The Netherlands) through Activia from Safi Danone KSA.
Chr. Hansen A/S from Denmark has a similar claim on a strain of Bifidobacterium animalis subsp. lactis, marketed under the trademark BB-12.
Lidl lists "Bifidobacterium BB-12" in its "Proviact" yogurt.
Bifidobacterium lactis Bl-04 and Bi-07 are strains from DuPont's Danisco FloraFIT range. They are used in many dietary probiotic supplements.
Theralac contains the strains Bifidobacterium lactis BI-07 and Bifidobacterium lactis BL-34 (also called BI-04) in its probiotic capsule.
Bifidobacterium lactis HN019 is a strain from Fonterra licensed to DuPont, which markets it as HOWARU Bifido. It is sold in a variety of commercial
|
https://en.wikipedia.org/wiki/Arcuate%20nucleus%20%28medulla%29
|
In the medulla oblongata, the arcuate nucleus is a group of neurons located on the anterior surface of the medullary pyramids. These nuclei are the extension of the pontine nuclei. They receive fibers from the corticospinal tract and send their axons through the anterior external arcuate fibers and medullary striae to the cerebellum via the inferior cerebellar peduncle.
Arcuate nuclei are capable of chemosensitivity and have a proven role in the respiratory center controlling the breathing rate.
Additional images
External links
PubMed article
Respiratory physiology
Medulla oblongata
|
https://en.wikipedia.org/wiki/NETtalk%20%28artificial%20neural%20network%29
|
NETtalk is an artificial neural network. It is the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behind NETtalk was to construct simplified models that might shed light on the complexity of learning human level cognitive tasks, and their implementation as a connectionist model that could also learn to perform a comparable task.
NETtalk is a program that learns to pronounce written English text by being shown text as input and matching phonetic transcriptions for comparison.
The network was trained on a large amount of English words and their corresponding pronunciations, and is able to generate pronunciations for unseen words with a high level of accuracy. The success of the NETtalk network inspired further research in the field of pronunciation generation and speech synthesis and demonstrated the potential of neural networks for solving complex NLP problems.
The network is designed to handle the complexity of the English language, including its irregular spelling-to-sound relationships, and was trained in a supervised manner, using words paired with their phonetic transcriptions as training data.
Achievements and limitations
NETtalk was created to explore the mechanisms of learning to correctly pronounce English text. The authors note that learning to read involves a complex mechanism involving many parts of the human brain. NETtalk does not specifically model the image processing stages and letter recognition of the visual cortex. Rather, it assumes that the letters have been pre-classified and recognized, and these letter sequences comprising words are then shown to the neural network during training and during performance testing. It is NETtalk's task to learn proper associations between the correct pronunciation with a given sequence of letters based on the context in which the letters appear. In other words, NETtalk learns to use the letters around the currently pronounced phoneme that provide cues as to its intended phone
|
https://en.wikipedia.org/wiki/Insertion%20sequence
|
An insertion element (also known as an IS, an insertion sequence element, or an IS element) is a short DNA sequence that acts as a simple transposable element. Insertion sequences have two major characteristics: they are small relative to other transposable elements (generally around 700 to 2500 bp in length) and only code for proteins implicated in transposition activity (they are thus different from other transposons, which also carry accessory genes such as antibiotic resistance genes). These proteins are usually the transposase, which catalyses the enzymatic reaction allowing the IS to move, and a regulatory protein which either stimulates or inhibits the transposition activity. The coding region in an insertion sequence is usually flanked by inverted repeats. For example, the well-known IS911 (1250 bp) is flanked by two 36 bp inverted repeat extremities, and its coding region has two partially overlapping genes, orfA and orfAB, coding for a regulatory protein (OrfA) and the transposase (OrfAB). A particular insertion sequence may be named according to the form ISn, where n is a number (e.g. IS1, IS2, IS3, IS10, IS50, IS911, IS26); this is not the only naming scheme used, however. Although insertion sequences are usually discussed in the context of prokaryotic genomes, certain eukaryotic DNA sequences belonging to the family of Tc1/mariner transposable elements may be considered to be insertion sequences.
In addition to occurring autonomously, insertion sequences may also occur as parts of composite transposons; in a composite transposon, two insertion sequences flank one or more accessory genes, such as an antibiotic resistance gene (e.g. Tn10, Tn5). Nevertheless, there exists another sort of transposon, called a unit transposon, that does not carry insertion sequences at its extremities (e.g. Tn7).
A complex transposon does not rely on flanking insertion sequences for resolvase. The resolvase is part of the tns genome and cuts at flanking inverted rep
|
https://en.wikipedia.org/wiki/Catuaba
|
The name Catuaba ( , via Portuguese from Guarani) is used for the infusions of the bark of a number of trees native to Brazil. The most widely used barks are derived from the trees Trichilia catigua and Erythroxylum vaccinifolium. Other catuaba preparations use the bark of trees from the following genera or families: Anemopaegma, Ilex, Micropholis, Phyllanthus, Secondatia, Tetragastris and species from the Myrtaceae.
It is often claimed that catuaba is derived from the tree Erythroxylum catuaba, but this tree has been described only once, in 1904, and it is not known today to what tree this name referred. E. catuaba is therefore not a recognised species (Kletter et al.; 2004).
Local synonyms are Chuchuhuasha, Tatuaba, Pau de Reposta, Piratancara and Caramuru. A commercial liquid preparation, Catuama, contains multiple ingredients, one of these being catuaba from Trichilia catigua.
An infusion of the bark is used in traditional Brazilian medicine as an aphrodisiac and central nervous system stimulant. These claims have not been confirmed in scientific studies. A group of three alkaloids, dubbed catuabines A, B and C, has been found in catuaba.
A study by Manabe et al. (1992) showed that catuaba extracts from Catuaba casca (Erythroxylum catuaba Arr. Cam.) were useful in preventing potentially lethal bacterial infections and HIV infection in mice.
Notes
|
https://en.wikipedia.org/wiki/KAME%20project
|
The KAME project, a sub-project of the WIDE Project, was a joint effort of six organizations in Japan which aimed to provide a free IPv6 and IPsec (for both IPv4 and IPv6) protocol stack implementation for variants of the BSD Unix computer operating system. The project began in 1998, and on November 7, 2005, it was announced that the project would finish at the end of March 2006. The name KAME is a short version of Karigome, the location of the project's offices beside Keio University SFC.
KAME Project's code is based on "WIDE Hydrangea" IPv6/IPsec stack by WIDE Project.
The following organizations participated in the project:
ALAXALA Networks Corporation
Fujitsu, Ltd.
Hitachi, Ltd.
Internet Initiative Japan Inc.
Keio University
NEC Corporation
University of Tokyo
Toshiba Corporation
Yokogawa Electric Corporation
FreeBSD, NetBSD and DragonFly BSD integrated IPsec and IPv6 code from the KAME project; OpenBSD integrated just IPv6 code rather than both (having developed their own IPsec stack). Linux also integrated code from the project in its native IPsec implementation.
The KAME project collaborated with the TAHI Project (which develops and provides verification-technology for IPv6), the USAGI Project and the WIDE Project.
Racoon
racoon, KAME's user-space daemon, handles Internet Key Exchange (IKE). In Linux systems it forms part of the ipsec-tools package.
|
https://en.wikipedia.org/wiki/Phallus%20indusiatus
|
Phallus indusiatus, commonly called the bamboo mushrooms, bamboo pith, long net stinkhorn, crinoline stinkhorn, bridal veil, or veiled lady, is a fungus in the family Phallaceae, or stinkhorns. It has a cosmopolitan distribution in tropical areas, and is found in southern Asia, Africa, the Americas, and Australia, where it grows in woodlands and gardens in rich soil and well-rotted woody material. The fruit body of the fungus is characterised by a conical to bell-shaped cap on a stalk and a delicate lacy "skirt", or indusium, that hangs from beneath the cap and reaches nearly to the ground. First described scientifically in 1798 by French botanist Étienne Pierre Ventenat, the species has often been referred to a separate genus Dictyophora along with other Phallus species featuring an indusium. P. indusiatus can be distinguished from other similar species by differences in distribution, size, color, and indusium length.
Mature fruit bodies are up to tall with a conical to bell-shaped cap that is wide. The cap is covered with a greenish-brown spore-containing slime, which attracts flies and other insects that eat the spores and disperse them. An edible mushroom featured as an ingredient in Chinese haute cuisine, it is used in stir-fries and chicken soups. The mushroom, grown commercially and commonly sold in Asian markets, is rich in protein, carbohydrates, and dietary fiber. The mushroom also contains various bioactive compounds, and has antioxidant and antimicrobial properties. P. indusiatus has a recorded history of use in Chinese medicine extending back to the 7th century CE, and features in Nigerian folklore.
Description
Immature fruit bodies of P. indusiatus are initially enclosed in an egg-shaped to roughly spherical subterranean structure encased in a peridium. The "egg" ranges in color from whitish to buff to reddish-brown, measures up to in diameter, and usually has a thick mycelial cord attached at the bottom. As the mushroom matures, the pressure ca
|
https://en.wikipedia.org/wiki/Michigan%20Life%20Sciences%20Corridor
|
The Michigan Life Sciences Corridor (MLSC) is a $1 billion biotechnology initiative in the U.S. state of Michigan.
The MLSC invests in biotech research at four Michigan institutions: the University of Michigan in Ann Arbor; Michigan State University in East Lansing; Wayne State University in Detroit; and the Van Andel Institute in Grand Rapids.
The Michigan Economic Development Corporation administers the program. It began in 1999 with money from the state's settlement with the tobacco industry. The program's fund distributions were scheduled to conclude in 2019, with the goal that the investments in high-tech research will have notably expanded the state's economic base.
History
In 1998, the State of Michigan, along with 45 other states, reached the $8.5 billion Tobacco Master Settlement Agreement, a settlement with the U.S. tobacco industry. Former Governor John Engler created the Michigan Life Sciences Corridor in 1999 when he signed Public Act 120 of 1999. The bill appropriated money from the state's settlement with the tobacco industry to fund biotech research at four of Michigan's largest research institutions.
Under the management of the Michigan Economic Development Corporation, the MLSC allocated $1 billion over the course of 20 years, including $50 million in 1999 to fund research on aging. The following year, the MLSC awarded $100 million to 63 Michigan universities. In 2002, Governor Jennifer Granholm incorporated the MLSC into the Michigan Technology Tri-Corridor, adding funding for homeland security and alternative fuel research.
In 2009, the University of Michigan added the 30-building North Campus Research Complex by acquiring the former Pfizer pharmaceutical corporation facility.
A BioEnterprise Midwest Healthcare Venture report found that Michigan attracted $451.8 million in new biotechnology venture capital investments from 2005 to 2009.
See also
University Research Corridor
|
https://en.wikipedia.org/wiki/Essure
|
Essure was a device for female sterilization: a metal coil which, when placed into each fallopian tube, induces fibrosis and blockage. Essure was designed as an alternative to tubal ligation. However, it was recalled by Bayer in 2018, and the device is no longer sold due to complications secondary to its implantation. The company has reported that several patients implanted with the Essure System for Permanent Birth Control have experienced and/or reported adverse effects, including perforation of the uterus and/or fallopian tubes, identification of inserts in the abdominal or pelvic cavity, persistent pain, and suspected allergic or hypersensitivity reactions.
Although designed to remain in place for a lifetime, it was approved based on short-term safety studies. Of the 745 women with implants in the original premarket studies, 92% were followed up at one year, and 25% for two years, for safety outcomes. A 2009 review concluded that Essure appeared safe and effective based on short-term studies, that it was less invasive and could be cheaper than laparoscopic bilateral tubal ligation. About 750,000 women have received the device worldwide.
Initial trials found about 4% of women had tubal perforation, expulsion, or misplacement of the device at the time of the procedure. Since 2013, the product has been controversial, with thousands of women reporting severe side effects leading to surgical extraction. Rates of repeat surgery in the first year were ten times greater with Essure than with tubal ligation. Campaigner Erin Brockovich has been hosting a website where women can share their stories after having the procedure. As of 2015 many adverse events, including tubal perforations, intractable pain and bleeding leading to hysterectomies, possible device-related deaths, and hundreds of unintended pregnancies occurred, according to the US FDA adverse events database and other studies.
It was developed by Conceptus Inc. and approved for use in the United States in
|
https://en.wikipedia.org/wiki/Joint%20capsule
|
In anatomy, a joint capsule or articular capsule is an envelope surrounding a synovial joint. Each joint capsule has two parts: an outer fibrous layer or membrane, and an inner synovial layer or membrane.
Membranes
Each capsule consists of two layers or membranes:
an outer (fibrous membrane, fibrous stratum) composed of avascular white fibrous tissue
an inner (synovial membrane, synovial stratum) which is a secreting layer
On the inside of the capsule, articular cartilage covers the end surfaces of the bones that articulate within that joint.
The outer layer is highly innervated by the same nerves which perforate through the adjacent muscles associated with the joint.
Fibrous membrane
The fibrous membrane of the joint capsule is attached to the whole circumference of the articular end of each bone entering into the joint, and thus entirely surrounds the articulation. It is made up of dense connective tissue.
Clinical significance
Frozen shoulder (adhesive capsulitis) is a disorder in which the shoulder capsule becomes inflamed.
Plica syndrome is a disorder in which the synovial plica becomes inflamed and causes abnormal biomechanics in the knee.
Gallery
See also
Articular capsule of the humerus
Articular capsule of the knee joint
Atlanto-axial joint
Capsule of atlantooccipital articulation
Capsule of hip joint
Capsule of temporomandibular joint
|
https://en.wikipedia.org/wiki/Natural%20frequency
|
Natural frequency, also known as eigenfrequency, is the frequency at which a system tends to oscillate in the absence of any driving force.
The motion pattern of a system oscillating at its natural frequency is called the normal mode (if all parts of the system move sinusoidally with that same frequency).
If the oscillating system is driven by an external force at the frequency at which the amplitude of its motion is greatest (close to a natural frequency of the system), this frequency is called resonant frequency.
Overview
Free vibrations of an elastic body, also called natural vibrations, occur at the natural frequency. Natural vibrations are different from forced vibrations which happen at the frequency of an applied force (forced frequency). If the forced frequency is equal to the natural frequency, the vibrations' amplitude increases manyfold. This phenomenon is known as resonance.
In analysis of systems, it is convenient to use the angular frequency ω = 2πf rather than the frequency f, or the complex frequency domain parameter s.
In a mass–spring system, with mass m and spring stiffness k, the natural angular frequency can be calculated as:
ω₀ = √(k/m)
In an electrical network, ω is a natural angular frequency of a response function f(t) if the Laplace transform F(s) of f(t) includes the term Ke^(st), where s = σ + jω for a real σ, and K is a constant. Natural frequencies depend on network topology and element values but not their input. It can be shown that the set of natural frequencies in a network can be obtained by calculating the poles of all impedance and admittance functions of the network. A pole of the network transfer function is associated with a natural angular frequency of the corresponding response variable; however, there may exist some natural angular frequency that does not correspond to a pole of the network function. These happen at some special initial states.
In LC and RLC circuits, the natural angular frequency can be calculated as:
ω₀ = 1/√(LC)
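As a quick numerical sketch of the two standard formulas — ω₀ = √(k/m) for a mass–spring system and ω₀ = 1/√(LC) for an LC circuit — the following Python snippet evaluates both. The function names and parameter values are illustrative, not from the text:

```python
import math

def mass_spring_natural_frequency(k, m):
    """Natural angular frequency (rad/s) of a mass-spring system: w0 = sqrt(k/m)."""
    return math.sqrt(k / m)

def lc_natural_frequency(L, C):
    """Natural angular frequency (rad/s) of an LC circuit: w0 = 1/sqrt(L*C)."""
    return 1.0 / math.sqrt(L * C)

# Illustrative values: a 100 N/m spring with a 1 kg mass gives 10 rad/s
w_spring = mass_spring_natural_frequency(k=100.0, m=1.0)

# Illustrative values: a 1 mH inductor with a 1 uF capacitor gives ~31623 rad/s
w_lc = lc_natural_frequency(L=1e-3, C=1e-6)

print(w_spring)
print(w_lc)
```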
See also
Fundamental frequency
|
https://en.wikipedia.org/wiki/Meat%20extract
|
Meat extract is highly concentrated meat stock, usually made from beef or chicken. It is used to add meat flavor in cooking, and to make broth for soups and other liquid-based foods.
Meat extract was invented by Baron Justus von Liebig, a German 19th-century organic chemist. Liebig specialised in chemistry and the classification of food and wrote a paper on how the nutritional value of a meat is lost by boiling. Liebig's view was that meat juices, as well as the fibres, contained much important nutritional value and that these were lost by boiling or cooking in unenclosed vessels. Fuelled by a desire to help feed the undernourished, in 1840 he developed a concentrated beef extract, Extractum carnis Liebig, to provide a nutritious meat substitute for those unable to afford the real thing. However, it took 30 kg of meat to produce 1 kg of extract, making the extract too expensive.
Commercialization
Liebig's Extract of Meat Company
Liebig went on to co-found the Liebig's Extract of Meat Company (later Oxo) in London, whose factory, opened in 1865 in Fray Bentos, a port in Uruguay, took advantage of meat from cattle being raised for their hides, at one third the price of British meat. The venture had previously operated as Giebert et Compagnie (from April 1863).
Bovril
In the 1870s, John Lawson Johnston invented 'Johnston's Fluid Beef', later renamed Bovril. Unlike Liebig's meat extract, Bovril also contained flavourings. It was manufactured in Argentina and Uruguay which could provide cheap cattle.
Effects
Liebig and Bovril were important contributors to the beef industry in South America.
Bonox
Bonox, created by Fred Walker and Company and on the market from 1919, is manufactured in Australia. When it was introduced it was often offered as an alternative hot drink, it being common to offer "Coffee, tea or Bonox".
Today
Meat extracts have largely been supplanted by bouillon cubes and yeast extract. Some brands of meat extract, such as Oxo and Bovril, now contain yeast extrac
|
https://en.wikipedia.org/wiki/Sagittaria%20sagittifolia
|
Sagittaria sagittifolia (also called arrowhead because of the shape of its leaves) is a flowering plant in the family Alismataceae, native to wetlands in most of Europe from Ireland and Portugal to Finland and Bulgaria, and in Russia, Ukraine, Siberia, Japan, Turkey, China, India, Australia, Vietnam and the Caucasus. It is also cultivated as a food crop in some other countries. In Britain it is the only native Sagittaria.
Sagittaria sagittifolia is a herbaceous perennial plant, growing in water from 10–50 cm deep. The leaves above water are arrowhead-shaped, the leaf blade 15–25 cm long and 10–22 cm broad, on a long petiole holding the leaf up to 45 cm above water level. The plant also has narrow linear submerged leaves, up to 80 cm long and 2 cm broad. The flowers are 2–2.5 cm broad, with three small sepals and three white petals, and numerous purple stamens.
Cultivation and uses
The round tuber is edible. It tastes bland, with a starchy texture, similar to a potato but somewhat crunchier, even when cooked. In Japan, its tuber is eaten particularly during the New Year. In China, it is often used in winter hot pots. In Vietnam, the plant's young petiole leaves and rhizomes are used for soups.
Remnants of Sagittaria sagittifolia have been found in the Paleolithic/Mesolithic site of Całowanie in Poland.
Sagittaria sagittifolia is used in Chinese medicine, and in 2006 seven new ent-rosane diterpenoids and a new labdane diterpene were purified from the plant. Four of these compounds (Sagittine A–D) exhibited antibacterial activity against Streptococcus mutans and Actinomyces naeslundii while another (Sagittine E) was only active against A. naeslundii (MIC = 62.5 μg ml–1). Recently, the same group identified five new diterpenoids from Sagittaria pygmaea. None displayed activity against A. actinomycetemcomitans, while four of the others were active against A. viscosus and three against S. mutans, of which 18-ß-D-3',6'-diacetoxyg
|
https://en.wikipedia.org/wiki/Fine%20chemical
|
In chemistry, fine chemicals are complex, single, pure chemical substances, produced in limited quantities in multipurpose plants by multistep batch chemical or biotechnological processes. They are described by exacting specifications, used for further processing within the chemical industry and sold for more than $10/kg (see the comparison of fine chemicals, commodities and specialties). The class of fine chemicals is subdivided either on the basis of the added value (building blocks, advanced intermediates or active ingredients), or the type of business transaction, namely standard or exclusive products.
Fine chemicals are produced in limited volumes (< 1000 tons/year) and at relatively high prices (> $10/kg) according to exacting specifications, mainly by traditional organic synthesis in multipurpose chemical plants. Biotechnical processes are gaining ground. Fine chemicals are used as starting materials for specialty chemicals, particularly pharmaceuticals, biopharmaceuticals and agrochemicals. Custom manufacturing for the life science industry plays a big role; however, a significant portion of the fine chemicals total production volume is manufactured in-house by large users. The industry is fragmented and extends from small, privately owned companies to divisions of big, diversified chemical enterprises. The term "fine chemicals" is used in distinction to "heavy chemicals", which are produced and handled in large lots and are often in a crude state.
Since the late 1970s, fine chemicals have become an important part of the chemical industry. Their global total production value of $85 billion is split about 60-40 between in-house production in the life-science industry—the products' main consumers—and companies producing them for sale. The latter pursue both a "supply push" strategy, whereby standard products are developed in-house and offered ubiquitously, and a "demand pull" strategy, whereby products or services determined by the customer are provided excl
|
https://en.wikipedia.org/wiki/Pseudo%20algebraically%20closed%20field
|
In mathematics, a field is pseudo algebraically closed if it satisfies certain properties which hold for algebraically closed fields. The concept was introduced by James Ax in 1967.
Formulation
A field K is pseudo algebraically closed (usually abbreviated by PAC) if one of the following equivalent conditions holds:
Each absolutely irreducible variety defined over K has a K-rational point.
For each absolutely irreducible polynomial f ∈ K[T, X] with ∂f/∂X ≠ 0 and for each nonzero g ∈ K[T] there exists a point (a, b) ∈ K × K such that f(a, b) = 0 and g(a) ≠ 0.
Each absolutely irreducible polynomial has infinitely many K-rational points.
If R is a finitely generated integral domain over K with quotient field F which is regular over K, then there exists a homomorphism h: R → K such that h(a) = a for each a ∈ K.
Examples
Algebraically closed fields and separably closed fields are always PAC.
Pseudo-finite fields and hyper-finite fields are PAC.
A non-principal ultraproduct of distinct finite fields is (pseudo-finite and hence) PAC. Ax deduces this from the Riemann hypothesis for curves over finite fields.
Infinite algebraic extensions of finite fields are PAC.
The PAC Nullstellensatz. The absolute Galois group Gal(K) of a field K is profinite, hence compact, and hence equipped with a normalized Haar measure. Let K be a countable Hilbertian field and let e be a positive integer. Then for almost all e-tuples (σ1, …, σe) ∈ Gal(K)^e, the fixed field of the subgroup generated by the automorphisms σ1, …, σe is PAC. Here the phrase "almost all" means "all but a set of measure zero". (This result is a consequence of Hilbert's irreducibility theorem.)
Let K be the maximal totally real Galois extension of the rational numbers and i the square root of −1. Then K(i) is PAC.
Properties
The Brauer group of a PAC field is trivial, as any Severi–Brauer variety has a rational point.
The absolute Galois group of a PAC field is a projective profinite group; equivalently, it has cohomological dimension at most 1.
A PAC field of characteristic zero is C1.
|
https://en.wikipedia.org/wiki/Classical-map%20hypernetted-chain%20method
|
The classical-map hypernetted-chain method (CHNC method) is a method used in many-body theoretical physics for interacting uniform electron liquids in two and three dimensions, and for non-ideal plasmas. The method extends the famous hypernetted-chain method (HNC) introduced by J. M. J. van Leeuwen et al. to quantum fluids as well. The classical HNC, together with the Percus–Yevick approximation, are the two pillars which bear the brunt of most calculations in the theory of interacting classical fluids. Also, HNC and PY have become important in providing basic reference schemes in the theory of fluids, and hence they are of great importance to the physics of many-particle systems.
The HNC and PY integral equations provide the pair distribution functions of the particles in a classical fluid, even for very high coupling strengths. The coupling strength is measured by the ratio of the potential energy to the kinetic energy. In a classical fluid, the kinetic energy is proportional to the temperature. In a quantum fluid, the situation is very complicated as one needs to deal with quantum operators, and matrix elements of such operators, which appear in various perturbation methods based on Feynman diagrams. The CHNC method provides an approximate "escape" from these difficulties, and applies to regimes beyond perturbation theory. In Robert B. Laughlin's famous Nobel Laureate work on the fractional quantum Hall effect, an HNC equation was used within a classical plasma analogy.
In the CHNC method, the pair-distributions of the interacting particles are calculated using a mapping which ensures that the quantum mechanically correct non-interacting pair distribution function is recovered when the Coulomb interactions are switched off. The value of the method lies in its ability to calculate the interacting pair distribution functions g(r) at zero and finite temperatures. Comparison of the calculated g(r) with results from Quantum Monte Carlo show remarkable agreement, ev
|
https://en.wikipedia.org/wiki/Coffee%20ring%20effect
|
In physics, a "coffee ring" is a pattern left by a puddle of particle-laden liquid after it evaporates. The phenomenon is named for the characteristic ring-like deposit along the perimeter of a spill of coffee. It is also commonly seen after spilling red wine. The mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stain.
Flow mechanism
The coffee-ring pattern originates from the capillary flow induced by the evaporation of the drop: liquid evaporating from the edge is replenished by liquid from the interior. The resulting current can carry nearly all the dispersed material to the edge. As a function of time, this process exhibits a "rush-hour" effect, that is, a rapid acceleration of the flow towards the edge at the final stage of the drying process.
Evaporation induces a Marangoni flow inside a droplet. The flow, if strong, redistributes particles back to the center of the droplet. Thus, for particles to accumulate at the edges, the liquid must have a weak Marangoni flow, or something must occur to disrupt the flow. For example, surfactants can be added to reduce the liquid's surface tension gradient, disrupting the induced flow. Water has a weak Marangoni flow to begin with, which is then reduced significantly by natural surfactants.
Interaction of the particles suspended in a droplet with the free surface of the droplet is important in creating a coffee ring. "When the drop evaporates, the free surface collapses and traps the suspended particles ... eventually all the particles are captured by the free surface and stay there for the rest of their trip towards the edge of the drop." This result means that surfactants can be used to manipulate the motion of the solute particles by changing the surface tension of the drop, rather than trying to control the bulk flow inside the drop. A number of interesting morphologies of the deposited particles can result.
|
https://en.wikipedia.org/wiki/A%20series%20and%20B%20series
|
In metaphysics, the A series and the B series are two different descriptions of the temporal ordering relation among events. The two series differ principally in their use of tense to describe the temporal relation between events and the resulting ontological implications regarding time.
John McTaggart introduced these terms in 1908, in an argument for the unreality of time. They are now commonly used by contemporary philosophers of time.
History
Metaphysical debate about temporal orderings reaches back to the ancient Greek philosophers Heraclitus and Parmenides. Parmenides thought that reality is timeless and unchanging. Heraclitus, in contrast, believed that the world is a process of ceaseless change, flux and decay. Reality for Heraclitus is dynamic and ephemeral. Indeed, the world is so fleeting, according to Heraclitus, that it is impossible to step twice into the same river.
McTaggart's series
McTaggart distinguished the ancient conceptions as a set of relations. According to McTaggart, there are two distinct modes in which all events can be ordered in time.
A series
In the first mode, events are ordered as future, present, and past. Futurity and pastness allow of degrees, while the present does not. When we speak of time in this way, we are speaking in terms of a series of positions which run from the remote past through the recent past to the present, and from the present through the near future all the way to the remote future. The essential characteristic of this descriptive modality is that one must think of the series of temporal positions as being in continual transformation, in the sense that an event is first part of the future, then part of the present, and then past. Moreover, the assertions made according to this modality correspond to the temporal perspective of the person who utters them. This is the A series of temporal events.
Although originally McTaggart defined tenses as relational qualities, i.e. qualities that events possess by sta
|
https://en.wikipedia.org/wiki/Richard%20Bird%20%28computer%20scientist%29
|
Richard Simpson Bird (4 February 1943 – 4 April 2022) was an English computer scientist.
Posts
He was a Supernumerary Fellow of Computation at Lincoln College, University of Oxford, in Oxford England, and former director of the Oxford University Computing Laboratory (now the Department of Computer Science, University of Oxford). Formerly, Bird was at the University of Reading.
Research interests
Bird's research interests lay in algorithm design and functional programming, and he was known as a regular contributor to the Journal of Functional Programming, and as author of several books promoting use of the programming language Haskell, including Introduction to Functional Programming using Haskell, Thinking Functionally with Haskell, Algorithm Design with Haskell co-authored with Jeremy Gibbons, and other books on related topics. His name is associated with the Bird–Meertens formalism, a calculus for deriving programs from specifications in a functional programming style.
Other organisational affiliations
He was a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specifies, supports, and maintains the programming languages ALGOL 60 and ALGOL 68.
|
https://en.wikipedia.org/wiki/U.S.%20critical%20infrastructure%20protection
|
In the U.S., critical infrastructure protection (CIP) is a concept that relates to the preparedness and response to serious incidents that involve the critical infrastructure of a region or the nation.
The American Presidential directive PDD-63 of May 1998 set up a national program of "Critical Infrastructure Protection". In 2014 the NIST Cybersecurity Framework was published after further presidential directives.
History
The U.S. CIP is a national program to ensure the security of vulnerable and interconnected infrastructures of the United States. In May 1998, President Bill Clinton issued presidential directive PDD-63 on the subject of critical infrastructure protection. This recognized certain parts of the national infrastructure as critical to the national and economic security of the United States and the well-being of its citizenry, and required steps to be taken to protect it.
This was updated on December 17, 2003, by President Bush through Homeland Security Presidential Directive HSPD-7 for Critical Infrastructure Identification, Prioritization, and Protection. The directive describes the United States as having some critical infrastructure that is "so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety."
Overview
The systems and networks that make up the infrastructure of society are often taken for granted, yet a disruption to just one of those systems can have dire consequences across other sectors.
Take, for example, a computer virus that disrupts the distribution of natural gas across a region. This could lead to a consequential reduction in electrical power generation, which in turn leads to the forced shutdown of computerized controls and communications. Road traffic, air traffic, and rail transportation might then become affected. Emergency services might also be hampered.
An entire region can become
|
https://en.wikipedia.org/wiki/Apple%E2%80%93Intel%20architecture
|
The Apple–Intel architecture, or Mactel, is an unofficial name used for Macintosh personal computers developed and manufactured by Apple Inc. that use Intel x86 processors, rather than the PowerPC and Motorola 68000 ("68k") series processors used in their predecessors or the ARM-based Apple silicon SoCs used in their successors. As Apple changed the architecture of its products, they changed the firmware from the Open Firmware used on PowerPC-based Macs to the Intel-designed Extensible Firmware Interface (EFI). With the change in processor architecture to x86, Macs gained the ability to boot into x86-native operating systems (such as Microsoft Windows), while Intel VT-x brought near-native virtualization with macOS as the host OS.
Technologies
Background
Apple uses a subset of the standard PC architecture, which provides support for Mac OS X and support for other operating systems. Hardware and firmware components that must be supported to run an operating system on Apple-Intel hardware include the Extensible Firmware Interface.
The EFI and GUID Partition Table
With the change in architecture, a change in firmware became necessary. Extensible Firmware Interface (EFI) is the firmware-based replacement for the PC BIOS from Intel. Designed by Intel, it was chosen by Apple to replace Open Firmware, used on PowerPC architectures. Since many operating systems, such as Windows XP and many versions of Windows Vista, are incompatible with EFI, Apple released a firmware upgrade with a Compatibility Support Module that provides a subset of traditional BIOS support with its Boot Camp product.
GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is a part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a substitute for the earlier PC BIOS. The GPT replaces the Master Boot Record (MBR) used with BIOS.
Booting
To Mac operating systems
Intel Macs can boot in two ways: directly via EFI, or in a "le
|
https://en.wikipedia.org/wiki/Settling%20time
|
In control theory the settling time of a dynamical system such as an amplifier or other output device is the time elapsed from the application of an ideal instantaneous step input to the time at which the amplifier output has entered and remained within a specified error band.
Settling time includes a propagation delay, plus the time required for the output to slew to the vicinity of the final value, recover from the overload condition associated with slew, and finally settle to within the specified error.
Systems with energy storage cannot respond instantaneously and will exhibit transient responses when they are subjected to inputs or disturbances.
Definition
Tay, Mareels and Moore (1998) defined settling time as "the time required for the response curve to reach and stay within a range of certain percentage (usually 5% or 2%) of the final value."
Mathematical detail
Settling time depends on the system response and natural frequency.
The settling time for a second order, underdamped system responding to a step response can be approximated if the damping ratio ζ ≪ 1 by:
A general form is
Ts = −ln(tolerance fraction) / (ζ ωn)
Thus, if the damping ratio ζ ≪ 1, settling time to within a 2% tolerance band (0.02) is:
Ts = −ln(0.02) / (ζ ωn) ≈ 3.9 / (ζ ωn)
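The small-damping approximation Ts ≈ −ln(tolerance)/(ζ ωn) can be sketched numerically. This is a minimal illustration with hypothetical function names and example values, valid only for ζ ≪ 1:

```python
import math

def settling_time(zeta, omega_n, tolerance=0.02):
    """Approximate settling time of an underdamped second-order system.

    Uses Ts ~ -ln(tolerance) / (zeta * omega_n), a standard approximation
    valid for small damping ratios (zeta << 1).
    """
    return -math.log(tolerance) / (zeta * omega_n)

# Example: zeta = 0.1, omega_n = 10 rad/s gives roughly 3.9 s for a 2% band
print(settling_time(0.1, 10.0))
```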
See also
Rise time
Time constant
|
https://en.wikipedia.org/wiki/X%20hyperactivation
|
X hyperactivation refers to the process in Drosophila by which genes on the X chromosome in male flies become twice as active as genes on the X chromosome in female flies.
Because male flies have a single X chromosome and female flies have two X chromosomes, the higher level of activation in males ensures that X chromosome genes are overall expressed at the same level in males and females. X hyperactivation is one mechanism of dosage compensation, whereby organisms that use genetic sex determination systems balance the gene dosage from the sex chromosomes between males and females. X hyperactivation is regulated by the alternative splicing of a gene called sex-lethal. The gene was named sex-lethal due to its mutant phenotype, which has little to no effect on male flies but results in the death of females due to X hyperactivation of the two X chromosomes. In female Drosophila, the sex-lethal protein causes the female-specific splicing of the sex-lethal gene to produce more of the sex-lethal protein. This produces a positive feedback loop, as the sex-lethal protein splices the sex-lethal gene to produce more of the sex-lethal protein. In male Drosophila, there isn't enough sex-lethal to activate the female-specific splicing of the sex-lethal gene, and it goes through the "default" splicing. This means that the section of the gene that is spliced out in females remains in males. This portion contains an early stop codon, resulting in no protein being made from it. In females, the sex-lethal protein inhibits the male-specific lethal (msl) gene complex that would normally activate X-linked genes that result in an increase in the male transcription rate. The msl gene complex was named due to the loss-of-function mutant that results in the improper increase in the male transcription rate that results in the death of males. In males, the absence of the necessary amount of sex-lethal allows for the increase in the male transcription rate due to the msl gene complex no longer being
|
https://en.wikipedia.org/wiki/X%3AA%20ratio
|
The X:A ratio is the ratio between the number of X chromosomes and the number of sets of autosomes in an organism. This ratio is used primarily for determining the sex of some species, such as Drosophila flies and the C. elegans nematode. The first use of this ratio for sex determination is ascribed to Victor M. Nigon.
Generally, a 1:1 ratio results in a female and a 1:2 ratio results in a male. When calculating the ratio, Y chromosomes are ignored. For example, for a diploid drosophila that has XX, the ratio is 1:1 (2 Xs to 2 sets of autosomes, since it is a diploid). For a diploid drosophila that has XY, the ratio is 1:2 (1 X to 2 sets of autosomes, since it is diploid).
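The ratio arithmetic described above can be sketched in a few lines. The function name is hypothetical; the two examples mirror the diploid Drosophila cases in the text:

```python
from fractions import Fraction

def x_to_a_ratio(num_x, autosome_sets):
    """X:A ratio: number of X chromosomes over number of autosome sets.

    Y chromosomes are ignored when computing the ratio.
    """
    return Fraction(num_x, autosome_sets)

# Diploid XX Drosophila: 2 Xs to 2 autosome sets -> ratio 1 (1:1, female)
print(x_to_a_ratio(2, 2))

# Diploid XY Drosophila: 1 X to 2 autosome sets -> ratio 1/2 (1:2, male)
print(x_to_a_ratio(1, 2))
```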
In Drosophila, the sex chromosome ratio determines the expression of factors that enhance the synthesis of the Sxl protein, which in turn activates the female-specific pathway.
See also
Notes
|
https://en.wikipedia.org/wiki/SOS%20box
|
SOS box is the region in the promoter of various genes to which the LexA repressor binds to repress the transcription of SOS-induced proteins. This occurs in the absence of DNA damage. In the presence of DNA damage the binding of LexA is inactivated by the RecA activator. SOS boxes differ in DNA sequences and binding affinity towards LexA from organism to organism. Furthermore, SOS boxes may be present in a dual fashion, which indicates that more than one SOS box can be within the same promoter.
Examples
See Nucleic acid nomenclature for an explanation of non-GATC nucleotide letters.
See also
SOS response
SOS gene
LexA
RecA
|
https://en.wikipedia.org/wiki/Inductive%20data%20type
|
Inductive data type may refer to:
Algebraic data type, a datatype each of whose values is data from other datatypes wrapped in one of the constructors of the datatype
Inductive family, a family of inductive data types indexed by another type or value
Recursive data type, a data type for values that may contain other values of the same type
See also
Inductive type
Induction (disambiguation)
Type theory
Dependently typed programming
|
https://en.wikipedia.org/wiki/Homonym%20%28biology%29
|
In biology, a homonym is a name for a taxon that is identical in spelling to another such name, that belongs to a different taxon.
The rule in the International Code of Zoological Nomenclature is that the first such name to be published is the senior homonym and is to be used (it is "valid"); any others are junior homonyms and must be replaced with new names. It is, however, possible that if a senior homonym is archaic, and not in "prevailing usage," it may be declared a nomen oblitum and rendered unavailable, while the junior homonym is preserved as a nomen protectum.
For example:
Cuvier proposed the genus Echidna in 1797 for the spiny anteater.
However, Forster had already published the name Echidna in 1777 for a genus of moray eels.
Forster's use thus has priority, with Cuvier's being a junior homonym.
Illiger published the replacement name Tachyglossus in 1811.
Similarly, the International Code of Nomenclature for algae, fungi, and plants (ICN) specifies that the first published of two or more homonyms is to be used: a later homonym is "illegitimate" and is not to be used unless conserved (or sanctioned, in the case of fungi).
Example: the later homonym Myroxylon L.f. (1782), in the family Leguminosae, is conserved against the earlier homonym Myroxylon J.R.Forst. & G.Forst. (1775) (now called Xylosma, in the family Salicaceae).
Parahomonyms
Under the botanical code, names that are similar enough that they are likely to be confused are also considered to be homonymous (article 53.3). For example, Astrostemma Benth. (1880) is an illegitimate homonym of Asterostemma Decne. (1838). The zoological code has a set of spelling variations (article 58) that are considered to be identical.
Hemihomonyms
Both codes only consider taxa that are in their respective scope (animals for the ICZN; primarily plants for the ICN). Therefore, if an animal taxon has the same name as a plant taxon, both names are valid. Such names are called hemihomonyms. For example, the name E
|
https://en.wikipedia.org/wiki/Bead%20theory
|
The bead theory is a disproved hypothesis that genes are arranged on the chromosome like beads on a necklace. This theory was first proposed by Thomas Hunt Morgan after discovering genes through his work with breeding red and white eyed fruit flies. According to this theory, the existence of a gene as a unit of inheritance is recognized through its mutant alleles. A mutant allele affects a single phenotypic character, maps to one chromosome locus, gives a mutant phenotype when paired and shows a Mendelian ratio when intercrossed. Several tenets of the bead theory are worth emphasizing:
1. The gene is viewed as a fundamental unit of structure, indivisible by crossing over. Crossing over takes place between genes (the beads in this model) but never within them.
2. The gene is viewed as the fundamental unit of change or mutation. It changes in toto from one allelic form into another; there are no smaller components within it that can change.
3. The gene is viewed as the fundamental unit of function (although the precise function of the gene is not specified in this model). Parts of a gene, if they exist, cannot function. Guido Pontecorvo continued to work on the basis of this theory until
Seymour Benzer showed in the 1950s that the bead theory was not correct. He demonstrated that a gene can be defined as a unit of function. A gene can be subdivided into a linear array of sites that are mutable and that can be recombined. The smallest units of mutation and recombination are now known to be correlated with single nucleotide pairs.
|
https://en.wikipedia.org/wiki/Spacer%20DNA
|
Spacer DNA is a region of non-coding DNA between genes. The terms intergenic spacer (IGS) or non-transcribed spacer (NTS) are used particularly for the spacer DNA between the many tandemly repeated copies of the ribosomal RNA genes.
In bacteria, spacer DNA sequences are only a few nucleotides long. In eukaryotes, they can be extensive and include repetitive DNA, comprising the majority of the DNA of the genome. In ribosomal DNA, there are spacers within and between gene clusters, called internal transcribed spacer (ITS) and external transcribed spacers (ETS), respectively. In animals, the mitochondrial DNA genes generally have very short spacers. In fungi, mitochondrial DNA spacers are common and variable in length, and they may also be mobile.
Due to the non-coding nature of spacer DNA, its nucleotide sequence changes much more rapidly over time than nucleotide sequences coding for genes that are subject to selective forces. Although spacer DNA might not have a function that depends on its nucleotide sequence, it may have sequence-independent functions.
Spacer DNA has practical applications that enable researchers and scientists to examine interactions between CRISPR proteins and bacteriophages.
|
https://en.wikipedia.org/wiki/Introitus
|
An introitus is an entrance into a canal or hollow organ. The vaginal introitus is the opening that leads to the vaginal canal.
|
https://en.wikipedia.org/wiki/Momentum%20map
|
In mathematics, specifically in symplectic geometry, the momentum map (or, by false etymology, moment map) is a tool associated with a Hamiltonian action of a Lie group on a symplectic manifold, used to construct conserved quantities for the action. The momentum map generalizes the classical notions of linear and angular momentum. It is an essential ingredient in various constructions of symplectic manifolds, including symplectic (Marsden–Weinstein) quotients, discussed below, and symplectic cuts and sums.
Formal definition
Let M be a manifold with symplectic form ω. Suppose that a Lie group G acts on M via symplectomorphisms (that is, the action of each g in G preserves ω). Let 𝔤 be the Lie algebra of G, 𝔤* its dual, and ⟨·, ·⟩ the pairing between the two. Any ξ in 𝔤 induces a vector field ρ(ξ) on M describing the infinitesimal action of ξ. To be precise, at a point x in M the vector is
(ρ(ξ))ₓ = d/dt|ₜ₌₀ (exp(tξ) · x),
where exp is the exponential map and · denotes the G-action on M. Let ι_{ρ(ξ)}ω denote the contraction of this vector field with ω. Because G acts by symplectomorphisms, it follows that ι_{ρ(ξ)}ω is closed (for all ξ in 𝔤).
Suppose that ι_{ρ(ξ)}ω is not just closed but also exact, so that ι_{ρ(ξ)}ω = dH_ξ for some function H_ξ. If this holds, then one may choose the H_ξ to make the map ξ ↦ H_ξ linear. A momentum map for the G-action on (M, ω) is a map μ : M → 𝔤* such that
d(μ^ξ) = ι_{ρ(ξ)}ω
for all ξ in 𝔤. Here μ^ξ is the function from M to R defined by μ^ξ(x) = ⟨μ(x), ξ⟩. The momentum map is uniquely defined up to an additive constant of integration (on each connected component).
A G-action on a symplectic manifold is called Hamiltonian if it is symplectic and if there exists a momentum map.
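A standard worked example (added here for illustration; sign conventions vary between authors): let G = S¹ act on M = R² by rotations, with ω = dx ∧ dy. The generator ξ = 1 of Lie(S¹) ≅ R induces the rotational vector field, and contracting it with ω gives an exact one-form, so the action is Hamiltonian:

```latex
\rho(\xi) = x\,\partial_y - y\,\partial_x, \qquad
\iota_{\rho(\xi)}\omega = -y\,dy - x\,dx
  = -\,d\!\left(\tfrac{1}{2}\left(x^2 + y^2\right)\right),
```

so μ(x, y) = −½(x² + y²) (or +½(x² + y²) under the opposite sign convention) is a momentum map, recovering the classical angular momentum about the origin.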
A momentum map is often also required to be G-equivariant, where G acts on 𝔤* via the coadjoint action, and sometimes this requirement is included in the definition of a Hamiltonian group action. If the group is compact or semisimple, then the constant of integration can always be chosen to make the momentum map coadjoint equivariant. However, in general the coadjoint action must be modified to make the map
|
https://en.wikipedia.org/wiki/DNS%20hosting%20service
|
A DNS hosting service is a service that runs Domain Name System (DNS) servers. Most, but not all, domain name registrars include DNS hosting service with registration. Free DNS hosting services also exist. Many third-party DNS hosting services provide dynamic DNS.
DNS hosting service is optimal when the provider has multiple servers in various geographic locations that provide resilience and minimize latency for clients around the world. By operating DNS nodes closer to end users, DNS queries travel a much shorter distance, resulting in faster Web address resolution speed.
DNS can also be self-hosted by running on generic Internet hosting services.
Free DNS
A number of sites offer free DNS hosting, either for second level domains registered with registrars which do not offer free (or sufficiently flexible) DNS service, or as third level domains (selection.somedomain.com). These services generally also offer Dynamic DNS. Free DNS typically includes facilities to manage A, MX, CNAME, TXT and NS records of the domain zone. In many cases the free services can be upgraded with various premium services.
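Purely as an illustration of the record types listed above, a zone managed through such a service might contain entries like the following (the names and addresses are placeholders from documentation ranges, not taken from this article):

```text
example.com.      IN  A      192.0.2.10         ; IPv4 address record
www.example.com.  IN  CNAME  example.com.       ; alias for another name
example.com.      IN  MX 10  mail.example.com.  ; mail exchanger, priority 10
example.com.      IN  TXT    "v=spf1 -all"      ; free-form text (e.g. SPF)
example.com.      IN  NS     ns1.example.com.   ; delegated name server
```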
Free DNS service providers can also make money through sponsorship. The majority of modern free DNS services are sponsored by large providers of telecommunication services.
See also
Domain Name System
Fast-flux DNS
Remote backup service
List of DNS record types
List of managed DNS providers
|
https://en.wikipedia.org/wiki/Internet%20hosting%20service
|
An Internet hosting service is a service that runs servers connected to the Internet, allowing organizations and individuals to serve content or host services connected to the Internet.
A common kind of hosting is web hosting. Most hosting providers offer a combination of services: e-mail hosting, website hosting, and database hosting, for example. DNS hosting service, another type of service usually provided by hosting providers, is often bundled with domain name registration.
Dedicated server hosts provide a server, usually housed in a datacenter and connected to the Internet, on which clients can run anything they want (including web servers and other servers). The hosting provider ensures that the servers have Internet connections with good upstream bandwidth and reliable power sources.
Another popular kind of hosting service is shared hosting. This is a type of web hosting service where the hosting provider provisions hosting services for multiple clients on one physical server and shares the resources between the clients. Virtualization is key to making this work effectively.
Types of hosting service
Full-featured hosting services
Full-featured hosting services include:
Complex managed hosting applies to both physical dedicated servers and virtual servers, with many companies choosing a hybrid (a combination of physical and virtual) hosting solution. There are many similarities between standard and complex managed hosting, but the key difference is the level of administrative and engineering support that the customer pays for, owing to both the increased size and complexity of the infrastructure deployment. The provider steps in to take over most of the management, including security, memory, storage, and IT support. The service is primarily proactive.
Dedicated hosting service, also called managed hosting service, where the hosting service provider owns and manages the machine, leasing full control to the client. Management of the server can includ
|
https://en.wikipedia.org/wiki/Email%20hosting%20service
|
An email hosting service is an Internet hosting service that operates email servers.
Features
Email hosting services usually offer premium email as opposed to advertisement-supported free email or free webmail. Email hosting services thus differ from typical end-user email providers such as webmail sites. They cater mostly to demanding email users and small and medium-sized (SME) businesses, while larger enterprises usually run their own email hosting services on their own equipment using software such as Microsoft Exchange Server, IceWarp or Postfix. Hosting providers can manage a user's own domain name, including any email authentication scheme that the domain owner wishes to enforce to convey the meaning that using a specific domain name identifies and qualifies email senders.
Types
There are various types of email hosting services. These vary according to the storage space available, the location of the mailboxes, and functionality.
Various hosting providers offer this service through two models: traditional email hosting and per-mailbox hosting. Traditional email hosting charges a set amount for a certain number of mailboxes, whereas the per-mailbox model charges for each mailbox needed.
These include:
Free Email Services using a public domain, such as Gmail or Yahoo. These are more suitable for individual and personal use.
Shared Hosting Email Services are large mailboxes that are hosted on a server. People on a shared hosting email service share IP addresses as they are hosted on the same server.
Cloud Email Services are suitable for small companies and SMEs. These mailboxes are hosted externally utilizing a cloud service provider. Examples of these are G Suite by Google and Microsoft Exchange by Microsoft.
Enterprise Email Solutions are suitable for SMEs and large corporations that host several mailboxes. In some cases these are located on dedicated servers on the premises however they can be located on a cloud based server that can scale horizontally
|
https://en.wikipedia.org/wiki/Comparison%20of%20file%20comparison%20tools
|
This article compares computer software tools used for comparing files of various types. The file types addressed by individual file comparison apps vary, but may include text, symbols, images, audio, or video. This category of software tool is often called "file comparison" or "diff tool"; the terms are effectively equivalent, though "diff" is more commonly associated with the Unix diff utility.
A typical rudimentary case is the comparison of one file against another. However, it may also include comparisons between two populations of files, such as when comparing directories or folders as part of file management. For instance, this might be to detect problems with corrupted backup versions of a collection of files, or to validate that a package of files complies with standards before publishing.
Note that comparisons must be made between files of the same type: a text file cannot be compared to a picture containing text unless optical character recognition (OCR) is first used to extract the text. Likewise, text cannot be compared to spoken words unless the speech is first transcribed, and text in one language cannot be compared to text in another unless one is translated.
A further consideration is that the two files being compared must be substantially similar. Even different revisions of the same document may be hard to compare if many changes have accumulated through additions, removals, or moved content. Saving versions of a critical document frequently therefore makes later file comparisons easier to interpret.
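As a concrete sketch of a rudimentary two-file comparison, Python's standard difflib module can produce a unified-format diff similar to the output of the Unix diff utility (the file names and contents below are invented for illustration):

```python
import difflib

# Two revisions of the same short "file", as lists of lines.
old = ["apple\n", "banana\n", "cherry\n"]
new = ["apple\n", "blueberry\n", "cherry\n"]

# unified_diff yields a diff in the unified format, like `diff -u`:
# unchanged lines are prefixed with a space, removals with '-',
# additions with '+'.
diff = list(difflib.unified_diff(old, new, fromfile="a.txt", tofile="b.txt"))
print("".join(diff))
```

The same module also offers SequenceMatcher for similarity ratios and HtmlDiff for side-by-side HTML output.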
A "diff" file comparison tool is a vital time and labor saving utility, because it aids in accomplishing tedious comparisons. Thus, it is a vital part of demanding comparison processes employed by individuals, academics, legal arena, forensics field, and ot
|
https://en.wikipedia.org/wiki/Plantlet
|
A plantlet is a young or small plant, produced on the leaf margins or the aerial stems of another plant.
Many plants such as spider plants naturally create stolons with plantlets on the ends as a form of asexual reproduction. Vegetative propagules or clippings of mature plants may form plantlets.
An example is mother of thousands. Many plants reproduce by throwing out long shoots or runners that can grow into new plants. Mother of thousands appears to have lost the ability to reproduce sexually and make seeds, but transferred at least part of the embryo-making process to the leaves to make plantlets.
See also
Apomixis
Plant propagation
Plant reproduction
|
https://en.wikipedia.org/wiki/Commutativity%20of%20conjunction
|
In propositional logic, the commutativity of conjunction is a valid argument form and truth-functional tautology. It is considered to be a law of classical logic. It is the principle that the conjuncts of a logical conjunction may switch places with each other, while preserving the truth-value of the resulting proposition.
Formal notation
Commutativity of conjunction can be expressed in sequent notation as:
P ∧ Q ⊢ Q ∧ P
and
Q ∧ P ⊢ P ∧ Q
where ⊢ is a metalogical symbol meaning that Q ∧ P is a syntactic consequence of P ∧ Q, in the one case, and P ∧ Q is a syntactic consequence of Q ∧ P in the other, in some logical system;
or in rule form:
(P ∧ Q) / (Q ∧ P)
and
(Q ∧ P) / (P ∧ Q)
where the rule is that wherever an instance of "P ∧ Q" appears on a line of a proof, it can be replaced with "Q ∧ P", and wherever an instance of "Q ∧ P" appears on a line of a proof, it can be replaced with "P ∧ Q";
or as the statement of a truth-functional tautology or theorem of propositional logic:
(P ∧ Q) → (Q ∧ P)
and
(Q ∧ P) → (P ∧ Q)
where P and Q are propositions expressed in some formal system.
Generalized principle
For any propositions H1, H2, ..., Hn, and any permutation σ of the numbers 1 through n, it is the case that:
H1 ∧ H2 ∧ ... ∧ Hn
is equivalent to
Hσ(1) ∧ Hσ(2) ∧ ... ∧ Hσ(n).
For example, if H1 is
It is raining
H2 is
Socrates is mortal
and H3 is
2+2=4
then
It is raining and Socrates is mortal and 2+2=4
is equivalent to
Socrates is mortal and 2+2=4 and it is raining
and the other orderings of the predicates.
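Both the basic law and the generalized principle can be checked mechanically by enumerating every truth assignment; the following Python sketch (an illustration added here, not part of the article) does so for two and three conjuncts:

```python
from itertools import permutations, product

# Exhaustively verify (P and Q) == (Q and P) over all truth assignments.
for p, q in product([False, True], repeat=2):
    assert (p and q) == (q and p)

def conj(values):
    """Conjunction of an arbitrary sequence of truth values."""
    return all(values)

# Generalized principle: permuting the conjuncts H1..H3 never
# changes the value of the conjunction.
for h in product([False, True], repeat=3):
    for perm in permutations(h):
        assert conj(perm) == conj(h)

print("commutativity holds for all assignments tested")
```

Exhaustive enumeration works here because a formula with n propositional variables has only 2^n assignments.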
|
https://en.wikipedia.org/wiki/Adams%E2%80%93Oliver%20syndrome
|
Adams–Oliver syndrome (AOS) is a rare congenital disorder characterized by defects of the scalp and cranium (cutis aplasia congenita), transverse defects of the limbs, and mottling of the skin.
Signs and symptoms
Two key features of AOS are aplasia cutis congenita with or without underlying bony defects and terminal transverse limb defects. Cutis aplasia congenita is defined as missing skin over any area of the body at birth; in AOS skin aplasia occurs at the vertex of the skull. The size of the lesion is variable and may range from solitary round hairless patches to complete exposure of the cranial contents. There are also varying degrees of terminal limb defects (for example, shortened digits) of the upper extremities, lower extremities, or both. Individuals with AOS may have mild growth deficiency, with height in the low-normal percentiles. The skin is frequently observed to have a mottled appearance (cutis marmorata telangiectatica congenita). Other congenital anomalies, including cardiovascular malformations, cleft lip and/or palate, abnormal renal system, and neurologic disorders manifesting as seizure disorders and developmental delay, are sometimes observed. Variable defects in blood vessels have been described, including hypoplastic aortic arch, middle cerebral artery, and pulmonary arteries. Other vascular abnormalities described in AOS include absent portal vein, portal sclerosis, arteriovenous malformations, abnormal umbilical veins, and dilated renal veins.
Genetics
AOS was initially described as having autosomal dominant inheritance due to the reports of families with multiple affected family members in more than one generation. The severity of the condition can vary between family members, suggestive of variable expressivity and reduced penetrance of the disease-causing allele. Subsequently, it was reported that some cases of AOS appear to have autosomal recessive inheritance, perhaps with somewhat more severe phenotypic effects.
Six AOS genes have be
|
https://en.wikipedia.org/wiki/Alpha%20diversity
|
In ecology, alpha diversity (α-diversity) is the mean species diversity in a site at a local scale. The term was introduced by R. H. Whittaker together with the terms beta diversity (β-diversity) and gamma diversity (γ-diversity). Whittaker's idea was that the total species diversity in a landscape (gamma diversity) is determined by two different things, the mean species diversity in sites at a more local scale (alpha diversity) and the differentiation among those sites (beta diversity).
Scale considerations
Both the area or landscape of interest and the sites within it may be of very different sizes in different situations, and no consensus has been reached on what spatial scales are appropriate to quantify alpha diversity. It has therefore been proposed that the definition of alpha diversity does not need to be tied to a specific spatial scale: alpha diversity can be measured for an existing dataset that consists of subunits at any scale. The subunits can be, for example, sampling units that were already used in the field when carrying out the inventory, or grid cells that are delimited just for the purpose of analysis. If results are extrapolated beyond the actual observations, it needs to be taken into account that the species diversity in the subunits generally gives an underestimation of the species diversity in larger areas.
Different concepts
Ecologists have used several slightly different definitions of alpha diversity. Whittaker himself used the term both for the species diversity in a single subunit and for the mean species diversity in a collection of subunits. It has been argued that defining alpha diversity as a mean across all relevant subunits is preferable, because it agrees better with Whittaker's idea that total species diversity consists of alpha and beta components.
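The idea of alpha diversity as a mean across subunits can be sketched numerically; the following Python example (added for illustration, using the Shannon index as one of several possible diversity indices, with invented abundance data) computes the mean diversity over sampling units:

```python
from math import log

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) for one subunit."""
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)

def alpha_diversity(subunits):
    """Alpha diversity taken as the mean diversity across subunits."""
    return sum(shannon_index(s) for s in subunits) / len(subunits)

# Invented species abundance counts for three sampling units.
sites = [[10, 10, 10], [30, 1, 1], [5, 5]]
print(round(alpha_diversity(sites), 3))
```

The first site, with three equally abundant species, attains the maximum ln 3 for three species; the second, dominated by one species, scores much lower, pulling the mean down.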
Definitions of alpha diversity can also differ in what they assume species diversity to be. Often researchers use the values given by one or more diversity indices, such as specie
|
https://en.wikipedia.org/wiki/Adams%E2%80%93Nance%20syndrome
|
Adams–Nance syndrome is a medical condition consisting of persistent tachycardia, paroxysmal hypertension and seizures. It is associated with hyperglycinuria, dominantly inherited microphthalmia and cataracts. It is thought to be caused by a disturbance in glycine metabolism.
|
https://en.wikipedia.org/wiki/Executive%20functions
|
In cognitive science and neuropsychology, executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility. Higher-order executive functions require the simultaneous use of multiple basic executive functions and include planning and fluid intelligence (e.g., reasoning and problem-solving).
Executive functions gradually develop and change across the lifespan of an individual and can be improved at any time over the course of a person's life. Similarly, these cognitive processes can be adversely affected by a variety of events which affect an individual. Both neuropsychological tests (e.g., the Stroop test) and rating scales (e.g., the Behavior Rating Inventory of Executive Function) are used to measure executive functions. They are usually performed as part of a more comprehensive assessment to diagnose neurological and psychiatric disorders.
Cognitive control and stimulus control, which is associated with operant and classical conditioning, represent opposite processes (internal vs external or environmental, respectively) that compete over the control of an individual's elicited behaviors; in particular, inhibitory control is necessary for overriding stimulus-driven behavioral responses (stimulus control of behavior). The prefrontal cortex is necessary but not solely sufficient for executive functions; for example, the caudate nucleus and subthalamic nucleus also have a role in mediating inhibitory control.
Cognitive control is impaired in addiction, attention deficit hyperactivity disorder, autism, and a number of other central nervous system disorders. Stimulus-driven behavioral responses
|
https://en.wikipedia.org/wiki/Australasian%20Anti-Transportation%20League%20Flag
|
The Australasian Anti-Transportation League Flag is a flag used historically by members of the Australasian Anti-Transportation League, who opposed penal transportation to the British colonies that are now a part of Australia. It is particularly significant as it is the oldest known flag to feature a representation of the Southern Cross with the stars arranged as they are seen in the sky.
The flag was designed in 1849 by Reverend John West of Launceston, Tasmania, and from 1851 was used by the Australasian Anti-Transportation League in the Australian colonies and in New Zealand. The flag is based on the Blue Ensign (a blue background with the Union Flag in the canton) and has gold or yellow stars of the Southern Cross on the fly. Each of the stars of the Southern Cross was symbolic of a member colony. There is a white border around three sides of the flag, which was used to display the name of the League, the year it was established and the name of the colony where it was flown.
The flag was no longer used after transportation was ceased in 1853; however, the design of the flag is similar to several later flags, including the Flag of New Zealand, Flag of Victoria, and Flag of Australia.
See also
List of Australian flags
|
https://en.wikipedia.org/wiki/Tuberous%20sclerosis%20complex%20tumor%20suppressors
|
Tuberous sclerosis complex (TSC) tumor suppressors form the TSC1-TSC2 molecular complex. Under poor growth conditions the TSC1-TSC2 complex limits cell growth. A key promoter of cell growth, mTORC1, is inhibited by the tuberous sclerosis complex. Insulin activates mTORC1 and causes dissociation of TSC from the surface of lysosomes.
Resistance to ischemia-reperfusion injury by protein restriction is mediated by activation of the tuberous sclerosis complex.
|
https://en.wikipedia.org/wiki/Heteronuclear%20single%20quantum%20coherence%20spectroscopy
|
The heteronuclear single quantum coherence or heteronuclear single quantum correlation experiment, normally abbreviated as HSQC, is used frequently in NMR spectroscopy of organic molecules and is of particular significance in the field of protein NMR. The experiment was first described by Geoffrey Bodenhausen and D. J. Ruben in 1980. The resulting spectrum is two-dimensional (2D) with one axis for proton (1H) and the other for a heteronucleus (an atomic nucleus other than a proton), which is usually 13C or 15N. The spectrum contains a peak for each unique proton attached to the heteronucleus being considered. The 2D HSQC can also be combined with other experiments in higher-dimensional NMR experiments, such as NOESY-HSQC or TOCSY-HSQC.
General scheme
The HSQC experiment is a highly sensitive 2D-NMR experiment and was first described in a 1H—15N system, but is also applicable to other nuclei such as 1H—13C and 1H—31P. The basic scheme of this experiment involves the transfer of magnetization on the proton to the second nucleus, which may be 15N, 13C or 31P, via an INEPT (Insensitive nuclei enhanced by polarization transfer) step. After a time delay (t1), the magnetization is transferred back to the proton via a retro-INEPT step and the signal is then recorded. In HSQC, a series of experiments is recorded where the time delay t1 is incremented. The 1H signal is detected in the directly measured dimension in each experiment, while the chemical shift of 15N or 13C is recorded in the indirect dimension which is formed from the series of experiments.
HSQC in protein NMR
1H—15N HSQC
The 15N HSQC experiment is one of the most frequently recorded experiments in protein NMR. The HSQC experiment can be performed using the natural abundance of the 15N isotope, but normally for protein NMR, isotopically labeled proteins are used. Such labelled proteins are usually produced by expressing the protein in cells grown in 15N-labelled media.
Each residue of the protein, wi
|
https://en.wikipedia.org/wiki/Comparison%20of%20widget%20engines
|
This is a comparison of widget engines. This article is not about widget toolkits that are used in computer programming to build graphical user interfaces.
General
Operating system support
Technical
Languages
Which programming languages the engines support. Most engines rely upon interpreted languages.
Formats and Development
Development Tools
As widgets are largely combinations of HTML or XHTML, CSS, and JavaScript in most cases, standard AJAX tools, such as Eclipse ATF, can be used for development. Specialized tools may give access to additional capabilities supplied by frameworks such as Dojo or Openrico.
|
https://en.wikipedia.org/wiki/Homopolar%20generator
|
A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder) with an electrical polarity that depends on the direction of rotation and the orientation of the field. It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage. They are unusual in that they can source tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance. Also, the homopolar generator is unique in that no other rotary electric machine can produce DC without using rectifiers or commutators.
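A back-of-the-envelope derivation (added here for illustration, with assumed symbols) shows why the voltage is low. For a disc of radius a rotating at angular velocity ω in a uniform axial field B, a radial element at radius r moves with speed v = ωr, so the motional EMF between axle and rim is:

```latex
\mathcal{E} = \int_0^a v B \, dr
            = \int_0^a \omega B r \, dr
            = \tfrac{1}{2}\,\omega B a^2 .
% Example values: B = 1\,\mathrm{T},\ a = 0.1\,\mathrm{m},
% \omega = 100\pi\,\mathrm{rad/s}\ (3000\ \mathrm{rpm})
% give \mathcal{E} \approx 1.6\,\mathrm{V}.
```

The quadratic dependence on radius means that only very large, fast, or strongly magnetized machines reach more than a few volts, which is consistent with the large-current, low-voltage character described above.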
The Faraday disc
The first homopolar generator was developed by Michael Faraday during his experiments in 1831. It is frequently called the Faraday disc or Faraday wheel in his honor. It was the beginning of modern dynamos — that is, electrical generators which operate using a magnetic field. It was very inefficient and was not used as a practical power source, but it showed the possibility of generating electric power using magnetism, and led the way for commutated direct current dynamos and then alternating current alternators.
The Faraday disc was primarily inefficient due to counterflows of current. While current flow was induced directly underneath the magnet, the current would circulate backwards in regions outside the influence of the magnetic field. This counterflow limits the power output to the pickup wires, and induces waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of
|
https://en.wikipedia.org/wiki/Wigner%20crystal
|
A Wigner crystal is the solid (crystalline) phase of electrons first predicted by Eugene Wigner in 1934. A gas of electrons moving in a uniform, inert, neutralizing background (i.e. the jellium model) will crystallize and form a lattice if the electron density is less than a critical value. This is because the potential energy dominates the kinetic energy at low densities, so the detailed spatial arrangement of the electrons becomes important. To minimize the potential energy, the electrons form a bcc (body-centered cubic) lattice in 3D, a triangular lattice in 2D and an evenly spaced lattice in 1D. Most experimentally observed Wigner clusters exist due to the presence of an external confinement, i.e. an external potential trap. As a consequence, deviations from the bcc or triangular lattice are observed. A crystalline state of the 2D electron gas can also be realized by applying a sufficiently strong magnetic field. However, it is still not clear whether it is Wigner crystallization that has led to the observation of insulating behaviour in magnetotransport measurements on 2D electron systems, since other candidates are present, such as Anderson localization.
More generally, a Wigner crystal phase can also refer to a crystal phase occurring in non-electronic systems at low density. In contrast, most crystals melt as the density is lowered. Examples seen in the laboratory are charged colloids or charged plastic spheres.
Description
A uniform electron gas at zero temperature is characterised by a single dimensionless parameter, the so-called Wigner–Seitz radius rs = a / ab, where a is the average inter-particle spacing and ab is the Bohr radius. The kinetic energy of an electron gas scales as 1/rs², as can be seen for instance by considering a simple Fermi gas. The potential energy, on the other hand, is proportional to 1/rs. When rs becomes larger at low density, the latter becomes dominant and forces the electrons as far apart as possible. As a consequence, they
|
https://en.wikipedia.org/wiki/Language%20disorder
|
Language disorders or language impairments are disorders that involve the processing of linguistic information. Problems that may be experienced can involve grammar (syntax and/or morphology), semantics (meaning), or other aspects of language. These problems may be receptive (involving impaired language comprehension), expressive (involving language production), or a combination of both. Examples include specific language impairment, better defined as developmental language disorder, or DLD, and aphasia, among others. Language disorders can affect both spoken and written language, and can also affect sign language; typically, all forms of language will be impaired.
Current data indicate that 7% of young children display a language disorder, with boys being diagnosed twice as often as girls.
Preliminary research on potential risk factors have suggested biological components, such as low birth weight, prematurity, general birth complications, and male gender, as well as family history and low parental education can increase the chance of developing language disorders.
For children with phonological and expressive language difficulties, there is evidence supporting speech and language therapy. However, the same therapy is shown to be much less effective for receptive language difficulties. These results are consistent with the poorer prognosis for receptive language impairments that are generally accompanied with problems in reading comprehension.
Note that these are distinct from speech disorders, which involve difficulty with the act of speech production, but not with language.
Language disorders tend to manifest in two different ways: receptive language disorders (where one cannot properly comprehend language) and expressive language disorders (where one cannot properly communicate their intended message).
Receptive language disorders
Receptive language disorders can be acquired (as in the case of receptive aphasia) or developmental (most often the latter)
|
https://en.wikipedia.org/wiki/Solvent%20exposure
|
Solvent exposure occurs when a chemical, material, or person comes into contact with a solvent. Chemicals can be dissolved in solvents, materials such as polymers can be broken down chemically by solvents, and people can develop certain ailments from exposure to solvents both organic and inorganic.
Some common solvents include acetone, methanol, tetrahydrofuran, dimethylsulfoxide, and water among countless others.
In biology, the solvent exposure of an amino acid in a protein measures to what extent the amino acid is accessible to the solvent (usually water) surrounding the protein. Generally speaking, hydrophobic amino acids will be buried inside the protein and thus shielded from the solvent, while hydrophilic amino acids will be close to the surface and thus exposed to the solvent. However, as with many biological rules exceptions are common and hydrophilic residues are frequently found to be buried in the native structure and vice versa.
Solvent exposure can be numerically described by several measures, the most popular measures being accessible surface area and relative accessible surface area. Other measures are for example:
Contact number: number of amino acid neighbors within a sphere around the amino acid.
Residue depth: distance of the amino acid to the molecular surface.
Half sphere exposure: number of amino acid neighbors within two half spheres around the amino acid.
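For illustration, the contact number measure can be sketched in a few lines of Python. This is a toy example that assumes residues are represented by their Cα coordinates; the 12 Å cutoff is a common but not universal choice, and the coordinates below are invented.

```python
import math

def contact_number(coords, index, radius=12.0):
    """Count neighboring residues whose C-alpha atoms lie within
    `radius` angstroms of residue `index` (the residue itself excluded)."""
    center = coords[index]
    count = 0
    for i, point in enumerate(coords):
        if i == index:
            continue
        if math.dist(center, point) <= radius:
            count += 1
    return count

# Toy chain: four residues on a line, 5 angstroms apart
ca = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (10.0, 0.0, 0.0), (15.0, 0.0, 0.0)]
print(contact_number(ca, 0))  # 2: the residues at 5 and 10 angstroms
print(contact_number(ca, 1))  # 3: all other residues are within 12 angstroms
```

Residue depth and half sphere exposure follow the same pattern but use the distance to the molecular surface and two half-sphere neighborhoods, respectively.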
|
https://en.wikipedia.org/wiki/Microsoft%20Advertising
|
Microsoft Advertising (formerly Bing Ads) is an online advertising platform developed by Microsoft, where advertisers bid to display brief ads, service offers, product listings and videos to web users. It provides pay-per-click advertising on the search engines Bing, Yahoo! and DuckDuckGo, as well as on other websites, mobile apps, and videos.
In 2021, Microsoft Advertising surpassed US$10 billion in annual revenue.
History
Microsoft was the last of the "big three" search engines (which also includes Google and Yahoo!) to develop its own system for delivering pay per click (PPC) ads. Until the beginning of 2006, all of the ads displayed on the MSN Search engine were supplied by Overture (and later Yahoo!). MSN collected a portion of the ad revenue in return for displaying Yahoo!'s ads on its search engine.
As search marketing grew, Microsoft began developing its own system, MSN adCenter, for selling PPC advertisements directly to advertisers. As the system was phased in, MSN Search (now Bing) showed Yahoo! and adCenter advertising in its search results. Microsoft's effort to create adCenter was led by Tarek Najm, then general manager of the MSN division of Microsoft. When the contract between Yahoo! and Microsoft expired in June 2006, Microsoft began displaying only ads from adCenter, and did so until 2010.
In November 2006, Microsoft acquired Deep Metrix, a company based in Gatineau, Canada, that created web-analytics software. Microsoft built a new product, adCenter Analytics, based on the acquired technology. In October 2007, the beta version of Microsoft Project Gatineau was released to a limited number of participants.
In May 2007, Microsoft agreed to purchase the digital marketing solutions parent company, aQuantive, for roughly $6 billion. Microsoft later resold Atlas, a key piece of the aQuantive acquisition, to Facebook in 2013.
Microsoft acquired ScreenTonic on May 3, 2007, AdECN on July 26, 2007, and YaData on February 27, 2008, and merged their technologies
|
https://en.wikipedia.org/wiki/3-D%20Secure
|
3-D Secure is a protocol designed to be an additional security layer for online credit and debit card transactions. The name refers to the "three domains" which interact using the protocol: the merchant/acquirer domain, the issuer domain, and the interoperability domain.
The protocol was originally developed in the autumn of 1999 by Celo Communications AB (which was acquired by Gemplus Associates and integrated into Gemplus, Gemalto and now Thales Group) for Visa Inc. in a project named "p42" ("p" from Pole vault, as the project was a big challenge, and "42" as the answer from the book The Hitchhiker's Guide to the Galaxy).
An updated version was developed by Gemplus between 2000 and 2001.
In 2001, the protocol was further developed by Arcot Systems (now CA Technologies) and Visa Inc. with the intention of improving the security of Internet payments, and was offered to customers under the Verified by Visa brand (later rebranded as Visa Secure). Services based on the protocol have also been adopted by Mastercard as SecureCode (later rebranded as Identity Check), by Discover as ProtectBuy, by JCB International as J/Secure, and by American Express as American Express SafeKey. Later revisions of the protocol have been produced by EMVCo under the name EMV 3-D Secure. Version 2 of the protocol was published in 2016 with the aim of complying with new EU authentication requirements and resolving some of the shortcomings of the original protocol.
Analysis of the first version of the protocol by academia has shown it to have many security issues that affect the consumer, including a greater surface area for phishing and a shift of liability in the case of fraudulent payments.
Description and basic aspects
The basic concept of the protocol is to tie the financial authorization process with online authentication. This additional security authentication is based on a three-domain model (hence the "3-D" in the name). The three domains are:
Acquirer domain (the bank and the merchant to which the money is being paid),
Issuer domain (
|
https://en.wikipedia.org/wiki/Accessibility%20relation
|
An accessibility relation is a relation which plays a key role in assigning truth values to sentences in the relational semantics for modal logic. In relational semantics, a modal formula's truth value at a possible world w can depend on what is true at another possible world v, but only if the accessibility relation R relates w to v. For instance, if P holds at some world v such that wRv, the formula ◇P will be true at w. The fact wRv is crucial: if R did not relate w to v, then ◇P would be false at w unless P also held at some other world u such that wRu.
Accessibility relations are motivated conceptually by the fact that natural language modal statements depend on some, but not all, alternative scenarios. For instance, the sentence "It might be raining" is not generally judged true simply because one can imagine a scenario where it was raining. Rather, its truth depends on whether such a scenario is ruled out by available information. This fact can be formalized in modal logic by choosing an accessibility relation such that wRv iff v is compatible with the information that is available to the speaker in w.
This idea can be extended to different applications of modal logic. In epistemology, one can use an epistemic notion of accessibility where wRv holds for an individual iff the individual does not know something which would rule out the hypothesis that v is the actual state of affairs. In deontic modal logic, one can say that wRv iff v is a morally ideal world given the moral standards of w. In applications of modal logic to computer science, the so-called possible worlds can be understood as representing possible states, and the accessibility relation can be understood as a program. Then wRv iff running the program can transition the computer from state w to state v.
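The truth clauses for the modal operators can be sketched in a few lines of Python. This is a toy illustration only: the world names, the proposition "rain", and the dictionary encoding of the accessibility relation are all invented for the example.

```python
def possibly(R, valuation, world, prop):
    """<>prop is true at `world` iff prop holds at some v with (world, v) in R."""
    return any(valuation[v][prop] for v in R.get(world, []))

def necessarily(R, valuation, world, prop):
    """[]prop is true at `world` iff prop holds at every v with (world, v) in R."""
    return all(valuation[v][prop] for v in R.get(world, []))

# Two worlds: w accesses only v; "rain" holds at v but not at w.
R = {"w": ["v"], "v": []}
valuation = {"w": {"rain": False}, "v": {"rain": True}}

print(possibly(R, valuation, "w", "rain"))     # True: w accesses a rainy world
print(necessarily(R, valuation, "w", "rain"))  # True: rain holds at every accessible world
print(possibly(R, valuation, "v", "rain"))     # False: v accesses no world at all
```

Changing only the relation R, with the same valuation, changes which modal sentences come out true, which is exactly the role the accessibility relation plays in the semantics.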
Different applications of modal logic can suggest different restrictions on admissible accessibility relations, which can in turn lead to different validities. The mathematical study of how validities are tied to conditions on accessibility relations is known as modal correspondence theory.
S
|
https://en.wikipedia.org/wiki/Boss%20%28architecture%29
|
In architecture, a boss is a decorative knob on a ceiling, wall or sculpture.
Bosses can often be found in the ceilings of buildings, particularly at the keystones at the intersections of a rib vault. In Gothic architecture, such roof bosses (or ceiling bosses) are often intricately carved with foliage, heraldic devices or other decorations. Many feature animals, birds, or human figures or faces, sometimes realistic, but often Grotesque: the Green Man is a frequent subject.
The Romanesque Norwich Cathedral in Norfolk, United Kingdom, has the largest number of painted carved stone bosses in the world; an extensive and varied collection of over one thousand individual pieces. Many of these decorated bosses still bear the original gilt and pigments from the time of their creation.
Gallery
See also
Bossage
Lifting boss
Three hares
|
https://en.wikipedia.org/wiki/FM%20Global
|
FM Global is an American mutual insurance company based in Johnston, Rhode Island, United States, with offices worldwide, that specializes in loss prevention services primarily to large corporations throughout the world in the Highly Protected Risk (HPR) property insurance market sector. "FM Global" is the communicative name of the company, whereas the legal name is "Factory Mutual Insurance Company". FM Global has been named the "Best Property Insurer in the World" by Euromoney magazine.
The company employs a non-traditional business model whereby risk and premiums are determined by engineering analysis as opposed to historically based actuarial calculations. This business approach is centered on the belief that property losses can be prevented or mitigated. FM Global engineering personnel regularly visit insured locations to evaluate hazards and recommend improvements to their property or work practices to reduce physical and financial risks if a loss occurs.
History
During the depression of 1835, Zachariah Allen, a prominent textile mill owner, attempted to reduce the insurance premium on his Rhode Island, USA, mill by making property improvements that he believed would minimize the damage in case of fire. At that time, insurance premium increases for losses were shared among all insureds, regardless of individual loss history. The concept of loss prevention and control was virtually unheard of at the time. To Allen, a proactive approach to preventing losses made good economic sense.
After making considerable improvements to his mill, Allen requested a reduction in his premium, but was denied. He called upon other local textile mill owners who shared his loss prevention philosophy to create a mutual insurance company that would insure only factories with lower risks. This approach should result in fewer losses and smaller premium payments. Whatever premium remained at the end of the year would be returned to policyholders in the form of dividends. The group a
|
https://en.wikipedia.org/wiki/Acland%27s%20Video%20Atlas%20of%20Human%20Anatomy
|
Acland's Video Atlas of Human Anatomy is a series of anatomy lessons on video presented by Robert D. Acland. Dr. Acland was a professor of surgery in the division of plastic and reconstructive surgery at the University of Louisville School of Medicine. The Atlas was originally released as a series of VHS tapes, published individually between 1995 and 2003. The series was re-released in 2003 on DVD as Acland's DVD Atlas of Human Anatomy.
The series uses unembalmed human specimens to illustrate anatomical structures. Intended for use by medical, dental and medical science students, the video teaching aid uses simple language and high quality images.
The authors claim: "Each minute of the finished product took twelve hours to produce: five in creating the script, five in making the shots, and two in post-production."
Contents
Volume 1 - The Upper Extremity
Volume 2 - The Lower Extremity
Volume 3 - The Trunk (Musculoskeletal System)
Volume 4 - The Head and Neck: Part 1
Volume 5 - The Head and Neck: Part 2
Volume 6 - The Internal Organs and Reproductive System
Reception
The British Medical Journal wrote that "Robert Acland's video atlas series represents a powerful force against [...] perceived dumbing down and has set about reinvigorating the subject through its crystal clear presentation of human anatomy."
|
https://en.wikipedia.org/wiki/OMI%20cryptograph
|
The OMI cryptograph was a rotor cipher machine produced and sold by Italian firm Ottico Meccanica Italiana (OMI) in Rome.
The machine had seven rotors, including a reflecting rotor. The rotors stepped regularly. Each rotor could be assembled from two sections with different wiring: one section consisted of a "frame" containing ratchet notches, as well as some wiring, while the other section consisted of a "slug" with a separate wiring. The slug section fitted into the frame section, and different slugs and frames could be interchanged with each other. As a consequence, there were many permutations for the rotor selection.
The machine was offered for sale during the 1960s.
|
https://en.wikipedia.org/wiki/Focused%20Ultrasound%20Foundation
|
The Focused Ultrasound Foundation (FUSF) is a 501(c)(3) non-profit organization based in Charlottesville, Virginia, United States, that promotes the use of image-guided focused ultrasound. The foundation is primarily funded through philanthropic donations.
The Focused Ultrasound Foundation has received attention in part because of The Tumor, a novella by legal thriller writer John Grisham about a future glioma patient who benefits from focused ultrasound treatment. Grisham is distributing the book at no cost to raise awareness about the therapy and the Foundation. Referencing the book and the Foundation, Grisham states, “This is the most important book I have ever written. I have found no other cause that can potentially save so many lives.”
History
The Foundation was formed on January 3, 2005, and received charity status in November of the same year. In October 2005 the Foundation hosted the 5th International Symposium on Therapeutic Ultrasound at Harvard Medical School, which resulted in the largest-ever gathering of world experts in the use of ultrasound for the treatment of cancer and other disorders. Under the Foundation's charter it operates as an unincorporated association.
Activities
The Foundation compiles and reports on web-based news related to focused ultrasound in therapeutic and diagnostic medicine, and organizes and assists with symposia, discussions, and meetings pertaining to diagnostic and therapeutic focused ultrasound, as well as student research programs. There is a strong concentration on high intensity focused ultrasound (HIFU). The Foundation promotes collaborations between clinical and research groups via written and oral communication.
The Focused Ultrasound Foundation offers a fellowship program to high school and university students, allowing them to work with the Foundation's medical and research teams. It often partners with the University of Virginia and Xavier University of Louisiana to help students begin the
|
https://en.wikipedia.org/wiki/Semantic%20neural%20network
|
A semantic neural network (SNN) is based on John von Neumann's neural network [von Neumann, 1966] and Nikolai Amosov's M-Network. Von Neumann's network imposes limitations on link topology; SNN accepts cases without these limitations. Von Neumann's network can process only logical values, whereas SNN also accepts fuzzy values. All neurons in the von Neumann network are synchronized by clock ticks; to allow the later use of self-synchronizing circuit techniques, SNN accepts neurons that are either self-running or synchronized.
In contrast to the von Neumann network, semantic networks place no limitations on the topology of neurons. This makes the relative addressing of neurons, as used by von Neumann, impossible; absolute addressing must be used instead. Every neuron should have a unique identifier that provides direct access to another neuron. Of course, neurons interacting through axons and dendrites must know each other's identifiers. Absolute addressing can be modeled by using neuron specificity, as realized in biological neural networks.
The initial description of semantic networks [Dudar Z.V., Shuklin D.E., 2000] contains no description of self-reflection or self-modification abilities, but in [Shuklin D.E. 2004] a conclusion was drawn about the necessity of introspection and self-modification abilities in the system. To support these abilities, a concept of a pointer to a neuron is introduced. Pointers represent virtual connections between neurons. In this model, neuron bodies and the signals transferred through their connections represent a physical body, while the virtual connections between neurons represent an astral body. It is proposed to create models of artificial neural networks on the basis of a virtual machine supporting the opportunity for paranormal effects.
SNN is generally used for natural language processing.
Related models
Computational creativity
Semantic hashing
Semantic Pointer Architecture
Sp
|
https://en.wikipedia.org/wiki/Theoretical%20motivation%20for%20general%20relativity
|
A theoretical motivation for general relativity, including the motivation for the geodesic equation and the Einstein field equation, can be obtained from special relativity by examining the dynamics of particles in circular orbits about the Earth. A key advantage in examining circular orbits is that it is possible to know the solution of the Einstein Field Equation a priori. This provides a means to inform and verify the formalism.
General relativity addresses two questions:
How does the curvature of spacetime affect the motion of matter?
How does the presence of matter affect the curvature of spacetime?
The former question is answered with the geodesic equation. The second question is answered with the Einstein field equation. The geodesic equation and the field equation are related through a principle of least action. The motivation for the geodesic equation is provided in the section Geodesic equation for circular orbits. The motivation for the Einstein field equation is provided in the section Stress–energy tensor.
Geodesic equation for circular orbits
Kinetics of circular orbits
For definiteness consider a circular Earth orbit (helical world line) of a particle. The particle travels with speed v. An observer on Earth sees that length is contracted in the frame of the particle. A measuring stick traveling with the particle appears shorter to the Earth observer. Therefore the circumference of the orbit, which is in the direction of motion, appears longer than π times the diameter of the orbit.
In special relativity the 4-proper-velocity of the particle in the inertial (non-accelerating) frame of the earth is

  u = (γc, γv)

where c is the speed of light, v is the 3-velocity, and γ is

  γ = 1 / √(1 − v·v/c²).

The magnitude of the 4-velocity vector is always constant:

  u_μ u^μ = c²

where we are using the Minkowski metric

  η_μν = diag(1, −1, −1, −1).

The magnitude of the 4-velocity is therefore a Lorentz scalar.
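The Lorentz-scalar property can be checked numerically. The sketch below assumes the standard special-relativistic form u = (γc, γv) and a metric with signature (+, −, −, −); the orbital speed used is roughly that of a low Earth orbit.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def four_velocity(v3):
    """4-velocity (gamma*c, gamma*vx, gamma*vy, gamma*vz) for a 3-velocity v3."""
    speed2 = sum(vi * vi for vi in v3)
    gamma = 1.0 / math.sqrt(1.0 - speed2 / C**2)
    return (gamma * C,) + tuple(gamma * vi for vi in v3)

def minkowski_norm2(u):
    """u_mu u^mu with metric signature (+, -, -, -)."""
    return u[0] ** 2 - u[1] ** 2 - u[2] ** 2 - u[3] ** 2

# Orbital speed of a low Earth orbit, about 7.8 km/s
u = four_velocity((7800.0, 0.0, 0.0))
# The magnitude is c^2 regardless of the 3-velocity:
print(abs(minkowski_norm2(u) - C**2) / C**2 < 1e-9)  # True
```

Algebraically this is just γ²(c² − v·v) = c², which holds identically by the definition of γ.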
The 4-acceleration in the Earth (non-accelerating) frame is

  a = du/dτ

where τ is c times the proper time interval measured in the fra
|
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Spain
|
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Spain (ES), the following are the first-level political and administrative divisions.
Overall
NUTS Codes
Local administrative units
Below the NUTS levels, the two LAU (Local Administrative Units) levels are:
The LAU codes of Spain can be downloaded here:
NUTS codes
Older Codes
In the 2003 version, the two provinces of the Canary Islands were coded as follows:
See also
Subdivisions of Spain
ISO 3166-2 codes of Spain
FIPS region codes of Spain
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
ESPANA - NUTS level 2
ESPANA - NUTS level 3
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Provinces of Spain, Statoids.com
|
https://en.wikipedia.org/wiki/Proof%20of%20impossibility
|
In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility often are the resolutions to decades or centuries of work attempting to find a solution, eventually proving that there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.
The irrationality of the square root of 2 is one of the oldest proofs of impossibility. It shows that it is impossible to express the square root of 2 as a ratio of two integers. Another consequential proof of impossibility was Ferdinand von Lindemann's proof in 1882, which showed that the problem of squaring the circle cannot be solved because the number π is transcendental (i.e., non-algebraic), and that only a subset of the algebraic numbers can be constructed by compass and straightedge. Two other classical problems, trisecting the general angle and doubling the cube, were also proved impossible in the 19th century, and all of these problems gave rise to research into more complicated mathematical structures.
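The classical argument for the irrationality of the square root of 2 is short enough to state in full; a sketch in standard LaTeX proof style:

```latex
\begin{proof}
Suppose $\sqrt{2} = p/q$ with $p, q$ integers in lowest terms.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even; write $p = 2r$.
Substituting gives $4r^2 = 2q^2$, i.e.\ $q^2 = 2r^2$, so $q$ is even as well.
But $p$ and $q$ both even contradicts the lowest-terms assumption,
so no such ratio exists.
\end{proof}
```

Note the shape of the argument: rather than exhibiting any particular object, it shows that every candidate solution leads to a contradiction, which is what makes impossibility proofs harder than existence proofs.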
A problem that arose in the 16th century was creating a general formula using radicals to express the solution of any polynomial equation of fixed degree k, where k ≥ 5. In the 1820s, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) showed this to be impossible, using concepts such as solvable groups from Galois theory—a new sub-field of abstract algebra.
Some of the most important proofs of impossibility found in the 20th century were those related to undecidability
|
https://en.wikipedia.org/wiki/Software%20analyst
|
In a software development team, a software analyst is the person who monitors the software development process, performs configuration management, identifies safety, performance, and compliance issues, and prepares software requirements and specification documents (the Software Requirements Specification). The software analyst acts as the liaison between the software users and the software developers, conveying the demands of the users to the developers.
See also
Systems analyst
Application analyst
|
https://en.wikipedia.org/wiki/Super%20vector%20space
|
In mathematics, a super vector space is a ℤ₂-graded vector space, that is, a vector space over a field K with a given decomposition into subspaces of grade 0 and grade 1. The study of super vector spaces and their generalizations is sometimes called super linear algebra. These objects find their principal application in theoretical physics, where they are used to describe the various algebraic aspects of supersymmetry.
Definitions
A super vector space is a ℤ₂-graded vector space with decomposition

  V = V₀ ⊕ V₁.

Vectors that are elements of either V₀ or V₁ are said to be homogeneous. The parity of a nonzero homogeneous element x, denoted by |x|, is 0 or 1 according to whether it is in V₀ or V₁.
Vectors of parity 0 are called even and those of parity 1 are called odd. In theoretical physics, the even elements are sometimes called Bose elements or bosonic, and the odd elements Fermi elements or fermionic. Definitions for super vector spaces are often given only in terms of homogeneous elements and then extended to nonhomogeneous elements by linearity.
If V is finite-dimensional and the dimensions of V₀ and V₁ are p and q respectively, then V is said to have dimension p|q. The standard super coordinate space, denoted K^(p|q), is the ordinary coordinate space K^(p+q) where the even subspace is spanned by the first p coordinate basis vectors and the odd space is spanned by the last q.
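As a toy illustration of the grading of K^(p|q), the following sketch tags the coordinate basis vectors with parities. The function names are invented for the example; this tracks only parities, not the vector space structure itself.

```python
def standard_parities(p, q):
    """Parities of the p+q coordinate basis vectors of K^(p|q):
    the first p are even (parity 0), the last q are odd (parity 1)."""
    return [0] * p + [1] * q

def super_dimension(parities):
    """Super dimension p|q, returned as the pair (p, q)."""
    return (parities.count(0), parities.count(1))

par = standard_parities(3, 2)   # basis parities of K^(3|2)
print(super_dimension(par))     # (3, 2)
print(par[0], par[4])           # 0 1: first basis vector even, last one odd
```

The parity reversal operation described below simply swaps the two counts: a space of dimension p|q becomes one of dimension q|p.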
A homogeneous subspace of a super vector space is a linear subspace that is spanned by homogeneous elements. Homogeneous subspaces are super vector spaces in their own right (with the obvious grading).
For any super vector space V, one can define the parity reversed space ΠV to be the super vector space with the even and odd subspaces interchanged. That is,

  (ΠV)₀ = V₁,  (ΠV)₁ = V₀.
Linear transformations
A homomorphism, a morphism in the category of super vector spaces, from one super vector space to another is a grade-preserving linear transformation. A linear transformation f : V → W between super vector spaces is grade preserving if

  f(V₀) ⊆ W₀ and f(V₁) ⊆ W₁.

That is, it maps the even elements of V to even e
|
https://en.wikipedia.org/wiki/National%20Treasure%3A%20Book%20of%20Secrets
|
National Treasure: Book of Secrets is a 2007 American action-adventure film directed by Jon Turteltaub and produced by Jerry Bruckheimer. It is a sequel to the 2004 film National Treasure and the second film of the National Treasure franchise. The film stars Nicolas Cage in the lead role, Jon Voight, Harvey Keitel, Ed Harris, Diane Kruger, Justin Bartha, Bruce Greenwood and Helen Mirren. The film premiered in New York City on December 13, 2007, and Walt Disney Studios Motion Pictures released it in North America on December 21, 2007. Like its predecessor, it received mixed reviews from critics, who compared it unfavorably with the original, but it was a commercial success, grossing $459 million worldwide.
Plot
Five days after the end of the Civil War, John Wilkes Booth and Michael O'Laughlen, both members of the KGC, approach Thomas Gates to decode a message copied into Booth's diary. Thomas recognizes the message as a Playfair cipher, and translates it while Booth departs for Ford's Theatre to assassinate President Abraham Lincoln. Thomas solves the puzzle, but realizes Booth and O'Laughlen are trying to help the Confederacy, and rips the cipher's pages from the diary to burn them. O'Laughlen shoots Thomas and flees with the one surviving page fragment, and a dying Thomas tells his son Charles the keyword for the cipher.
More than 140 years later, famed treasure hunter Ben Gates tells Thomas' story at a Civilian Heroes conference. Black market dealer Mitch Wilkinson produces the page fragment, with Thomas Gates' name next to those of Mary Surratt and Dr. Samuel Mudd. The public believes Thomas helped kill Lincoln, and Ben and his father Patrick set out to disprove it. Using spectral imaging, Ben discovers traces of the cipher on the diary page, that, when solved using the keyword, points to the smaller Statue of Liberty in Paris. Traveling there, Ben and his friend Riley Poole discover an engraving referencing the Resolute desks. They head to London, reluctantly r
|
https://en.wikipedia.org/wiki/Continuous%20predicate
|
Continuous predicate is a term coined by Charles Sanders Peirce (1839–1914) to describe a special type of relational predicate that results as the limit of a recursive process of hypostatic abstraction.
Here is one of Peirce's definitive discussions of the concept:
When we have analyzed a proposition so as to throw into the subject everything that can be removed from the predicate, all that it remains for the predicate to represent is the form of connection between the different subjects as expressed in the propositional form. What I mean by "everything that can be removed from the predicate" is best explained by giving an example of something not so removable.
But first take something removable. "Cain kills Abel." Here the predicate appears as "— kills —." But we can remove killing from the predicate and make the latter "— stands in the relation — to —." Suppose we attempt to remove more from the predicate and put the last into the form "— exercises the function of relate of the relation — to —" and then putting "the function of relate to the relation" into another subject leave as predicate "— exercises — in respect to — to —." But this "exercises" expresses "exercises the function". Nay more, it expresses "exercises the function of relate", so that we find that though we may put this into a separate subject, it continues in the predicate just the same.
Stating this in another form, to say that "A is in the relation R to B" is to say that A is in a certain relation to R. Let us separate this out thus: "A is in the relation R¹ (where R¹ is the relation of a relate to the relation of which it is the relate) to R to B". But A is here said to be in a certain relation to the relation R¹. So that we can express the same fact by saying, "A is in the relation R¹ to the relation R¹ to the relation R to B", and so on ad infinitum.
A predicate which can thus be analyzed into parts all homogeneous with the whole I call a continuous predicate. It is very impor
|
https://en.wikipedia.org/wiki/NETCONF
|
The Network Configuration Protocol (NETCONF) is a network management protocol developed and standardized by the IETF. It was developed in the NETCONF working group and published in December 2006 as RFC 4741 and later revised in June 2011 and published as RFC 6241. The NETCONF protocol specification is an Internet Standards Track document.
NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices. Its operations are realized on top of a simple Remote Procedure Call (RPC) layer. The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. The protocol messages are exchanged on top of a secure transport protocol.
The NETCONF protocol can be conceptually partitioned into four layers:
The Content layer consists of configuration data and notification data.
The Operations layer defines a set of base protocol operations to retrieve and edit the configuration data.
The Messages layer provides a mechanism for encoding remote procedure calls (RPCs) and notifications.
The Secure Transport layer provides a secure and reliable transport of messages between a client and a server.
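As an illustration of the Messages and Operations layers, the following Python sketch builds a <get-config> request using only the standard library. The message-id value is arbitrary, and a real client would send the serialized bytes over a secure transport such as SSH with the framing defined in RFC 6241.

```python
import xml.etree.ElementTree as ET

# NETCONF base namespace from RFC 6241
NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id, datastore="running"):
    """Build a NETCONF <get-config> RPC: the <rpc> wrapper is the Messages
    layer, <get-config> is the Operations layer, and the returned data
    would be the Content layer."""
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": str(message_id)})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    source = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(source, f"{{{NS}}}{datastore}")
    return ET.tostring(rpc, encoding="unicode")

print(build_get_config(101))
```

The serialized message nests the four layers visibly: the operation sits inside the RPC envelope, and the whole envelope is what the secure transport carries.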
The NETCONF protocol has been implemented in network devices such as routers and switches by some major equipment vendors. One particular strength of NETCONF is its support for robust configuration change using transactions involving a number of devices.
History
The IETF developed the Simple Network Management Protocol (SNMP) in the late 1980s and it proved to be a very popular network management protocol. In the early part of the 21st century it became apparent that in spite of what was originally intended, SNMP was not being used to configure network equipment, but was mainly being used for network monitoring. In June 2002, the Internet Architecture Board and key members of the IETF's network management community got together with network operators to discuss the situati
|
https://en.wikipedia.org/wiki/Render%20output%20unit
|
In computer graphics, the render output unit (ROP) or raster operations pipeline is a hardware component in modern graphics processing units (GPUs) and one of the final steps in the rendering process of modern graphics cards. The pixel pipelines take pixel (each pixel is a dimensionless point) and texel information and process it, via specific matrix and vector operations, into a final pixel or depth value; this process is called rasterization. Thus, ROPs control antialiasing, when more than one sample is merged into one pixel. The ROPs perform the transactions between the relevant buffers in the local memory – this includes writing or reading values, as well as blending them together.
Dedicated antialiasing hardware used to perform hardware-based antialiasing methods like MSAA is contained in ROPs.
All rendered data has to travel through the ROP in order to be written to the framebuffer; from there it can be transmitted to the display.
Therefore, the ROP is where the GPU's output is assembled into a bitmapped image ready for display.
Historically the number of ROPs, texture mapping units (TMUs), and shader processing units/stream processors have been equal. However, from 2004, several GPUs have decoupled these areas to allow optimum transistor allocation for application workload and available memory performance. As the trend continues, it is expected that graphics processors will continue to decouple the various parts of their architectures to enhance their adaptability to future graphics applications. This design also allows chip makers to build a modular line-up, where the top-end GPUs are essentially using the same logic as the low-end products.
See also
Graphics pipeline
Rendering (computer graphics)
Execution unit
|
https://en.wikipedia.org/wiki/Annuities%20in%20the%20United%20States
|
In the United States, an annuity is a financial product which offers tax-deferred growth and which usually offers benefits such as an income for life. Typically these are offered as structured (insurance) products that each state approves and regulates in which case they are designed using a mortality table and mainly guaranteed by a life insurer. There are many different varieties of annuities sold by carriers. In a typical scenario, an investor (usually the annuitant) will make a single cash premium to own an annuity. After the policy is issued the owner may elect to annuitize the contract (start receiving payments) for a chosen period of time (e.g., 5, 10, 20 years, a lifetime). This process is called annuitization and can also provide a predictable, guaranteed stream of future income during retirement until the death of the annuitant (or joint annuitants). Alternatively, an investor can defer annuitizing their contract to get larger payments later, hedge long-term care cost increases, or maximize a lump sum death benefit for a named beneficiary.
History
Although annuities have existed in their present form only for a few decades, the idea of paying out a stream of income to an individual or family dates back to the Roman Empire. The Latin word annua meant annual stipends, and during the reign of the emperors, the word signified a contract that made annual payments. Individuals would make a single large payment into the annua and then receive an annual payment each year until death, or for a specified period of time. The Roman speculator and jurist Gnaeus Domitius Annius Ulpianus is cited as one of the earliest dealers of these annuities, and he is also credited with creating the first actuarial life table. Roman soldiers were paid annuities as a form of compensation for military service. During the Middle Ages, annuities were used by feudal lords and kings to help raise capital to cover the heavy costs of their constant wars and conflicts with each other. At th
|
https://en.wikipedia.org/wiki/Betty%20Holberton
|
Frances Elizabeth Holberton (March 7, 1917 – December 8, 2001) was an American computer scientist who was one of the six original programmers of the first general-purpose electronic digital computer, ENIAC. The other five ENIAC programmers were Jean Bartik, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence.
Holberton invented breakpoints in computer debugging.
Early life and education
Holberton was born Frances Elizabeth Snyder in Philadelphia, Pennsylvania in 1917. Her father was John Amos Snyder (1884–1963), her mother was Frances J. Morrow (1892–1981), and she was the third child in a family of eight children.
Holberton studied journalism, because its curriculum let her travel far afield. Journalism was also one of the few fields open to women as a career in the 1940s. On her first day of classes at the University of Pennsylvania, her math professor asked her if she wouldn't be better off at home raising children.
Career
During World War II, the U.S. Army needed to compute ballistics trajectories, and many women were hired for this task. Holberton was hired by the Moore School of Engineering to work as a "computer" and was chosen to be one of the six women to program the ENIAC, the Electronic Numerical Integrator And Computer. Classified as "subprofessionals", Holberton, along with Kay McNulty, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas, programmed the ENIAC to perform calculations for ballistics trajectories electronically for the Army's Ballistic Research Laboratory.
In the beginning, because the ENIAC was classified, the women were only allowed to work with blueprints and wiring diagrams in order to program it. During her time working on ENIAC she had many productive ideas that came to her overnight, leading other programmers to jokingly state that she "solved more problems in her sleep than other people did awake."
The ENIAC was unveiled on February 15, 1946, at the University of Pennsylvania.
|
https://en.wikipedia.org/wiki/Steven%20Block
|
Steven M. Block (born 1952) is an American biophysicist and Professor at Stanford University with a joint appointment in the departments of Biology and Applied Physics. In addition, he is a member of the scientific advisory group JASON, a senior fellow of Stanford's Freeman Spogli Institute for International Studies, and an amateur bluegrass musician. Block received his B.A. and M.A. from Oxford University. He has been elected to the U.S. National Academy of Sciences (2007) and the American Academy of Arts and Sciences (2000), and is a winner of the Max Delbruck Prize of the American Physical Society (2008), as well as the Single Molecule Biophysics Prize of the Biophysical Society (2007). He served as President of the Biophysical Society during 2005-6. His graduate work was completed in the laboratory of Howard Berg at the University of Colorado and Caltech. He received his Ph.D. in 1983 and went on to do postdoctoral research at Stanford. Since that time, Block has held positions at the Rowland Institute for Science, Harvard University, and Princeton University before returning to Stanford in 1999.
As a graduate student, Block picked apart the adaptation kinetics involved in bacterial chemotaxis. As an independent scientist, Block has pioneered the use of optical tweezers, a technique developed by Arthur Ashkin, to study biological enzymes and polymers at the single-molecule level. Work in his lab has led to the direct observation of the 8 nm steps taken by kinesin and the sub-nanometer stepping motions of RNA polymerase on a DNA template. While consulting for the United States government through JASON, Block has researched the many threats associated with bioterrorism and headed influential studies on how advances in genetic engineering have impacted biological warfare.
Selected publications
|
https://en.wikipedia.org/wiki/Field%20Studies%20Council
|
Field Studies Council is an educational charity based in the UK, which offers opportunities for people to learn about and engage with the outdoors.
History
It was established as the Council for the Promotion of Field Studies in 1943 with the vision to provide opportunities for school children to study plants and animals in their natural environment. It subsequently became a nationwide provider of outdoor education, delivering opportunities for people of all ages and abilities to discover and explore the environment in many different forms, and has established a network of field centres providing facilities for people wanting to study natural history, ecology and the environment.
Activities
Field Studies Council provides outdoor educational residential or day visits from the organisation's centres, and other outreach areas, including London parks.
The centres include:
Amersham Field Centre, Buckinghamshire
Bishops Wood, Worcestershire
Blencathra, Cumbria
Castle Head, Grange-over-Sands, Cumbria
Dale Fort, Pembrokeshire
Epping Forest, Essex
Flatford Mill, Colchester
Juniper Hall, Surrey
London Parks: Bushy Park, Greenwich Park and Regent's Park
Margam Park, Neath Port Talbot
Millport, North Ayrshire
Nettlecombe Court, Somerset
Preston Montford, Shropshire
Rhyd-y-creuau, Conwy
Slapton Ley, Devon
The Field Studies Council creates a programme covering a wide variety of outdoor education, including fieldwork in geography and biology. These fieldwork opportunities allow students to develop practical investigative skills and to evaluate and analyse data they collect themselves, as well as data already held by the organisation.
The Field Studies Council also publishes fold-out charts and guides. BioLinks South East and BioLinks West Midlands are lottery funded schemes set up to strengthen UK biological recording.
With the goal of promoting and improving geography fieldwork, the Field Studies Council has entered into a partnership with The
|
https://en.wikipedia.org/wiki/Initial%20stability
|
Initial stability or primary stability is the resistance of a boat to small changes in the difference between the vertical forces applied on its two sides. The study of initial stability and secondary stability are part of naval architecture as applied to small watercraft (as distinct from the study of ship stability concerning large ships).
Determination
Initial stability is determined by the angle of tilt on each side of the boat as its center of gravity (CG) moves sideways, whether as a result of passengers or cargo moving laterally or in response to an external force (e.g., a wave).
The wider the boat and the further its volume is distributed away from its center line (CL), the greater the initial stability.
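For an idealized rectangular (box-shaped) hull, this beam effect can be sketched with the textbook metacentric-height relation GM = KB + BM − KG, where BM = I/V and the waterplane second moment is I = L·B³/12. This is a simplified illustration, not a naval-architecture design tool:

```python
# Metacentric height GM for an idealized rectangular barge (textbook sketch).
# For a box hull: KB = draft/2 and BM = I/V = (L*B^3/12)/(L*B*T) = B^2/(12*T).
def box_barge_gm(beam, draft, kg):
    """GM = KB + BM - KG; a larger GM means greater initial stability."""
    kb = draft / 2
    bm = beam ** 2 / (12 * draft)
    return kb + bm - kg

narrow = box_barge_gm(beam=2.0, draft=0.5, kg=0.4)   # canoe-like proportions
wide = box_barge_gm(beam=4.0, draft=0.5, kg=0.4)     # johnboat-like proportions
# Doubling the beam quadruples BM, so the wide hull is far stiffer initially.
assert wide > narrow
```

Because BM grows with the cube of beam in the waterplane moment, beam dominates initial stability, which is why wide flat-bottomed boats feel so much stiffer than narrow hulls of similar displacement.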
Examples
Wide mono-hull small boats such as the johnboat have a great deal of initial stability and allow the occupants to stand upright to engage in fishing activities, and so do narrower small boats such as W-kayaks that feature a twin hull.
Very narrow mono-hull boats such as canoes and kayaks have little initial stability, but twin-hull W-kayaks are considerably more stable due to the fact that their buoyancy is distributed at a greater distance from their center line and therefore acts more effectively to reduce tilting.
For purposes of stability, it is advantageous to keep the centre of gravity as low as possible in small boats, so occupants are generally seated. Flatwater rowing shells, which have length-to-beam ratios of up to 30:1, are inherently unstable.
Compared to secondary stability
After approximately 10 degrees of lateral tilt, hull shape gains importance, and secondary stability becomes the dominant consideration in boat stability.
Other types of ship stability
Secondary stability
Tertiary stability: For kayak rolling, tertiary stability, or the stability of an upside-down kayak, is also important (lower tertiary stability makes rolling up easier)
See also
Ship stability
Kayak#Types of stability
Limit of positive stabi
|
https://en.wikipedia.org/wiki/Pontiki
|
Pontiki is a construction toy for building models of unusual creatures (which are also referred to as pontiki). Pontiki are constructed from colourful plastic components of two different types: a hollow shape dotted with holes, representing the creature's body, and smaller parts which fit into the holes, representing features such as eyes and limbs.
The body parts come in four standard shapes, a cube, a cylinder, a cone and an egg. They can be divided into sections and combined with the body parts of other pontiki. In addition to the standard pontiki, there is a pontiki with a pull-back car for a body, called Chovica, and a six-legged moving pontiki, called Pootiki.
Pontiki may also host three kinds of rare mini pontiki called parasites. The most common is Parapara, a dog-like mini pontiki. The others are a human-like mini pontiki called Eric and, rarest of all, a one-armed human-like mini pontiki called Tabcuo.
See also
Mr. Potato Head
External links
Official Pontiki Site
Construction toys
2000s toys
|
https://en.wikipedia.org/wiki/Active%20Phased%20Array%20Radar
|
Active Phased Array Radar (APAR) is a shipborne active electronically scanned array multifunction 3D radar (MFR) developed and manufactured by Thales Nederland; it is mounted, for example, atop the superstructure of the German Navy Sachsen-class frigate Hamburg. The radar receiver modules are developed and built in the US by the Sanmina Corporation.
Characteristics
APAR has four fixed (i.e., non-rotating) sensor arrays (faces), fixed on a pyramidal structure. Each face consists of 3424 transmit/receive (TR) modules operating at X band frequencies.
The radar provides the following capabilities:
air target tracking of over 200 targets out to 150 km
surface target tracking of over 150 targets out to 32 km
horizon search out to 75 km
"limited" volume search out to 150 km (in order to back up the volume search capabilities of the SMART-L)
cued search (a mode in which the search is cued using data originating from another sensor)
surface naval gunfire support
missile guidance using the Interrupted Continuous Wave Illumination (ICWI) technique, thus allowing guidance of 32 semi-active radar homing missiles in flight simultaneously, including 16 in the terminal guidance phase
"innovative" Electronic Counter-Countermeasures (ECCM)
Note: all ranges listed above are instrumented ranges.
Mountings
APAR is installed on four Royal Netherlands Navy (RNLN) LCF De Zeven Provinciën class frigates, three German Navy F124 Sachsen class frigates, and three Royal Danish Navy Ivar Huitfeldt class frigates. The Netherlands and Germany (along with Canada) were the original sponsors for the development of APAR, whereas Denmark selected APAR for their frigates as part of a larger decision to select a Thales Nederland anti-air warfare system (designed around the APAR and SMART-L radars, the Raytheon ESSM and SM-2 missile systems, and the Lockheed Martin Mk-41 vertical launch system) over the competing Sea Viper anti-air warfare system (designed around the S1850M an
|
https://en.wikipedia.org/wiki/Acentric%20fragment
|
An acentric fragment is a segment of a chromosome that lacks a centromere.
Because the centromere is the point of attachment for the mitotic apparatus, acentric fragments are not evenly distributed to the daughter cells in cell division (mitosis and meiosis). As a result, one of the daughters will lack the acentric fragment.
Lack of the acentric fragment in one of the daughter cells may have deleterious consequences, depending on the function of the DNA in this region of the chromosome. In the case of a haploid organism or a gamete, it will be fatal to one of the daughter cells if essential DNA is contained in the lost DNA segment. In the case of a diploid organism, the daughter cell lacking the acentric fragment will show expression of any recessive genes found in the homologous chromosome. Developmental geneticists look for cells and cell lineages lacking unpaired chromosome segments produced this way as a means of identifying essential genes for specific functions.
Acentric fragments are commonly generated by chromosome-breaking events, such as irradiation. Such acentric fragments are unequally distributed between the daughter cells after cell division. Acentric fragments can also be produced when an inverted segment is present in one member of a chromosome pair. In that case, a crossover event will result in one chromosome with two centromeres and an acentric fragment. The acentric fragment will be lost as explained above, and chromosomes with two centromeres will break unevenly during mitosis, resulting in one daughter lacking essential genes.
See also
Metacentric
Submetacentric
Acrocentric
Telocentric
|
https://en.wikipedia.org/wiki/Philip%20Leder
|
Philip Leder (November 19, 1934 – February 2, 2020) was an American geneticist.
Early life and education
Leder was born in Washington, D.C., and studied at Harvard University, graduating in 1956. In 1960, he graduated from Harvard Medical School and completed his medical residency at the University of Minnesota.
Scientific accomplishments
Leder made several contributions in each decade of the modern genetics era from the 1960s through the 1990s. He may be best known for his early work with Marshall Nirenberg in the elucidation of the genetic code and the Nirenberg and Leder experiment. He went on to make several contributions in the fields of molecular genetics, immunology and the genetics of cancer. His group defined the base sequence of a complete mammalian gene (the gene for beta globin), which enabled him to determine its organization in detail, including its associated control signals. His research into the structure of genes which carry the code for antibody molecules was of major significance. The main focus of this inquiry was the question of how the vast diversity of antibody molecules is formed by a limited number of encoded genes. Leder's work on antibody genes was later extended to research into Burkitt's lymphoma, a tumour of antibody-producing cells, which involves the oncogene c-myc. This was crucial in understanding the origin of this type of tumor. In 1988, Leder and Timothy Stewart were granted the first patent on a genetically engineered animal. This animal, a mouse which had genes injected into its embryo to increase susceptibility to cancer, became known as the "oncomouse" and has been used in the laboratory study of cancer therapies.
Positions
In 1968, Leder headed the Biochemistry Department of the Graduate Program of the Foundation for Advanced Education in the Sciences at the National Institute of Health. In 1972 he was appointed director of the Laboratory for Molecular Genetics at the same institution and remained in that post u
|
https://en.wikipedia.org/wiki/List%20of%20streaming%20media%20systems
|
This is a list of streaming media systems. A more detailed comparison of streaming media systems is also available.
Servers
Ampache – GPL/LGPL Audio streaming
Ant Media Server – Real-Time media streaming
atmosph3re – responsive web-based streaming audio server for personal music collection
Darwin Streaming Server – Apple Public Source License
datarhei Restreamer – Apache licensed media server for RTMP, HLS, and SRT with flexible FFmpeg API and graphical user interface
dyne:bolic – Linux live CD ready for radio streaming
emby – a media server/client that runs on Linux/Mac/Windows/freeBSD/docker & NAS devices with clients on Android TV/fireTV/Apple TV/Roku/Windows/PlayStation/Xbox/iOS & HTML5 Capable devices
FFserver – included in FFmpeg (discontinued)
Firefly Media Server – GPL
Flash Media Server
FreeJ – video streamer for Icecast – GPL
Helix Universal Server – delivers MPEG-DASH, RTSP, HTTP Live Streaming (HLS), RTMP; developed by RealNetworks, discontinued since October 2014
HelixCommunity – RealNetworks Open Source development community
Jellyfin – GPL-licensed fully open-source fork of Emby
Icecast – GPL streaming media server
IIS Media Services – Extensions for the Windows IIS web server that deliver intelligent progressive downloads, Smooth Streaming, and HTTP Live Streaming
Kaltura – full-featured Affero GPL video platform running on your own servers or cloud
LIVE555 – a set of open source (LGPL) C++ libraries for multimedia streaming; its RTSP/RTP/RTCP client implementation is used by VLC media player and MPlayer
Logitech Media Server – open source music streaming server, backboned by a music database (formerly SlimServer, SqueezeCenter and Squeezebox Server)
Nimble Streamer – freeware server for live and VOD streaming (transcoding function is not free)
nginx with Nginx-rtmp-module (BSD 2-clause)
OpenBroadcaster – LPFM IPTV broadcast automation tools with AGPL Linux Python play out based on Gstreamer
Open Broadcaster Software – open source streaming and record
|
https://en.wikipedia.org/wiki/Sorptivity
|
In 1957 John Philip introduced the term sorptivity and defined it as a measure of the capacity of the medium to absorb or desorb liquid by capillarity.
According to C Hall and W D Hoff, the sorptivity expresses the tendency of a material to absorb and transmit water and other liquids by capillarity.
The sorptivity is widely used in characterizing soils and porous construction materials such as brick, stone and concrete.
Calculation of the true sorptivity requires numerical iterative procedures dependent on soil water content and diffusivity.
John R. Philip (1969) showed that sorptivity can be determined from horizontal infiltration, where water flow is mostly controlled by capillary absorption:
I = S·√t
where S is sorptivity and I is the cumulative infiltration (i.e. distance) at time t. Its associated SI unit is m⋅s−1/2.
For vertical infiltration, Philip's solution is adapted using a parameter A1. This results in the following equations, which are valid for short times:
cumulative: I = S·√t + A1·t
rate: i = S/(2√t) + A1
where the sorptivity S is defined (when a sharp wetting front Lf exists) as:
S = Lf·(θs − θi)/√t
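In practice, sorptivity can be estimated from early-time infiltration measurements by a least-squares fit of cumulative infiltration against √t. The following sketch uses synthetic data purely for illustration:

```python
# Estimating sorptivity S from early-time horizontal infiltration data by
# fitting I = S * sqrt(t) through the origin (least squares).
import math

def fit_sorptivity(times, cumulative_infiltration):
    """Least-squares slope of I against sqrt(t), constrained through origin."""
    roots = [math.sqrt(t) for t in times]
    num = sum(r * i for r, i in zip(roots, cumulative_infiltration))
    den = sum(r * r for r in roots)
    return num / den

# Synthetic early-time data generated with S = 2.0 mm/s^(1/2):
t = [1, 4, 9, 16, 25]                  # s
I = [2.0 * math.sqrt(ti) for ti in t]  # mm
S = fit_sorptivity(t, I)               # recovers 2.0
```

For vertical infiltration, the same data would instead be fit to the two-term form I = S·√t + A1·t, with the √t term still dominating at short times.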
|
https://en.wikipedia.org/wiki/Ermine%20%28heraldry%29
|
Ermine () in heraldry is a "fur", a type of tincture, consisting of a white background with a pattern of black shapes representing the winter coat of the stoat (a species of weasel with white fur and a black-tipped tail). The linings of medieval coronation cloaks and some other garments, usually reserved for use by high-ranking peers and royalty, were made by sewing many ermine furs together to produce a luxurious white fur with patterns of hanging black-tipped tails. Due largely to the association of the ermine fur with the linings of coronation cloaks, crowns and peerage caps, the heraldic tincture of ermine was usually reserved to similar applications in heraldry (i.e., the linings of crowns and chapeaux and of the royal canopy). In heraldry it has become especially associated with the Duchy of Brittany and Breton heraldry.
Ermine spots
The ermine spot, the conventional heraldic representation of the tail, has had a wide variety of shapes over the centuries; its most usual representation has three tufts at the end (bottom), converges to a point at the root (top), and is attached by three studs. When "ermine" is specified as the tincture of the field (or occasionally of a charge), the spots are part of the tincture itself, rather than a semé or pattern of charges. The ermine spot (so specified), however, may also be used singly as a mobile charge, or as a mark of distinction signifying the absence of a blood relationship.
On a bend ermine, the tails follow the line of the bend. In the arms of William John Uncles, the field ermine is cut into bendlike strips by the three bendlets azure, so the ermine tails are (unusually) depicted bendwise.
Variations
Though ermine and vair were the two furs used in early armoury, other variations of these developed later. Both in continental heraldry and British, the fur pattern was used in varying colours as a blazon atop other tinctures (e.g., "erminois" for black ermine spots on a gold field).
British heraldry created three name
|
https://en.wikipedia.org/wiki/Schoof%E2%80%93Elkies%E2%80%93Atkin%20algorithm
|
The Schoof–Elkies–Atkin algorithm (SEA) is an algorithm used for finding the order of or calculating the number of points on an elliptic curve over a finite field. Its primary application is in elliptic curve cryptography. The algorithm is an extension of Schoof's algorithm by Noam Elkies and A. O. L. Atkin to significantly improve its efficiency (under heuristic assumptions).
Details
The Elkies–Atkin extension to Schoof's algorithm works by restricting the set of primes ℓ considered to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime ℓ is called an Elkies prime if the characteristic equation of the Frobenius endomorphism, x² − tx + q = 0, splits over F_ℓ, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials Φ_ℓ(X, Y) that parametrize pairs of ℓ-isogenous elliptic curves in terms of their j-invariants (in practice alternative modular polynomials may also be used but for the same purpose).
If the instantiated polynomial Φ_ℓ(j(E), Y) has a root in F_q, then ℓ is an Elkies prime, and we may compute a polynomial f_ℓ whose roots correspond to points in the kernel of the ℓ-isogeny from E to the isogenous curve E′. The polynomial f_ℓ is a divisor of the corresponding division polynomial used in Schoof's algorithm, and it has significantly lower degree, (ℓ − 1)/2 versus (ℓ² − 1)/2. For Elkies primes, this allows one to compute the number of points on E modulo ℓ more efficiently than in Schoof's algorithm.
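The Elkies/Atkin split can be illustrated on a toy example. In real SEA the trace t is unknown in advance (it is what the algorithm computes), so the classification is made via the modular polynomial; here, for a tiny curve, we simply recover t by brute-force point counting and then check when the discriminant t² − 4q is a square mod ℓ. The specific curve below is an arbitrary choice for illustration:

```python
# Toy illustration (not the SEA algorithm itself): count points on a small
# curve by brute force, recover the Frobenius trace t, and classify small
# primes l as Elkies or Atkin via the discriminant t^2 - 4q.
def count_points(a, b, q):
    """Naive point count of y^2 = x^3 + ax + b over F_q (q an odd prime)."""
    def legendre(n):
        n %= q
        if n == 0:
            return 0
        return 1 if pow(n, (q - 1) // 2, q) == 1 else -1
    # One point at infinity, plus 1 + legendre(x^3+ax+b) affine points per x.
    return q + 1 + sum(legendre(x**3 + a * x + b) for x in range(q))

def is_elkies(t, q, l):
    """l is Elkies iff x^2 - tx + q splits mod l, i.e. t^2 - 4q is a square mod l."""
    d = (t * t - 4 * q) % l
    return d == 0 or pow(d, (l - 1) // 2, l) == 1

q = 97
N = count_points(2, 3, q)          # curve y^2 = x^3 + 2x + 3 over F_97
t = q + 1 - N                      # Frobenius trace
assert t * t <= 4 * q              # Hasse bound |t| <= 2*sqrt(q)
classification = {l: is_elkies(t, q, l) for l in (3, 5, 7, 11, 13)}
```

Heuristically, about half of all primes ℓ are Elkies primes, which is what makes the restriction pay off asymptotically.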
In the case of an Atkin prime, we can gain some information from the factorization pattern of Φ_ℓ(j(E), Y) in F_q[Y], which constrains the possibilities for the number of points modulo ℓ, but the asymptotic complexity of the algorithm depends entirely on the Elkies primes. Provided there are sufficien
|
https://en.wikipedia.org/wiki/Cog%27s%20ladder
|
Cog's ladder of group development is based on the work, "Cog's Ladder: A Model of Group Growth", by George O. Charrier, an employee of Procter and Gamble, published in a company newsletter in 1972. The original document was written to help group managers at Procter and Gamble better understand the dynamics of group work, thus improving efficiency. It is now also used by the United States Naval Academy, the United States Air Force Academy, and other businesses – to help in understanding group development.
Stages
The basic idea of Cog's ladder is that there are five steps necessary for a small group of people to be able to work efficiently together. These stages are the polite stage, the why we're here stage, the power stage, the cooperation stage and the esprit stage. Groups can only move forward after completing the current stage as in Jean Piaget's stage model.
Polite stage
An introductory phase where members strive to get acquainted or reacquainted with one another. During this phase, the basis for the group structure is established and is characterized by polite social interaction. All ideas are simple, controversy is avoided and all members limit self-disclosure. Judgements of other members are formed, and this sets the tone for the rest of the group's time.
Why we're here stage
Group members will want to know why they have been called together. The specific agenda for each planning session will be communicated by the moderator or leader. In this phase, individual need for approval begins to diminish as the members examine their group's purpose and begin to set goals. Often, social cliques will begin to form as members begin to feel as though they "fit in."
Power stage
Bids for power begin between group members in an effort to convince each other that their position on an issue is correct. Often, the field of candidates vying for leadership narrows, as fewer members strive to establish power. Some of those who contributed freely to the group discussi
|
https://en.wikipedia.org/wiki/Biquadratic%20field
|
In mathematics, a biquadratic field is a number field K of a particular kind, which is a Galois extension of the rational number field Q with Galois group the Klein four-group.
Structure and subfields
Biquadratic fields are all obtained by adjoining two square roots. Therefore in explicit terms they have the form
K = Q(√a, √b)
for rational numbers a and b. There is no loss of generality in taking a and b to be non-zero and square-free integers.
According to Galois theory, there must be three quadratic fields contained in K, since the Galois group has three subgroups of index 2. The third subfield, to add to the evident Q(√a) and Q(√b), is Q(√(ab)).
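A quick numerical sanity check, taking a = 2 and b = 3 as an example: K = Q(√2, √3) contains the third quadratic subfield Q(√6) because √2·√3 = √6, and the primitive element α = √2 + √3 has degree 4 over Q with minimal polynomial x⁴ − 10x² + 1.

```python
# Numerical check of the biquadratic field K = Q(sqrt(2), sqrt(3)).
import math

r2, r3 = math.sqrt(2), math.sqrt(3)

# The third quadratic subfield: sqrt(6) = sqrt(2) * sqrt(3) lies in K.
assert math.isclose(r2 * r3, math.sqrt(6))

# alpha = sqrt(2) + sqrt(3) is a primitive element of K: it is a root of
# x^4 - 10x^2 + 1 (expand (alpha^2 - 5)^2 = 24 to derive this).
alpha = r2 + r3
assert math.isclose(alpha**4 - 10 * alpha**2 + 1, 0, abs_tol=1e-9)
```

The degree-4 minimal polynomial confirms [K : Q] = 4, consistent with the Klein four Galois group having three index-2 subgroups.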
L-function
Biquadratic fields are the simplest examples of abelian extensions of Q that are not cyclic extensions. According to general theory, the Dedekind zeta-function of such a field is a product of the Riemann zeta-function and three Dirichlet L-functions. Those L-functions are for the Dirichlet characters which are the Jacobi symbols attached to the three quadratic fields. Therefore, multiplying together the Dedekind zeta-functions of the three quadratic fields and dividing by the square of the Riemann zeta-function gives the Dedekind zeta-function of the biquadratic field. This illustrates also some general principles on abelian extensions, such as the calculation of the conductor of a field.
Such L-functions have applications in analytic theory (Siegel zeroes), and in some of Kronecker's work.
|
https://en.wikipedia.org/wiki/Vasile%20M.%20Popov
|
Vasile Mihai Popov (born 1928) is a leading systems theorist and control engineering specialist. He is well known for having developed a method to analyze the stability of nonlinear dynamical systems, now known as the Popov criterion.
Biography
He was born in Galaţi, Romania on July 7, 1928. He received the engineering degree in electronics from the Bucharest Polytechnic Institute in 1950.
He worked for a few years as Assistant Professor at the Bucharest Polytechnic Institute in the Faculty of Electronics. His main research interests during this period were in frequency modulation and parametric oscillations. In the mid 1950s, he joined the Institute for Energy of Romanian Academy of Science in Bucharest. In the 1960s, Popov headed the Control group at the Institute of Energy of the Romanian Academy.
In 1968 Popov left Romania. He was a visiting professor in the electrical engineering departments of the University of California, Berkeley, and Stanford University, and then Professor in the department of electrical engineering at the University of Maryland, College Park. In 1975 he joined the mathematics department of the University of Florida, Gainesville.
He retired in 1993 and currently resides in Gainesville, Florida, USA.
Work
Qualitative theory of differential equations
Motivated by stability issues in nuclear reactors and by his participation in a seminar series on qualitative theory of differential equations run by A. Halanay, Popov started working in stability of nonlinear feedback systems, in particular on the Lur'e-Postnikov problem.
In 1958/59 he obtained, through a very original approach, the first frequency stability criterion for a class of nonlinear feedback control systems. He continued this work and obtained the equivalence between the state space (Lyapunov function based) approach and the frequency domain approach for stability and obtained a very perceptive characterization of passive systems, nowadays known as the celebrated Kalman–Yakubovich–Popov lem
|
https://en.wikipedia.org/wiki/Porphine
|
Porphine or porphin is an organic compound of empirical formula C20H14N4. It is heterocyclic and aromatic. The molecule is a flat macrocycle, consisting of four pyrrole-like rings joined by four methine bridges, which makes it the simplest of the tetrapyrroles.
The nonpolar tetrapyrrolic ring structure of porphine means it is poorly soluble in most organic solvents and hardly water soluble. As a result, porphine is mostly of theoretical interest. It has been detected in GC-MS of certain fractions of Piper betle.
Porphine derivatives: porphyrins
Substituted derivatives of porphine are called porphyrins. Many porphyrins are found in nature with the dominant example being protoporphyrin IX. Many synthetic porphyrins are also known, including octaethylporphyrin and tetraphenylporphyrin.
Further reading
|
https://en.wikipedia.org/wiki/Boundary%20conformal%20field%20theory
|
In theoretical physics, boundary conformal field theory (BCFT) is a conformal field theory defined on a spacetime with a boundary (or boundaries). Different kinds of boundary conditions for the fields may be imposed on the fundamental fields; for example, Neumann boundary condition or Dirichlet boundary condition is acceptable for free bosonic fields. BCFT was developed by John Cardy.
In the context of string theory, physicists are often interested in two-dimensional BCFTs. The specific types of boundary conditions in a specific CFT describe different kinds of D-branes.
BCFT is also used in condensed matter physics: it can be used to study boundary critical behavior and to solve quantum impurity models.
See also
Conformal field theory
Operator product expansion
Critical point
|
https://en.wikipedia.org/wiki/Angular%20distance
|
Angular distance or angular separation, also known as apparent distance or apparent separation, denoted θ, is the angle between two sightlines, or between two point objects as viewed from an observer.
Angular distance appears in mathematics (in particular geometry and trigonometry) and all natural sciences (e.g., kinematics, astronomy, and geophysics). In the classical mechanics of rotating objects, it appears alongside angular velocity, angular acceleration, angular momentum, moment of inertia and torque.
Use
The term angular distance (or separation) is technically synonymous with angle itself, but is meant to suggest the linear distance between objects (for instance, a couple of stars observed from Earth).
Measurement
Since the angular distance (or separation) is conceptually identical to an angle, it is measured in the same units, such as degrees or radians, using instruments such as goniometers or optical instruments specially designed to point in well-defined directions and record the corresponding angles (such as telescopes).
Formulation
To derive the equation that describes the angular separation θ of two points located on the surface of a sphere as seen from the center of the sphere, we use the example of two astronomical objects A and B observed from the Earth. The objects A and B are defined by their celestial coordinates, namely their right ascensions (RA), α_A and α_B; and declinations (dec), δ_A and δ_B. Let O indicate the observer on Earth, assumed to be located at the center of the celestial sphere. The dot product of the vectors OA and OB is equal to:

OA · OB = R² cos θ

which is equivalent to:

n_A · n_B = cos θ

where n_A and n_B are the unit vectors pointing from O toward A and B. In the equatorial frame, the two unit vectors are decomposed into:

n_A = (cos δ_A cos α_A, cos δ_A sin α_A, sin δ_A)
n_B = (cos δ_B cos α_B, cos δ_B sin α_B, sin δ_B)

Therefore,

n_A · n_B = sin δ_A sin δ_B + cos δ_A cos δ_B cos(α_A − α_B)

then:

θ = arccos[ sin δ_A sin δ_B + cos δ_A cos δ_B cos(α_A − α_B) ]
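The closed-form expression cos θ = sin δ_A sin δ_B + cos δ_A cos δ_B cos(α_A − α_B) can be evaluated directly. A minimal sketch (the function name is illustrative, not from the source):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Angular distance in radians between two points given their
    right ascensions (ra) and declinations (dec) in radians."""
    cos_theta = (math.sin(dec1) * math.sin(dec2)
                 + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp against floating-point overshoot before taking arccos.
    return math.acos(max(-1.0, min(1.0, cos_theta)))

# Two points on the celestial equator, 90 degrees apart in RA:
theta = angular_separation(0.0, 0.0, math.pi / 2, 0.0)  # ~ pi/2
```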
Small angular distance approximation
The above expression is valid for any position of A and B on the sphere. In astronomy, it often happens that the considered objects are really close in the sky: stars in a telescope field of view, binary stars, the satellites of the giant planets of the solar system, etc. In the case
|
https://en.wikipedia.org/wiki/Graft%20hybrid
|
Grafting joins plant parts, forming a genetically composite organism functioning as one plant. A scion is a shoot from one plant that, after grafting, grows on the upper part of another plant. The stock receives the scion and serves as the root system for the grafted plant. Graft hybridisation refers to a form of asexual hybridisation where heritable modifications can be induced through grafting.
Differentiation from graft chimeras
Graft chimeras are not true hybrids. In graft chimeras it is possible that the two parent tissues become separated again, revealing the original parents.
Graft hybridisation however involves the transfer of genetic material.
Mechanism
The tissues of both parts are joined together through pluripotent cells. First, undifferentiated callus tissue arises, which later differentiates and forms vascular tissue, which connects both partners of the graft union. Plasmodesmata form between the cells of tissues of both ends of the graft junction. Plastid DNA has been proven to be exchanged through the graft union. Entire nuclear genomes are also known to cross the graft junction through plasmodesmata. Graft hybridisation is explained by horizontal gene transfer, DNA transformation, and the long-distance transport of mRNA and small RNAs.
Examples
Graft hybridisation in eudicots
This technique has been demonstrated in Nicotiana, as well as in Solanum.
Graft hybridisation in monocots
The successful creation of an intergeneric graft hybrid of Sorghum and Zea has been demonstrated.
Hereditary changes of Triticum through graft hybridisation (vegetative hybridisation) have also been recorded.
Significance
Hybridisation through grafting has the potential to create economically significant hybrid plants. Graft hybridisation is a simple and practical method for breeding woody plants, particularly helpful for overcoming reproductive isolation and difficulties due to highly heterozygous genotypes.
History
This process was first discussed by Charles Darwin.
|
https://en.wikipedia.org/wiki/Annulet%20%28architecture%29
|
An annulet is a small square component in the Doric capital, under the quarter-round. It is also called a fillet or listel, although fillet and listel are also more general terms for a narrow band or strip, such as the ridge between flutes.
An annulet is also a narrow flat architectural moulding, common in other parts of a column, viz. the bases, as well as the capital. It is so called, because it encompasses the column round. In this sense, annulet is frequently used for baguette or little astragal.
|
https://en.wikipedia.org/wiki/Optical%20cross%20section
|
Optical cross section (OCS) is a value which describes the maximum amount of optical flux reflected back to the source. The standard unit of measurement is m2/sr. OCS is dependent on the geometry and the reflectivity at a particular wavelength of an object. Optical cross section is useful in fields such as LIDAR. In the field of radar this is referred to as radar cross-section. Objects such as license plates on automobiles have a high optical cross section to maximize the laser return to the speed detector gun.
Flat mirror
Optical cross section of a flat mirror with a given reflectivity ρ at a particular wavelength λ can be expressed by the formula

OCS = 4π ρ A² / λ² = ρ π³ D⁴ / (4 λ²),  with A = π D² / 4,

where D is the cross sectional diameter of the beam. Note that the direction of the light has to be perpendicular to the mirror surface for this formula to be valid, else the return from the mirror would no longer go back to its source.
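As a numerical sketch, assuming the standard physical-optics flat-plate form OCS = 4πρA²/λ² with A = πD²/4 (the function name is illustrative):

```python
import math

def flat_mirror_ocs(reflectivity, diameter, wavelength):
    """Peak optical cross section (m^2/sr) of a flat circular mirror,
    using the physical-optics flat-plate form: rho * 4*pi*A^2 / lambda^2."""
    area = math.pi * diameter**2 / 4.0   # mirror (beam footprint) area
    return reflectivity * 4.0 * math.pi * area**2 / wavelength**2

# OCS grows as the fourth power of the diameter:
ratio = flat_mirror_ocs(1.0, 2.0, 1.0) / flat_mirror_ocs(1.0, 1.0, 1.0)  # 16
```

Note the strong D⁴ dependence: small increases in mirror size sharply increase the return, which is why even modest corner reflectors dominate a laser return.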
In order to maximize the return a corner reflector is used. The alignment of a corner reflector with respect to the source is not as critical as the alignment of a flat mirror.
Other optical devices
Optical cross section is not limited to reflective surfaces. Optical devices such as telescopes and cameras will return some of the optical flux back to the source, since their optics reflect some light. The optical cross section of a camera can vary over time due to the camera shutter opening and closing.
|
https://en.wikipedia.org/wiki/Yang%E2%80%93Baxter%20equation
|
In physics, the Yang–Baxter equation (or star–triangle relation) is a consistency equation which was first introduced in the field of statistical mechanics. It depends on the idea that in some scattering situations, particles may preserve their momentum while changing their quantum internal states. It states that a matrix R, acting on two out of three objects, satisfies

(R ⊗ 1)(1 ⊗ R)(R ⊗ 1) = (1 ⊗ R)(R ⊗ 1)(1 ⊗ R).
In one-dimensional quantum systems, R is the scattering matrix and if it satisfies the Yang–Baxter equation then the system is integrable. The Yang–Baxter equation also shows up when discussing knot theory and the braid groups, where R corresponds to swapping two strands. Since one can swap three strands in two different ways, the Yang–Baxter equation enforces that both paths are the same.
It takes its name from independent work of C. N. Yang from 1968, and R. J. Baxter from 1971.
General form of the parameter-dependent Yang–Baxter equation
Let A be a unital associative algebra. In its most general form, the parameter-dependent Yang–Baxter equation is an equation for R(u, v), a parameter-dependent element of the tensor product A ⊗ A (here, u and v are the parameters, which usually range over the real numbers ℝ in the case of an additive parameter, or over positive real numbers ℝ+ in the case of a multiplicative parameter).
Let R_ij(u, v) = φ_ij(R(u, v)) for 1 ≤ i < j ≤ 3, with algebra homomorphisms φ_ij : A ⊗ A → A ⊗ A ⊗ A determined by

φ12(a ⊗ b) = a ⊗ b ⊗ 1,  φ13(a ⊗ b) = a ⊗ 1 ⊗ b,  φ23(a ⊗ b) = 1 ⊗ a ⊗ b.

The general form of the Yang–Baxter equation is

R12(u1, u2) R13(u1, u3) R23(u2, u3) = R23(u2, u3) R13(u1, u3) R12(u1, u2)

for all values of u1, u2 and u3.
Parameter-independent form
Let A be a unital associative algebra. The parameter-independent Yang–Baxter equation is an equation for R, an invertible element of the tensor product A ⊗ A. The Yang–Baxter equation is

R12 R13 R23 = R23 R13 R12,

where R12 = R ⊗ 1, R23 = 1 ⊗ R, and R13 denotes R acting on the first and third tensor factors of A ⊗ A ⊗ A.
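As a concrete numerical check of the parameter-independent equation, the flip (permutation) operator P on C^d ⊗ C^d is a well-known solution. A sketch with NumPy:

```python
import numpy as np

d = 2
I = np.eye(d)

# Flip operator P on C^d (x) C^d: P(x (x) y) = y (x) x.
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[j * d + i, i * d + j] = 1.0

# Embed into End(C^d (x) C^d (x) C^d):
P12 = np.kron(P, I)      # acts on tensor factors 1 and 2
P23 = np.kron(I, P)      # acts on tensor factors 2 and 3
P13 = P23 @ P12 @ P23    # conjugation yields the (1,3) swap

# Parameter-independent Yang-Baxter equation: R12 R13 R23 = R23 R13 R12
lhs = P12 @ P13 @ P23
rhs = P23 @ P13 @ P12
assert np.allclose(lhs, rhs)
```

The assertion passes because swapping strands (1,2), (1,3), (2,3) in that order and (2,3), (1,3), (1,2) in that order both realize the full reversal of three strands, which is exactly the consistency the equation encodes.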
With respect to a basis
Often the unital associative algebra is the algebra of endomorphisms of a vector space V over a field k, that is, A = End(V). With respect to a basis {e_i} of V, the components of the matrices are written R^{ij}_{kl}, which is the component associated to the map e_k ⊗ e_l ↦ Σ_{i,j} R^{ij}_{kl} e_i ⊗ e_j. Omitting parameter dependence, the component of the Yang–Baxter equation associate
|
https://en.wikipedia.org/wiki/AppArmor
|
AppArmor ("Application Armor") is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. AppArmor supplements the traditional Unix discretionary access control (DAC) model by providing mandatory access control (MAC). It has been partially included in the mainline Linux kernel since version 2.6.36 and its development has been supported by Canonical since 2009.
Details
In addition to manually creating profiles, AppArmor includes a learning mode, in which profile violations are logged, but not prevented. This log can then be used for generating an AppArmor profile, based on the program's typical behavior.
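As an illustration of the profile language described above, a minimal profile might look like the following (all paths and the program name are hypothetical, and the rule set is only a sketch):

```
# Hypothetical profile, e.g. /etc/apparmor.d/usr.bin.example
/usr/bin/example {
  #include <abstractions/base>

  # capability: allow IPv4 TCP networking
  network inet stream,

  # file rules: r = read, w = write, ix = execute with profile inheritance
  /etc/example.conf r,
  /var/log/example.log w,
  /usr/bin/example ix,
}
```

In learning (complain) mode, violations of such a profile are only logged, and tools in the AppArmor userspace suite can turn the accumulated log into candidate rules.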
AppArmor is implemented using the Linux Security Modules (LSM) kernel interface.
AppArmor is offered in part as an alternative to SELinux, which critics consider difficult for administrators to set up and maintain. Unlike SELinux, which is based on applying labels to files, AppArmor works with file paths. Proponents of AppArmor claim that it is less complex and easier for the average user to learn than SELinux. They also claim that AppArmor requires fewer modifications to work with existing systems. For example, SELinux requires a filesystem that supports "security labels", and thus cannot provide access control for files mounted via NFS. AppArmor is filesystem-agnostic.
Other systems
AppArmor represents one of several possible approaches to the problem of restricting the actions that installed software may take.
The SELinux system generally takes an approach similar to AppArmor. One important difference: SELinux identifies file system objects by inode number instead of path. Under AppArmor an inaccessible file may become accessible if a hard link to it is created. This difference may be less important than it once was, as Ubuntu 10.10 and later mit
|