source | text |
|---|---|
https://en.wikipedia.org/wiki/Selectin | The selectins (cluster of differentiation 62 or CD62) are a family of cell adhesion molecules (or CAMs). All selectins are single-chain transmembrane glycoproteins that share similar properties to C-type lectins due to a related amino terminus and calcium-dependent binding. Selectins bind to sugar moieties and so are considered to be a type of lectin, cell adhesion proteins that bind sugar polymers.
Structure
All three known members of the selectin family (L-, E-, and P-selectin) share a similar cassette structure: an N-terminal, calcium-dependent lectin domain, an epidermal growth factor (EGF)-like domain, a variable number of consensus repeat units (2, 6, and 9 for L-, E-, and P-selectin, respectively), a transmembrane domain (TM) and an intracellular cytoplasmic tail (cyto). The transmembrane and cytoplasmic parts are not conserved across the selectins and are responsible for their targeting to different compartments. Though they share common elements, their tissue distribution and binding kinetics are quite different, reflecting their divergent roles in various pathophysiological processes.
Types
There are three subsets of selectins:
E-selectin (in endothelial cells)
L-selectin (in leukocytes)
P-selectin (in platelets and endothelial cells)
L-selectin, the smallest of the vascular selectins, is found on most leukocytes: it is expressed on all granulocytes and monocytes and on most lymphocytes.
P-selectin, the largest selectin, is stored in α-granules of platelets and in Weibel–Palade bodies of endothelial cells, and is translocated to the cell surface of activated endothelial cells and platelets.
E-selectin is not expressed under baseline conditions, except in skin microvessels, but is rapidly induced by inflammatory cytokines.
These three types share a significant degree of sequence homology among themselves (except in the transmembrane and cytoplasmic domains) and between species. Analysis of this homology has revealed that the lectin domain, which b |
https://en.wikipedia.org/wiki/Skin%20repair | Protection from mechanical injury, chemical hazards, and bacterial invasion is provided by the skin because the epidermis is relatively thick and covered with keratin. Secretions from sebaceous glands and sweat glands also benefit this protective barrier. In the event of an injury that damages the skin's protective barrier, the body triggers a response called wound healing. After hemostasis, inflammation begins: white blood cells, including phagocytic macrophages, arrive at the injury site. Once the invading microorganisms have been brought under control, the skin proceeds to heal itself. The ability of the skin to heal even after considerable damage has occurred is due to the presence of stem cells in the dermis and cells in the stratum basale of the epidermis, all of which can generate new tissue.
When an injury extends through the epidermis into the dermis, bleeding occurs and the inflammatory response begins. Clotting mechanisms in the blood are soon activated, and a clot, or scab, is formed within several hours. The scab temporarily restores the integrity of the epidermis and restricts the entry of microorganisms. After the scab is formed, cells of the stratum basale begin to divide by mitosis and migrate to the edges of the scab. A week after the injury, the edges of the wound are pulled together by contraction. Contraction is an important part of the healing process when damage has been extensive, and involves shrinking of the underlying contractile connective tissue, which brings the wound margins toward one another. In a major injury, if epithelial cell migration and tissue contraction cannot cover the wound, suturing the edges of the injured skin together, or even replacement of lost skin with skin grafts, may be required to restore the skin.
As epithelial cells continue to migrate around the scab, the dermis is repaired by the activity of stem cells. Active cells, called fibroblasts, produce collagenous fibers and ground substance. Blood vessels soon grow into |
https://en.wikipedia.org/wiki/Dielectric%20complex%20reluctance | Dielectric complex reluctance is a scalar measurement of a passive dielectric circuit (or element within that circuit), dependent on sinusoidal voltage and sinusoidal electric induction flux, and is determined by deriving the ratio of their complex effective amplitudes. The units of dielectric complex reluctance are inverse farads (see daraf) [Ref. 1-3].
As seen above, dielectric complex reluctance is a phasor, represented as uppercase Z epsilon:

$$Z_\varepsilon = \frac{\dot{U}}{\dot{Q}} = \frac{U e^{j\varphi_u}}{Q e^{j\varphi_q}} = z_\varepsilon e^{j\varphi}$$

where:

$\dot{U}$ and $U$ represent the voltage (complex effective amplitude)
$\dot{Q}$ and $Q$ represent the electric induction flux (complex effective amplitude)
$z_\varepsilon = U/Q$, lowercase z epsilon, is the real part of dielectric reluctance

The "lossless" dielectric reluctance, lowercase z epsilon, is equal to the absolute value (modulus) of the dielectric complex reluctance. The argument distinguishing the "lossy" dielectric complex reluctance from the "lossless" dielectric reluctance is equal to the base of the natural logarithm raised to a power equal to the phase difference:

$$e^{j\varphi} = e^{j(\varphi_u - \varphi_q)}$$

Where:

$j$ is the imaginary unit
$\varphi_u$ is the phase of voltage
$\varphi_q$ is the phase of electric induction flux
$\varphi = \varphi_u - \varphi_q$ is the phase difference
The "lossy" dielectric complex reluctance represents a dielectric circuit element's resistance to not only electric induction flux but also to changes in electric induction flux. When applied to harmonic regimes, this formality is similar to Ohm's Law in ideal AC circuits. In dielectric circuits, a dielectric material has a dielectric complex reluctance equal to:
Where:
is the length of the circuit element
is the cross-section of the circuit element
is the complex dielectric permeability
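To make the phasor arithmetic above concrete, here is a minimal Python sketch; the amplitudes and phases are hypothetical values chosen only for illustration:

```python
import cmath

# Hypothetical complex effective amplitudes (illustrative values only).
U = 10.0 * cmath.exp(1j * 0.5)  # voltage phasor: amplitude 10, phase 0.5 rad
Q = 2.0 * cmath.exp(1j * 0.2)   # electric induction flux phasor: amplitude 2, phase 0.2 rad

Z = U / Q                       # dielectric complex reluctance Z_epsilon

z_lossless = abs(Z)             # modulus: the "lossless" dielectric reluctance (about 5.0)
phase = cmath.phase(Z)          # argument: the phase difference (about 0.3 rad)

print(z_lossless, phase)
```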
See also
Dielectric
Dielectric reluctance — Special definition of dielectric reluctance that does not account for energy loss |
https://en.wikipedia.org/wiki/Zeaxanthin | Zeaxanthin is one of the most common carotenoids in nature, and is used in the xanthophyll cycle. Synthesized in plants and some micro-organisms, it is the pigment that gives paprika (made from bell peppers), corn, saffron, goji (wolfberries), and many other plants and microbes their characteristic color.
The name (pronounced zee-uh-zan'-thin) is derived from Zea mays (common yellow maize corn, in which zeaxanthin provides the primary yellow pigment), plus xanthos, the Greek word for "yellow" (see xanthophyll).
Xanthophylls such as zeaxanthin are found in highest quantity in the leaves of most green plants, where they act to modulate light energy and perhaps serve as a non-photochemical quenching agent to deal with triplet chlorophyll (an excited form of chlorophyll) which is overproduced at high light levels during photosynthesis. Zeaxanthin in guard cells acts as a blue light photoreceptor which mediates the stomatal opening.
Animals derive zeaxanthin from a plant diet. Zeaxanthin is one of the two primary xanthophyll carotenoids contained within the retina of the eye. Zeaxanthin supplements are typically taken on the supposition of supporting eye health. Although there are no reported side effects from taking zeaxanthin supplements, the actual health effects of zeaxanthin and lutein are not proven, and, as of 2018, there is no regulatory approval in the European Union or the United States for health claims about products that contain zeaxanthin.
As a food additive, zeaxanthin is a food dye with E number E161h.
Isomers and macular uptake
Lutein and zeaxanthin have identical chemical formulas and are isomers, but they are not stereoisomers. The only difference between them is in the location of the double bond in one of the end rings. This difference gives lutein three chiral centers whereas zeaxanthin has two. Because of symmetry, the (3R,3′S) and (3S,3′R) stereoisomers of zeaxanthin are identical. Therefore, zeaxanthin has only three stereoisomeric forms. T |
https://en.wikipedia.org/wiki/Portable%20Draughts%20Notation | Portable Draughts Notation (.PDN) is the standard computer-processable format for recording draughts games. This format is derived from Portable Game Notation, which is the standard chess format.
PDN files are text files which must contain Tag Pairs and Movetext for each game.
Tag Pairs
Tag pairs begin with "[", the name of the tag, the tag value enclosed in double-quotes, and a closing "]". There must be a newline after each tag. Tag names are case-sensitive.
PDN data for archival storage is required to provide 7 tags.
Event the name of the tournament or match event
Site the location of the event. This is in "City, Region COUNTRY" format, where COUNTRY is the 3-letter International Olympic Committee code for the country. An example is "New York City, NY USA".
Date the starting date of the game, in YYYY.MM.DD form. "??" are used for unknown values
Round the playing round ordinal of the game
White the player of the White pieces, in "last name, first name" format
Black the player of the Black pieces, same format as White
Result the result of the game. This can only have four possible values: "1-0" (White won), "0-1" (Black won), "1/2-1/2" (Draw), or "*" (other, e.g., the game is ongoing)
FEN the initial position of the checkers board. This is used to record partial games (starting at some initial position). It is also necessary for some draughts variants where the initial position is not always the same as traditional checkers. If a FEN tag is used, a separate tag pair "SetUp" is required and must have its value set to "1".
A position can be stored by the FEN tag:
[SetUp "1"]
[FEN "[Turn]:[Color 1][K][Square number][,]...]:[Color 2][K][Square number][,]...]"]
Turn the side to move, B for Black, W for White
Color 1 and Color 2 the color of the pieces on the square numbers that follow: B for Black, W for White; the sequence of the two colors is unimportant.
K optional before square number, indicates the piece on that square is a king, otherwise it is a man.
Square number indicates the square number occ |
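To make the tag-pair format concrete, here is a minimal Python sketch that writes the seven required archival tags; the event, players, and values are hypothetical examples:

```python
# Minimal sketch: emit the seven required PDN tag pairs.
# All values are hypothetical examples, not a real game record.
tags = [
    ("Event", "Example Tournament"),      # name of the tournament or match
    ("Site", "New York City, NY USA"),    # "City, Region COUNTRY" format
    ("Date", "2023.05.??"),               # YYYY.MM.DD; "??" for unknown parts
    ("Round", "1"),
    ("White", "Doe, Jane"),               # "last name, first name"
    ("Black", "Roe, Richard"),
    ("Result", "1/2-1/2"),                # "1-0", "0-1", "1/2-1/2", or "*"
]

with open("game.pdn", "w") as f:
    for name, value in tags:
        f.write(f'[{name} "{value}"]\n')  # [Tag "value"], newline after each tag
```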
https://en.wikipedia.org/wiki/Milk%20substitute | A milk substitute is any substance that resembles milk and can be used in the same ways as milk. Such substances may be variously known as non-dairy beverage, nut milk, grain milk, legume milk, mock milk and alternative milk.
For adults, milk substitutes take two forms: plant milks, which are liquids made from plants and may be home-made or commercially produced, and coffee creamers, synthetic products invented in the US in the 1900s specifically to replace dairy milk in coffee. For infants, breast milk can be substituted with infant formula based on cow's milk or plant-based alternatives such as soybean.
History
Around the world, humans have traditionally consumed plant milks for hundreds, if not thousands, of years. In 2018, Tara McHugh in Food Technology Magazine wrote: "The word “milk” has been used since around 1200 AD to refer to plant juices." The article also said: "Of all the plant-based milks, coconut milk has the longest tradition of use. It originated in India and Southeast Asia and has been used as both a drink and an ingredient for nutrition and ceremonial offerings. Soy milk also has a long history and was discovered in 1365 in China."
In 2018, Benjamin Kemper wrote in the Smithsonian Magazine: Linguistically speaking, using “milk” to refer to “the white juice of certain plants” (the second definition of milk in the Oxford American Dictionary) has a history that dates back centuries. The Latin root word of lettuce is lact, as in lactate, for its milky juice, which indicates that even the Romans had a fluid definition for milk. Ken Albala, professor of history at University of the Pacific and host of the podcast Food: A Cultural Culinary History, says that almond milk “shows up in pretty much every medieval cookbook.” Almonds, which originate in the Middle East, reached southern Europe with the Moors around the 8th century, and their milk—yes, medieval Europeans called it milk in their various languages and dialects—quickly became all the rage am |
https://en.wikipedia.org/wiki/Sulfur-reducing%20bacteria | Sulfur-reducing bacteria are microorganisms able to reduce elemental sulfur (S0) to hydrogen sulfide (H2S). These microbes use inorganic sulfur compounds as electron acceptors to sustain several activities such as respiration, conserving energy and growth, in the absence of oxygen. The final product of these processes, sulfide, has a considerable influence on the chemistry of the environment and, in addition, is used as electron donor for a large variety of microbial metabolisms. Several types of bacteria and many non-methanogenic archaea can reduce sulfur. Microbial sulfur reduction was already shown in early studies, which highlighted the first proof of S0 reduction in a vibrioid bacterium from mud, with sulfur as electron acceptor and as electron donor. The first pure cultured species of sulfur-reducing bacteria, Desulfuromonas acetoxidans, was discovered in 1976 and described by Norbert Pfennig and Hanno Biebl as an anaerobic sulfur-reducing and acetate-oxidizing bacterium, not able to reduce sulfate. Only a few taxa are true sulfur-reducing bacteria, using sulfur reduction as the only or main catabolic reaction. Normally, they couple this reaction with the oxidation of acetate, succinate or other organic compounds. In general, sulfate-reducing bacteria are able to use both sulfate and elemental sulfur as electron acceptors. Thanks to its abundance and thermodynamic stability, sulfate is the most studied electron acceptor for anaerobic respiration that involves sulfur compounds. Elemental sulfur, however, is very abundant and important, especially in deep-sea hydrothermal vents, hot springs and other extreme environments, making its isolation more difficult. Some bacteria, such as Proteus, Campylobacter, Pseudomonas and Salmonella, have the ability to reduce sulfur, but can also use oxygen and other terminal electron acceptors.
Taxonomy
Sulfur reducers are known to cover about 74 genera within the Bacteria domain. Several types of sulfur-reducing bacteria have been di |
https://en.wikipedia.org/wiki/AMSDOS | AMSDOS is a disk operating system for the 8-bit Amstrad CPC computer (and various clones). The name is a contraction of Amstrad Disk Operating System.
AMSDOS first appeared in 1984 on the CPC 464 with the add-on 3-inch disk drive, and then on the CPC 664 and CPC 6128. Relatively fast and efficient for its time, AMSDOS was quicker and more effective than most of its contemporaries.
AMSDOS was provided built into ROM (either supplied with the external disk drive or in the machine ROM, depending on model) and was accessible through the built-in Locomotive BASIC as well as through firmware routines. Its main function was to map the cassette access routines (which were built into every CPC model) through to a disk drive. This enabled the majority of cassette-based programs to work with a disk drive with no modification. AMSDOS was able to support up to two connected disk drives.
Commands
AMSDOS extends the Amstrad BASIC by adding a number of external commands, which are identified by a preceding ¦ (bar) symbol. The following is a list of external commands supported by AMSDOS.
¦A
¦B
¦CPM
¦DIR
¦DISC
¦DISC.IN
¦DISC.OUT
¦DRIVE
¦ERA
¦REN
¦TAPE
¦TAPE.IN
¦TAPE.OUT
¦USER
Alternatives
Other disk operating systems for the Amstrad range included CP/M (which was also bundled with an external disk drive, or built-in on ROM depending on model); RAMDOS, which allowed the full (800K) capacity of single-density 3 ½" disks to be used, provided a suitable drive was connected; and SymbOS. |
https://en.wikipedia.org/wiki/Al%20C.%20Kalmbach | Al C. Kalmbach (June 25, 1910 – October 14, 1981) was the founder of Kalmbach Publishing, a publisher of magazines and books geared towards enthusiasts of several different hobbies.
Albert Carpenter Kalmbach was born in Sturgeon Bay, Wisconsin. He grew up in Milwaukee, not far from the shops of the Milwaukee Road.
He was ambitious from an early age. At 12 he spent some of his savings to buy a small hand-operated printing press. He would publish the Milwaukee Sun, a neighbourhood paper, until he enrolled in Marquette University. In 1932, after graduation, he had a job offer working on the Pennsylvania Railroad's electrification project, but the job fell through due to the Great Depression. He started a new printing company, The Milwaukee Commercial Press, which specialized in church newspapers, besides commercial job printing.
His interest in railroads began during his early life in Sturgeon Bay. The rail line that served his relative's business (Fidler-Skilling Fuel & Dock) was the Ahnapee and Western Railway, which ran through Door and Kewaunee counties. His interest in model railroads came from helping his friend Frank P. Zeidler (later mayor of Milwaukee) with electrical problems on the O gauge layout Zeidler was building. Al was hooked and began construction in 1928 of his own layout, the Great Gulch, Yahoo Valley & Northern, in his parents' attic. In the winter of 1932-33 he helped to organize the Model Railroad Club of Milwaukee.
Kalmbach, seeing the interest people had in the operating O Scale layouts at the 1933 Chicago Century of Progress Exposition, turned to one of his lifelong loves — railroads — for the topic of his first magazine. The Model Railroader began publication in the summer of 1933, the first issue dated January 1934. A press release announcing the magazine appeared in August 1933, but did not receive much interest. The bank refused to loan Kalmbach any money, many felt sorry for him, and a few told him he was crazy.
His first wife, Bern |
https://en.wikipedia.org/wiki/Myzocytosis | Myzocytosis (from Greek myzein, meaning "to suck", and kytos, meaning "container", hence referring to "cell") is a method of feeding found in some heterotrophic organisms. It is also called "cellular vampirism" as the predatory cell pierces the cell wall and/or cell membrane of the prey cell with a feeding tube, the conoid, sucks out the cellular content and digests it.
Myzocytosis is found in Myzozoa and also in some species of Ciliophora (both comprise the alveolates). A classic example of myzocytosis is the feeding method of the infamous predatory ciliate, Didinium, where it is often depicted devouring a hapless Paramecium. The suctorian ciliates were originally thought to have fed exclusively through myzocytosis, sucking out the cytoplasm of prey via superficially drinking-straw-like pseudopodia. It is now understood that suctorians do not feed through myzocytosis, but instead manipulate and envenomate captured prey with their tentacle-like pseudopodia. |
https://en.wikipedia.org/wiki/Acotyledon | Acotyledon is used to refer to seed plants or spermatophytes that lack cotyledons, such as orchids and dodder. Orchid seeds are tiny with underdeveloped embryos. They depend on mycorrhizal fungi for their early nutrition so are myco-heterotrophs at that stage.
Although some authors, especially in the 19th century and earlier, use the word acotyledon to include plants which have no cotyledons because they lack seeds entirely (such as ferns and mosses), others restrict the term to plants which have seeds but no cotyledons.
Flowering plants or angiosperms are divided into two large groups. Monocotyledons or monocots have one seed lobe, which is often modified to absorb stored nutrients from the seed so never emerges from the seed or becomes photosynthetic. Dicotyledons or dicots have two cotyledons and often germinate to produce two leaf-like cotyledons. Conifers and other gymnosperms lack flowers but may have two or more cotyledons in the seedling. |
https://en.wikipedia.org/wiki/Zadik%E2%80%93Barak%E2%80%93Levin%20syndrome | Zadik–Barak–Levin syndrome (ZBLS) is a congenital disorder in humans. Presenting conditions include primary hypothyroidism, cleft palate, hypodontia, and ectodermal dysplasia. It is the result of an embryonic defect in the mesodermal-ectodermal midline development.
Signs and symptoms
Diagnosis
Management |
https://en.wikipedia.org/wiki/Tarski%E2%80%93Kuratowski%20algorithm | In computability theory and mathematical logic the Tarski–Kuratowski algorithm is a non-deterministic algorithm that produces an upper bound for the complexity of a given formula in the arithmetical hierarchy and analytical hierarchy.
The algorithm is named after Alfred Tarski and Kazimierz Kuratowski.
Algorithm
The Tarski–Kuratowski algorithm for the arithmetical hierarchy consists of the following steps:
Convert the formula to prenex normal form. (This is the non-deterministic part of the algorithm, as there may be more than one valid prenex normal form for the given formula.)
If the formula is quantifier-free, it is in $\Sigma^0_0$ and $\Pi^0_0$.
Otherwise, count the number of alternations of quantifiers; call this k.
If the first quantifier is ∃, the formula is in $\Sigma^0_{k+1}$.
If the first quantifier is ∀, the formula is in $\Pi^0_{k+1}$. |
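A minimal Python sketch of the counting and classification steps, assuming the prenex quantifier prefix has already been extracted as a string of 'E' (∃) and 'A' (∀) characters:

```python
def classify(prefix: str) -> str:
    """Upper-bound classification of a prenex quantifier prefix
    in the arithmetical hierarchy (e.g. "EAE" for exists-forall-exists)."""
    if not prefix:
        return "Sigma_0 and Pi_0"       # quantifier-free case
    # Collapse runs of identical quantifiers into alternating blocks.
    blocks = [prefix[0]]
    for q in prefix[1:]:
        if q != blocks[-1]:
            blocks.append(q)
    k = len(blocks) - 1                 # number of quantifier alternations
    side = "Sigma" if blocks[0] == "E" else "Pi"
    return f"{side}_{k + 1}"

print(classify("EAE"))  # Sigma_3: two alternations, leading existential
print(classify("AA"))   # Pi_1: a run of identical quantifiers adds no alternation
```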
https://en.wikipedia.org/wiki/Pantropical | A pantropical ("all tropics") distribution is one which covers tropical regions of both hemispheres. Examples of species include caecilians, modern sirenians and the plant genera Acacia and Bacopa.
Neotropical is a zoogeographic term that covers a large part of the Americas, roughly from Mexico and the Caribbean southwards (including cold regions in southernmost South America).
Palaeotropical refers to geographical occurrence. For a distribution to be palaeotropical a taxon must occur in tropical regions in the Old World.
According to Takhtajan (1978), the following families have a pantropical distribution:
Annonaceae, Hernandiaceae, Lauraceae, Piperaceae, Urticaceae, Dilleniaceae, Tetrameristaceae, Passifloraceae, Bombacaceae, Euphorbiaceae, Rhizophoraceae, Myrtaceae, Anacardiaceae, Sapindaceae, Malpighiaceae, Proteaceae, Bignoniaceae, Orchidaceae and Arecaceae.
See also
Afrotropical realm
Tropical Africa
Tropical Asia |
https://en.wikipedia.org/wiki/T.51/ISO/IEC%206937 | T.51 / ISO/IEC 6937:2001, Information technology — Coded graphic character set for text communication — Latin alphabet, is a multibyte extension of ASCII, or more precisely of ISO/IEC 646-IRV. It was developed jointly with ITU-T (then CCITT) for telematic services under the name of T.51, and first became an ISO standard in 1983. Certain byte codes are used as lead bytes for letters with diacritics (accents). The value of the lead byte often indicates which diacritic the letter has, and the follow byte then has the ASCII value of the letter that the diacritic is on.
ISO/IEC 6937's architects were Hugh McGregor Ross, Peter Fenwick, Bernard Marti and Loek Zeckendorf.
ISO 6937/2 defines 327 characters found in modern European languages using the Latin alphabet. Non-Latin European characters, such as Cyrillic and Greek, are not included in the standard. Also, some diacritics used with the Latin alphabet, such as the Romanian comma below, are not included; the cedilla is used instead, as no distinction between cedilla and comma below was made at the time.
IANA has registered the charset names ISO_6937-2-25 and ISO_6937-2-add for two (older) versions of this standard (plus control codes). But in practice this character encoding is unused on the Internet.
Single byte characters
The primary set (first half) originally followed ISO 646-IRV before the ISO/IEC 646:1991 revision, that is, mostly following ASCII but with character 0x24 still denoted as an "international currency sign" (¤) instead of the dollar sign ($). The 1992 edition of ITU T.51 permits existing CCITT services to continue to interpret 0x24 as the international currency sign, but stipulates that new telecommunication applications should use it for the dollar sign (i.e. following the current ISO 646-IRV), and instead represent the international currency sign using the supplementary set.
The supplementary set (second half) contains a selection of spacing and non-spacing graphic characters, additional symbols and some loca |
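To illustrate the lead-byte mechanism, here is a minimal Python sketch of a decoder for a few diacritic lead bytes; the specific assignments used (0xC1 grave, 0xC2 acute, 0xC8 diaeresis) follow common descriptions of the T.51 layout and are shown for illustration, not as a complete or authoritative table:

```python
import unicodedata

# A few assumed T.51/ISO 6937 diacritic lead bytes, mapped to Unicode
# combining marks: 0xC1 grave, 0xC2 acute, 0xC8 diaeresis.
LEAD = {0xC1: "\u0300", 0xC2: "\u0301", 0xC8: "\u0308"}

def decode(data: bytes) -> str:
    out, i = [], 0
    while i < len(data):
        b = data[i]
        if b in LEAD and i + 1 < len(data):
            base = chr(data[i + 1])      # follow byte: ASCII base letter
            # Compose the base letter and combining mark into one character.
            out.append(unicodedata.normalize("NFC", base + LEAD[b]))
            i += 2
        else:
            out.append(chr(b))           # plain single-byte character
            i += 1
    return "".join(out)

print(decode(b"caf\xc2e"))  # 'café': lead byte 0xC2 (acute) + follow byte 'e'
```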
https://en.wikipedia.org/wiki/Geometric%20modeling%20kernel | A geometric modeling kernel is a solid modeling software component used in computer-aided design (CAD) packages. Available modelling kernels include:
ACIS is developed and licensed by Spatial Corporation of Dassault Systèmes.
SMLib is developed by Solid Modeling Solutions.
Convergence Geometric Modeler is developed by Dassault Systèmes.
Parasolid is developed and licensed by Siemens.
Romulus was a predecessor to Parasolid.
ShapeManager is developed by Autodesk and was forked from ACIS in 2001.
Granite is developed by Parametric Technology Corporation.
C3D Modeler is developed by C3D Labs, part of the ASCON Group.
CGAL is an open-source Computational Geometry Algorithms Library which has support for Boolean operations on polyhedra, but no sweep, revolve or NURBS.
Open CASCADE is an open-source modeling kernel.
sgCore is a freeware proprietary modeling kernel distributed as an SDK.
K3 kernel is developed by Center GeoS.
SOLIDS++ is developed by IntegrityWare, Inc.
APM Engine is developed by RSDC APM.
KCM is developed and licensed by Kubotek Kosmos
SvLis Geometric Kernel became open-source and was discontinued; for Windows only.
IRIT modeling environment, for Windows only.
GTS GNU Triangulated Surface Library, for polygon meshes only and not surfaces.
Russian Geometric Kernel.
Geometry Kernel, a multi-platform C++ library with source code accessible to clients, developed and distributed by RDF (see the Geometry Kernel web site).
SolveSpace application has own integrated parametric solid geometry kernel with a limited NURBS support.
Kernel market
The kernel market is currently dominated by Parasolid and ACIS, which were introduced in the late 1980s. The latest kernel to enter the market is KCM. ShapeManager has no presence in the kernel licensing market, and in 2001 Autodesk clearly stated they were not going into this business.
The world's newest geometric modeling kernel is Russian Geometric Kernel owned by the Russian government, and it is not clear if it is going to be |
https://en.wikipedia.org/wiki/Phenomenology%20%28physics%29 | In physics, phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method, in which the goal of the experiment is to test a scientific hypothesis instead of making predictions.
Phenomenology is commonly applied to the field of particle physics, where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics and plasma physics, when there are no existing theories for the observed experimental data.
Applications in particle physics
Standard Model consequences
Within the well-tested and generally accepted Standard Model, phenomenology is the calculation of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections).
Examples include:
Next-to-leading order calculations of particle production rates and distributions.
Monte Carlo simulation studies of physics processes at colliders.
Extraction of parton distribution functions from data.
CKM matrix calculations
The CKM matrix is useful in these predictions:
Application of heavy quark effective field theory to extract CKM matrix elements.
Using lattice QCD to extract quark masses and CKM matrix elements from experiment.
Theoretical models
In Physics beyond the Standard Model, phenomenology addresses the experimental consequences of new models: how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models.
Phenomenological analysis
Phenomenological anal |
https://en.wikipedia.org/wiki/Boolean-valued%20function | A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose elements are interpreted as logical values, for example, 0 = false and 1 = true, i.e., a single bit of information.
In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of these uses, it is understood that the various terms refer to a mathematical object and not the corresponding semiotic sign or syntactic expression.
In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth predicate may have additional domains beyond the formal language domain, if that is what is required to determine a final truth value.
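As a small illustration, a Boolean-valued function on the integers sketched in Python (the function name and domain are hypothetical examples):

```python
# A Boolean-valued function f : X -> B, with X the integers and B = {False, True}.
def is_even(n: int) -> bool:
    """Predicate: maps each integer to a single bit of information."""
    return n % 2 == 0

# Viewed as the indicator (characteristic) function of the set of even integers.
evens = [n for n in range(10) if is_even(n)]
print(evens)  # [0, 2, 4, 6, 8]
```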
See also
Bit
Boolean data type
Boolean algebra (logic)
Boolean domain
Boolean logic
Propositional calculus
Truth table
Logic minimization
Indicator function
Predicate
Proposition
Finitary boolean function
Boolean function |
https://en.wikipedia.org/wiki/Blum%27s%20speedup%20theorem | In computational complexity theory, Blum's speedup theorem, first stated by Manuel Blum in 1967, is a fundamental theorem about the complexity of computable functions.
Each computable function has an infinite number of different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called optimal). Blum's speedup theorem shows that for any complexity measure, there exists a computable function such that there is no optimal program computing it, because every program has a program of lower complexity. This also rules out the idea that there is a way to assign to arbitrary functions their computational complexity, meaning the assignment to any f of the complexity of an optimal program for f. This does, of course, not exclude the possibility of finding the complexity of an optimal program for certain specific functions.
Speedup theorem
Given a Blum complexity measure $(\varphi, \Phi)$ and a total computable function $f$ with two parameters, there exists a total computable predicate $g$ (a Boolean-valued computable function) so that for every program $i$ for $g$, there exists a program $j$ for $g$ so that for almost all $x$:

$$f(x, \Phi_j(x)) \leq \Phi_i(x)$$

$f$ is called the speedup function. The fact that it may be as fast-growing as desired (as long as it is computable) means that the phenomenon of always having a program of smaller complexity remains even if by "smaller" we mean "significantly smaller" (for instance, quadratically smaller, exponentially smaller).
See also
Gödel's speed-up theorem |
https://en.wikipedia.org/wiki/Control%20structure%20diagram | A control structure diagram (CSD) automatically documents the program flow within the source code and adds indentation with graphical symbols. Thereby the source code becomes visibly structured without sacrificing space.
See also
Data structure diagram
Diagram
Entity-relationship model
Hierarchy diagram
Unified Modeling Language
Visual programming language
External links
"The Control Structure Diagram (CSD)" - A chapter from jGRASP Tutorials
"Control Structure Diagrams for Ada 95"
Data modeling diagrams
Data modeling languages
Source code |
https://en.wikipedia.org/wiki/Polyworld | Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms.
It uses the Qt graphics toolkit and OpenGL to display a graphical environment in which a population of trapezoid agents search for food, mate, have offspring, and prey on each other. The population is typically only in the hundreds, as each individual is rather complex and the environment consumes considerable computer resources. The graphical environment is necessary since the individuals actually move around the 2-D plane and must be able to "see." Since some basic abilities, like eating carcasses or randomly generated food, seeing other individuals, mating or fighting with them, etc., are possible, a number of interesting behaviours have been observed to spontaneously arise after prolonged evolution, such as cannibalism, predators and prey, and mimicry.
Each individual makes decisions based on a neural net using Hebbian learning; the neural net is derived from each individual's genome. The genome does not merely specify the wiring of the neural nets, but also determines their size, speed, color, mutation rate and a number of other factors. The genome is randomly mutated at a set probability, which is itself also changed in descendant organisms.
External links
Github entry
Yaeger's page on Polyworld
Google TechTalk about Polyworld
Applications of artificial intelligence
Artificial life
Digital organisms |
https://en.wikipedia.org/wiki/AppImage | AppImage (formerly known as klik and PortableLinuxApps) is a format for distributing portable software on Linux without needing superuser permissions to install the application. It aims to enable application developers to deploy binary software without being restricted to specific Linux distributions, a concept often referred to as upstream packaging. In this manner, a single developed software can effortlessly run on any Linux distribution, such as Ubuntu, RHEL, or Arch.
Released first in 2004 under the name klik, it was continuously developed, then renamed in 2011 to PortableLinuxApps and later in 2013 to AppImage.
History
AppImage's predecessor klik was designed in 2004 by Simon Peter. The client-side software is GPL-licensed. klik integrated with web browsers on the user's computer. Users downloaded and installed software by typing a URL beginning with klik://. This downloaded a klik "recipe" file, which was used to generate a .cmg file. For main ingredients, usually pre-built .deb packages from Debian Stable repositories were fed into the recipe's .cmg generation process. In this way, one recipe could be used to supply packages to a wide variety of platforms. With klik, only eight programs could be run at once because of the limitation of mounting compressed images with the Linux kernel, unless FUSE was used. The file was remounted each time the program was run, meaning the user could remove the program by simply deleting the .cmg file. A successor version, klik2, was in development and would natively incorporate the FUSE kernel module, but it never made it past the beta stage. Around 2011, the klik project went dormant and the homepage went offline for some time.
Simon Peter started a successor project named PortableLinuxApps with similar goals around that time. The technology was adapted for instance by the "portablelinuxgames.org" repository, providing hundreds of mostly open-source video games.
Around 2013, the software was renamed again from portableLinux |
https://en.wikipedia.org/wiki/Critical%20distance | Critical distance is, in acoustics, the distance at which the sound pressure level of the direct sound D and the reverberant sound R are equal when dealing with a directional source. As the source is directional, the sound pressure as a function of distance between source and sampling point (listener) varies with their relative position, so that for a particular room and source the set of points where direct and reverberant sound pressure are equal constitutes a surface rather than a distinguished location in the room. In other words, it is the point in space at which the combined amplitude of all the reflected echoes are the same as the amplitude of the sound coming directly from the source (D = R). This distance, called the critical distance $d_c$, is dependent on the geometry and absorption of the space in which the sound waves propagate, as well as the dimensions and shape of the sound source.
A reverberant room generates a short critical distance and an acoustically dead (anechoic) room generates a longer critical distance.
Calculation
The calculation of the critical distance for a diffuse approximation of the reverberant field:

$$d_c = \sqrt{\frac{\gamma A}{16\pi}} \approx 0.057\,\sqrt{\frac{\gamma V}{RT_{60}}}$$

where $\gamma$ is the degree of directivity of the source ($\gamma = 1$ for an omnidirectional source), $A$ the equivalent absorption surface, $V$ the room volume in m³ and $RT_{60}$ the reverberation time of the room in seconds. The latter approximation uses Sabine's reverberation formula $RT_{60} = 0.161\,V/A$.
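A worked numeric sketch of the formula above in Python; the room parameters are hypothetical:

```python
import math

def critical_distance(gamma: float, volume_m3: float, rt60_s: float) -> float:
    """Diffuse-field approximation: d_c ~ 0.057 * sqrt(gamma * V / RT60)."""
    return 0.057 * math.sqrt(gamma * volume_m3 / rt60_s)

# Hypothetical example: omnidirectional source (gamma = 1) in a 100 m^3 room
# with a reverberation time of 1 s gives d_c of roughly 0.57 m.
print(critical_distance(1.0, 100.0, 1.0))
```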
Sources
Acoustics
Audio effects |
https://en.wikipedia.org/wiki/Data%20theft | Data theft is a growing phenomenon primarily caused by system administrators and office workers with access to technology such as database servers, desktop computers and a growing list of hand-held devices capable of storing digital information, such as USB flash drives, iPods and even digital cameras. Since employees often spend a considerable amount of time developing contacts and compiling confidential and copyrighted information for the company they work for, they may feel they have some right to the information and are inclined to copy and/or delete part of it when they leave the company, or misuse it while they are still in employment. Information can be sold and bought and then used by criminals and criminal organizations. Alternatively, an employee may choose to deliberately abuse trusted access to information for the purpose of exposing misconduct by the employer. From the perspective of society, such an act of whistleblowing can be seen as positive and is protected by law in certain situations in some jurisdictions, such as the USA.
A common scenario is where a sales person makes a copy of the contact database for use in their next job. Typically, this is a clear violation of their terms of employment.
Notable acts of data theft include those by leaker Chelsea Manning and self-proclaimed whistleblowers Edward Snowden and Hervé Falciani.
Data theft methods
Thumbsucking
Thumbsucking, similar to podslurping, is the intentional or inadvertent use of a portable USB mass storage device, such as a USB flash drive (or "thumbdrive"), to illicitly download confidential data from a network endpoint.
A USB flash drive was allegedly used to remove highly classified documents about the design of U.S. nuclear weapons from a vault at Los Alamos without authorization.
The threat of thumbsucking has been amplified for a number of reasons, including the following:
The storage capacity of portable USB storage devices has increased.
The cost of high-capacity portable USB storag |
https://en.wikipedia.org/wiki/Stream%20%28computing%29 | In computer science, a stream is a sequence of data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches.
Streams are processed differently from batch data – normal functions cannot operate on streams as a whole, as they have potentially unlimited data, and formally, streams are codata (potentially unlimited), not data (which is finite). Functions that operate on a stream, producing another stream, are known as filters, and can be connected in pipelines, analogously to function composition. Filters may operate on one item of a stream at a time, or may base an item of output on multiple items of input, such as a moving average.
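As a small illustration of a filter that bases each output item on multiple input items, here is a moving-average filter over a (potentially unbounded) stream, sketched with Python generators:

```python
from collections import deque
from typing import Iterable, Iterator

def moving_average(stream: Iterable[float], window: int) -> Iterator[float]:
    """A filter: lazily consumes one stream and produces another."""
    buf = deque(maxlen=window)   # holds the most recent `window` items
    for item in stream:          # one item at a time; the stream may be unbounded
        buf.append(item)
        yield sum(buf) / len(buf)

# Filters compose into pipelines, analogously to function composition.
samples = iter([1.0, 2.0, 3.0, 4.0])
for avg in moving_average(samples, window=2):
    print(avg)                   # 1.0, 1.5, 2.5, 3.5
```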
Examples
The term "stream" is used in a number of similar ways:
"Stream editing", as with sed, awk, and perl. Stream editing processes a file or files, in-place, without having to load the file(s) into a user interface. One example of such use is to do a search and replace on all the files in a directory, from the command line.
On Unix and related systems based on the C language, a stream is a source or sink of data, usually individual bytes or characters. Streams are an abstraction used when reading or writing files, or communicating over network sockets. The standard streams are three streams made available to all programs.
I/O devices can be interpreted as streams, as they produce or consume potentially unlimited data over time.
In object-oriented programming, input streams are generally implemented as iterators.
In the Scheme language and some others, a stream is a lazily evaluated or delayed sequence of data elements. A stream can be used similarly to a list, but later elements are only calculated when needed. Streams can therefore represent infinite sequences and series.
In the Smalltalk standard library and in other programming languages as well, a stream is an external iterator. As in Scheme, streams can represent finite or infinite |
https://en.wikipedia.org/wiki/Walrasian%20auction | A Walrasian auction, introduced by Léon Walras, is a type of simultaneous auction where each agent calculates its demand for the good at every possible price and submits this to an auctioneer. The price is then set so that the total demand across all agents equals the total amount of the good. Thus, a Walrasian auction perfectly matches the supply and the demand.
Walras suggested that equilibrium would always be achieved through a process of tâtonnement (French for "trial and error"), a form of hill climbing. More recently, however, the Sonnenschein–Mantel–Debreu theorem proved that such a process would not necessarily reach a unique and stable equilibrium, even if the market is populated with perfectly rational agents.
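A minimal sketch of tâtonnement-style price adjustment in Python, using hypothetical linear demand and supply curves; as the Sonnenschein–Mantel–Debreu result mentioned above implies, real markets need not converge this cleanly:

```python
def tatonnement(demand, supply, price=1.0, step=0.1, tol=1e-6, max_iter=10_000):
    """Grope toward a market-clearing price by following excess demand."""
    for _ in range(max_iter):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:    # market (approximately) clears
            break
        price += step * excess   # raise price when demand exceeds supply
    return price

# Hypothetical curves: D(p) = 10 - p and S(p) = 2p clear at p = 10/3.
print(tatonnement(lambda p: 10 - p, lambda p: 2 * p))  # ~3.333
```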
Walrasian auctioneer
The Walrasian auctioneer is the presumed auctioneer that matches supply and demand in a market of perfect competition. The auctioneer provides for the features of perfect competition: perfect information and no transaction costs. The process is called tâtonnement, or groping, relating to finding the market clearing price for all commodities and giving rise to general equilibrium.
The device is an attempt to avoid one of the deepest conceptual problems of perfect competition, which may, essentially, be defined by the stipulation that no agent can affect prices. But if no one can affect prices, no one can change them, so prices cannot change. However, involving as it does an artificial solution, the device is less than entirely satisfactory.
As a mistranslation
Until Walker and van Daal's 2014 translation (retitled Elements of Theoretical Economics), William Jaffé's Elements of Pure Economics (1954) was for many years the only English translation of Walras's Éléments d’économie politique pure.
Walker and van Daal argue that the idea of the Walrasian auction and Walrasian auctioneer resulted from Jaffé's mistranslation of the French word crieurs (criers) into auctioneers. Walker and van Daal call this "a momentous error that has mis |
https://en.wikipedia.org/wiki/Combinatorial%20design | Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. These concepts are not made precise so that a wide range of objects can be thought of as being under the same umbrella. At times this might involve the numerical sizes of set intersections as in block designs, while at other times it could involve the spatial arrangement of entries in an array as in sudoku grids.
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.
Example
Given a certain number n of people, is it possible to assign them to sets so that each person is in at least one set, each pair of people is in exactly one set together, every two sets have exactly one person in common, and no set contains everyone, all but one person, or exactly one person? The answer depends on n.
This has a solution only if n has the form q² + q + 1. It is less simple to prove that a solution exists if q is a prime power. It is conjectured that these are the only solutions. It has been further shown that if a solution exists for q congruent to 1 or 2 mod 4, then q is a sum of two square numbers. This last result, the Bruck–Ryser theorem, is proved by a combination of constructive methods based on finite fields and an application of quadratic forms.
When such a structure does exist, it is called a finite projective plane; thus showing how finite geometry and combinatorics intersect. When q = 2, the projective plane is called the Fano plan |
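For q = 2 the resulting structure is the Fano plane; a short Python check that its seven blocks satisfy the pairwise conditions of the example above:

```python
from itertools import combinations

# The seven lines of the Fano plane (n = 7 = 2^2 + 2 + 1, so q = 2).
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Each pair of people is in exactly one set together...
for pair in combinations(range(1, 8), 2):
    assert sum(set(pair) <= b for b in blocks) == 1

# ...and every two sets have exactly one person in common.
for b1, b2 in combinations(blocks, 2):
    assert len(b1 & b2) == 1

print("Fano plane checks pass")
```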
https://en.wikipedia.org/wiki/Rolled%20oats | Rolled oats are a type of lightly processed whole-grain food. They are made from oat groats that have been dehusked and steamed, before being rolled into flat flakes under heavy rollers and then stabilized by being lightly toasted.
Thick-rolled oats usually remain unbroken during processing, while thin-rolled oats often become fragmented. Rolled whole oats, without further processing, can be cooked into a porridge and eaten as old-fashioned oats or Scottish oats; when the oats are rolled thinner and steam-cooked more in the factory, they will later absorb water much more easily and cook faster into a porridge, and when processed this way are sometimes called "quick" or "instant" oats.
Rolled oats are most often the main ingredient in granola and muesli. They can be further processed into a coarse powder, which breaks down to nearly a liquid consistency when boiled. Cooked oatmeal powder is often used as baby food.
Process
The oat, like other cereals, has a hard, inedible outer husk that must be removed before the grain can be eaten. After the outer husk (or chaff) has been removed from the still bran-covered oat grains, the remainder is called oat groats. Since the bran layer, though nutritious, makes the grains tougher to chew and contains an enzyme that can cause the oats to go rancid, raw oat groats are often further steam-treated to soften them for a quicker cooking time and to denature the enzymes for a longer shelf life.
Steel-cut or pinhead oats
Steel-cut oats (sometimes called "pinhead oats", especially if cut small) are oat groats that have been chopped by a sharp-bladed machine before any steaming, and thus retain bits of the bran layer.
Preparation
Rolled oats can be eaten without further heating or cooking, if they are soaked for 1–6 hours in water-based liquid, such as water, milk, or plant-based dairy substitutes. The required soaking duration depends on shape, size and pre-processing technique.
Whole oat groats can be cooked as a breakfast ce |
https://en.wikipedia.org/wiki/Polynucleotide | In molecular biology, a polynucleotide () is a biopolymer composed of nucleotide monomers that are covalently bonded in a chain. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are examples of polynucleotides with distinct biological functions. DNA consists of two chains of polynucleotides, with each chain in the form of a helix (like a spiral staircase).
Sequence
Although DNA and RNA do not generally occur in the same polynucleotide, the four species of nucleotides may occur in any order in the chain. The sequence of DNA or RNA species for a given polynucleotide is the main factor determining its function in a living organism or a scientific experiment.
Polynucleotides in organisms
Polynucleotides occur naturally in all living organisms. The genome of an organism consists of complementary pairs of enormously long polynucleotides wound around each other in the form of a double helix. Polynucleotides have a variety of other roles in organisms.
Polynucleotides in scientific experiments
Polynucleotides are used in biochemical experiments such as polymerase chain reaction (PCR) or DNA sequencing. Polynucleotides are made artificially from oligonucleotides, smaller nucleotide chains with generally fewer than 30 subunits. A polymerase enzyme is used to extend the chain by adding nucleotides according to a pattern specified by the scientist.
Prebiotic condensation of nucleobases with ribose
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. According to the RNA world hypothesis free-floating ribonucleotides were present in the primitive soup. These were the fundamental molecules that combined in series to form RNA. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for re |
https://en.wikipedia.org/wiki/United%20States%20Navy%20Marine%20Mammal%20Program | The U.S. Navy Marine Mammal Program (NMMP) is a program administered by the U.S. Navy which studies the military use of marine mammals - principally bottlenose dolphins and California sea lions - and trains animals to perform tasks such as ship and harbor protection, mine detection and clearance, and equipment recovery. The program is based in San Diego, California, where animals are housed and trained on an ongoing basis. NMMP animal teams have been deployed for use in combat zones, such as during the Vietnam War and the Iraq War.
The program has been dogged by controversy over the treatment of the animals and speculation as to the nature of its mission and training. This has been due at least in part to the secrecy of the program, which was de-classified in the early 1990s. Since the program's inception, there have been ongoing animal welfare concerns, with many opposing the use of marine mammals in military applications, even in essentially non-combatant roles such as mine detection. The Navy cites external oversight, including ongoing monitoring, in defense of its animal care standards.
History
The origins of the program date back to 1960, when a Pacific white-sided dolphin was acquired for hydrodynamic studies seeking to improve torpedo performance. The aim was to determine whether dolphins had a sophisticated drag-reduction system, but the technology of the day failed to demonstrate that dolphins have any unusual capabilities in this respect. This research has now resumed with the benefit of modern-day technology; among the possible drag-reducing mechanisms being studied for human use are skin compliance, biopolymers, and boundary-layer heating.
In 1962, the animals' intelligence, exceptional diving ability, and trainability led to the foundation of a new research program at Point Mugu, California, where a research facility was built on a sand spit between Mugu Lagoon and the ocean. The intention was to study the dolphins' senses and capabilities, such as |
https://en.wikipedia.org/wiki/Solvay%20Conference | The Solvay Conferences () have been devoted to preeminent unsolved problems in both physics and chemistry. They began with the historic invitation-only 1911 Solvay Conference on Physics, considered a turning point in the world of physics, and are ongoing.
Since the success of 1911, they have been organised by the International Solvay Institutes for Physics and Chemistry, founded by the Belgian industrialist Ernest Solvay in 1912 and 1913, and located in Brussels. The institutes coordinate conferences, workshops, seminars, and colloquia. Recent Solvay Conferences entail a three-year cycle: the Solvay Conference on Physics, followed by a gap year, followed by the Solvay Conference on Chemistry.
Notable Solvay conferences
First conference
Hendrik Lorentz was chairman of the first Solvay Conference on Physics, held in Brussels from 30 October to 3 November 1911. The subject was Radiation and the Quanta. This conference looked at the problems of having two approaches, namely classical physics and quantum theory. Albert Einstein was the second youngest physicist present (the youngest one was Lindemann). Other members of the Solvay Congress were experts including Marie Curie, Ernest Rutherford and Henri Poincaré.
Third conference
The third Solvay Conference on Physics was held in April 1921, soon after World War I. Most German scientists were barred from attending. In protest at this action, Albert Einstein, who had renounced his German citizenship in 1896 and remained officially stateless before becoming a Swiss citizen in 1901, declined his invitation to attend the conference and publicly disclaimed German citizenship once again. Because anti-Semitism had been on the rise, Einstein accepted the invitation by Dr. Chaim Weizmann, the president of the World Zionist Organization, for a trip to the United States to raise money.
Fourth conference
The fourth Solvay |
https://en.wikipedia.org/wiki/STREAMS | In computer networking, STREAMS is the native framework in Unix System V for implementing character device drivers, network protocols, and inter-process communication. In this framework, a stream is a chain of coroutines that pass messages between a program and a device driver (or between a pair of programs). STREAMS originated in Version 8 Research Unix, as Streams (not capitalized).
STREAMS's design is a modular architecture for implementing full-duplex I/O between kernel and device drivers. Its most frequent uses have been in developing terminal I/O (line discipline) and networking subsystems. In System V Release 4, the entire terminal interface was reimplemented using STREAMS. An important concept in STREAMS is the ability to push drivers (custom code modules which can modify the functionality of a network interface or other device) on top of one another to form a stack. Several of these drivers can be chained together in order.
History
STREAMS was based on the Streams I/O subsystem introduced in the Eighth Edition Research Unix (V8) by Dennis Ritchie, where it was used for the terminal I/O subsystem and the Internet protocol suite. This version, not yet called STREAMS in capitals, fit the new functionality under the existing device I/O system calls (open, close, read, write, and ioctl), and its application was limited to terminal I/O and protocols providing pipe-like I/O semantics.
This I/O system was ported to System V Release 3 by Robert Israel, Gil McGrath, Dave Olander, Her-Daw Che, and Maury Bach as part of a wider framework intended to support a variety of transport protocols, including TCP, ISO Class 4 transport, SNA LU 6.2, and the AT&T NPACK protocol (used in RFS). It was first released with the Network Support Utilities (NSU) package of UNIX System V Release 3. This port added the putmsg, getmsg, and poll system calls, which are nearly equivalent in purpose to the send, recv, and select calls from Berkeley sockets. The putmsg and getmsg system calls were orig |
https://en.wikipedia.org/wiki/Bisquick | Bisquick is a pre-mixed baking mix sold by General Mills under its Betty Crocker brand, consisting of flour, shortening, salt, sugar and baking powder (a leavening agent).
History
According to General Mills, Bisquick was invented in 1930 after one of their top sales executives met an innovative train dining-car chef on a business trip. After the sales executive complimented the chef on his deliciously fresh biscuits, the dining-car chef shared that he used a pre-mixed biscuit batter he created, consisting of lard, flour, baking powder and salt. The chef then stored this pre-mixed biscuit batter on ice in his kitchen, enabling him to bake fresh biscuits quickly on the train every day. As soon as the sales executive returned from that business trip, he “created” Bisquick.
The recipe was adapted, using hydrogenated oil, thus eliminating the need for refrigeration. Bisquick was officially introduced on grocers' shelves in 1931.
Though first promoted for only baking biscuits ("90 seconds from package to oven", the slogan read), Bisquick was soon used to prepare a wide variety of baked goods from pizza dough to pancakes to dumplings to snickerdoodle cookies.
Substitution
One cup of Bisquick can be substituted by a mixture of one cup of flour, teaspoons of baking powder, teaspoon of salt, and tablespoons of oil or melted butter (or by cutting in tbsp Crisco or lard).
Ingredients
The ingredients in Bisquick Original consist of bleached wheat flour (enriched with niacin, iron, thiamine mononitrate, riboflavin and folic acid), corn starch, dextrose, palm oil, leavening (baking soda, sodium aluminum phosphate, monocalcium phosphate), canola oil, salt, sugar, DATEM, and distilled monoglycerides.
Bisquick Heart Smart is formulated with canola oil, resulting in less saturated fat and 0 g trans fat. Bisquick also comes in a gluten-free variety, which uses rice flour instead of regular flour. |
https://en.wikipedia.org/wiki/Molecular%20motor | Molecular motors are natural (biological) or artificial molecular machines that are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors. One important difference between molecular motors and macroscopic motors is that molecular motors operate in the thermal bath, an environment in which the fluctuations due to thermal noise are significant.
Examples
Some examples of biologically important molecular motors:
Cytoskeletal motors
Myosins are responsible for muscle contraction, intracellular cargo transport, and producing cellular tension.
Kinesin moves cargo inside cells away from the nucleus along microtubules, in anterograde transport.
Dynein produces the axonemal beating of cilia and flagella and also transports cargo along microtubules towards the cell nucleus, in retrograde transport.
Polymerisation motors
Actin polymerization generates forces and can be used for propulsion. ATP is used.
Microtubule polymerization using GTP.
Dynamin is responsible for the separation of clathrin buds from the plasma membrane. GTP is used.
Rotary motors:
The FoF1-ATP synthase family of proteins converts the chemical energy in ATP to the electrochemical potential energy of a proton gradient across a membrane, or the other way around. The catalysis of the chemical reaction and the movement of protons are coupled to each other via the mechanical rotation of parts of the complex. This is involved in ATP synthesis in the mitochondria and chloroplasts as well as in pumping of protons across the vacuolar membrane.
The bacterial flagellum responsible for the swimming and tumbling of E. coli and other bacteria |
https://en.wikipedia.org/wiki/Cross%20education | Cross education is a neurophysiological phenomenon where an increase in strength is witnessed within an untrained limb following unilateral strength training in the opposite, contralateral limb.
Cross education can also be seen in the transfer of skills from one limb to the other.
Examples
A resistance trainer witnesses strength gains in her left and right biceps after participating in a strength training program for only her right biceps. This phenomenon is due to factors at the muscular, spinal and neural levels.
A basketball player learns to dribble a basketball with his right hand and then successfully performs the task with his left hand even though he had undergone no previous training with his left side. |
https://en.wikipedia.org/wiki/Insect%20trap | Insect traps are used to monitor or directly reduce populations of insects or other arthropods, by trapping individuals and killing them. They typically use food, visual lures, chemical attractants and pheromones as bait and are installed so that they do not injure other animals or humans or result in residues in foods or feeds. Visual lures use light, bright colors and shapes to attract pests. Chemical attractants or pheromones may attract only a specific sex. Insect traps are sometimes used in pest management programs instead of pesticides but are more often used to look at seasonal and distributional patterns of pest occurrence. This information may then be used in other pest management approaches.
The trap mechanism or bait can vary widely. Flies and wasps are attracted by proteins. Mosquitoes and many other insects are attracted by bright colors, carbon dioxide, lactic acid, floral or fruity fragrances, warmth, moisture and pheromones. Synthetic attractants like methyl eugenol are very effective with tephritid flies.
Trap types
Insect traps vary widely in shape, size, and construction, often reflecting the behavior or ecology of the target species. Some common varieties are described below.
Light traps
Light traps, with or without ultraviolet light, attract certain insects. Light sources may include fluorescent lamps, mercury-vapor lamps, black lights, or light-emitting diodes.
Designs differ according to the behavior of the insects being targeted.
Light traps are widely used to survey nocturnal moths. Total species richness and abundance of trapped moths may be influenced by several factors such as night temperature, humidity and lamp type.
Grasshoppers and some beetles are attracted to lights at long range but are repelled by them at short range. Farrow's light trap has a large base so that it captures insects that may otherwise fly away from regular light traps. Light traps can attract flying and terrestrial insects, and lights may be combined with |
https://en.wikipedia.org/wiki/IMUnited | IMUnited was a coalition of instant messaging service providers, including Yahoo! and Microsoft, that wanted AOL to open its proprietary AIM network to them. It appears to have disappeared, possibly because both Yahoo!'s and Microsoft's instant messaging services started to gain popularity.
See also
IMUnified
Instant messaging |
https://en.wikipedia.org/wiki/Five%20Equations%20That%20Changed%20the%20World | Five Equations That Changed the World: The Power and Poetry of Mathematics is a book by Michael Guillen, published in 1995.
It is divided into five chapters that talk about five different equations in physics and the people who have developed them.
The scientists and their equations are:
Isaac Newton (Universal Law of Gravity)
Daniel Bernoulli (Law of Hydrodynamic Pressure)
Michael Faraday (Law of Electromagnetic Induction)
Rudolf Clausius (Second Law of Thermodynamics)
Albert Einstein (Theory of Special Relativity)
The book is a light study in science and history, portraying the preludes to and times and settings of discoveries that have been the basis of further development, including space travel, flight and nuclear power. Each chapter of the book is divided into sections titled Veni, Vidi, Vici.
The reviews of the book have been mixed. Publishers Weekly called it "wholly accessible, beautifully written", Kirkus Reviews wrote that it is a "crowd-pleasing kind of book designed to make the science as palatable as possible", and Frank Mahnke wrote that Guillen "has a nice touch for the history of mathematics and physics and their impact on the world". However, in contrast, Charles Stephens panned "the superficiality of the author's treatment of scientific ideas", and the editors of The Capital Times called the book a "miserable failure" at its goal of helping the public appreciate the beauty of mathematics. |
https://en.wikipedia.org/wiki/Gaisberg%20Transmitter | Gaisberg Transmitter is a facility for FM and TV transmission on the Gaisberg mountain near Salzburg, Austria. It was the first large transmitter in Austria finished after the war and started its work on 22 August 1956 (however, a provisional transmitter had already broadcast a VHF radio signal since 1953 with 1 kW). It used a lattice tower and broadcast Austria's first radio station on 99.0 MHz and third radio station on 94.8 MHz, each with 50 kW, as well as a TV station on channel 8 with 60/12 kW (picture/sound). During the 1980s a UHF antenna was put on top of the tower, bringing its height to 100 meters.
ALDIS (the Austrian Lightning Detection & Information System) maintains the Austrian Lightning Research Station Gaisberg next to the transmitter. |
https://en.wikipedia.org/wiki/Shottsuru | Shottsuru (塩魚汁) is a pungent regional Japanese fish sauce similar to the Thai nam pla. The authentic version is made from the fish known as hatahata (Arctoscopus japonicus, or sailfin sandfish), and its production is associated with the Akita region.
See also
List of fish sauces |
https://en.wikipedia.org/wiki/Correct%20name | In botany, the correct name according to the International Code of Nomenclature for algae, fungi, and plants (ICN) is the one and only botanical name that is to be used for a particular taxon, when that taxon has a particular circumscription, position and rank. Determining whether a name is correct is a complex procedure. The name must be validly published, a process which is defined in no less than 16 Articles of the ICN. It must also be "legitimate", which imposes some further requirements. If there are two or more legitimate names for the same taxon (with the same circumscription, position and rank), then the correct name is the one which has priority, i.e. it was published earliest, although names may be conserved if they have been very widely used. Validly published names other than the correct name are called synonyms. Since taxonomists may disagree as to the circumscription, position or rank of a taxon, there can be more than one correct name for a particular plant. These may also be called synonyms.
The correct name has only one correct spelling, which will generally be the original spelling (although certain limited corrections are allowed). Other spellings are called orthographical variants.
The zoological equivalent of "correct name" is "valid name".
Example
Different taxonomic placements may well lead to different correct names. For example, the earliest name for the fastest growing tree in the world is Adenanthera falcataria L. The "L." stands for "Linnaeus" who first validly published the name. Adenanthera falcataria is thus one of the correct names for this plant. There are other correct names, based on different taxonomic treatments.
It can be placed in the genus Albizia, as Fosberg first did. When placed in this genus, the first choice of correct name is the new genus name followed by the earlier species epithet, giving Albizia falcataria. This name cannot be used if there is already a species in the genus with this epithet, so that an illegitim |
https://en.wikipedia.org/wiki/Interstimulus%20interval | The interstimulus interval (often abbreviated as ISI) is the temporal interval between the offset of one stimulus to the onset of another. For instance, Max Wertheimer did experiments with two stationary, flashing lights that at some interstimulus intervals appeared to the subject as moving instead of stationary. In these experiments, the interstimulus interval is simply the time between the two flashes. The ISI plays a large role in the phi phenomenon (Wertheimer) since the illusion of motion is directly due to the length of the interval between stimuli. When the ISI is shorter, for example between two flashing lines alternating back and forth, we perceive the change in stimuli to be movement. Wertheimer discovered that the space between the two lines is filled in by our brains and that the faster the lines alternate, the more likely we are to perceive it as one line moving back and forth. When the stimuli move fast enough, this creates the illusion of a moving picture like a movie or cartoon. Phi phenomenon is very similar to beta movement.
As it applies to classical conditioning, the term interstimulus interval is used to represent the gap of time between the start of the neutral or conditioned stimulus and the start of the unconditioned stimulus. An example would be the case of Pavlov's dog, where the time between the unconditioned stimulus, the food, and the conditioned stimulus, the bell, is considered the ISI. More particularly, ISI is often used in eyeblink conditioning (a widely studied type of classical conditioning involving puffs of air blown into the subject's eyes) where the ISI can affect learning based on the size of the time gap. What is of interest in this particular type of classical conditioning is that when the subject is conditioned to blink after the conditioned stimulus (tone), the blink will take place within the time period between the tone and the air puff, making the subject's eyes close before the puff can reach the eyes, protecting th |
https://en.wikipedia.org/wiki/Jos%C3%A9%20Luis%20Rodr%C3%ADguez%20Pitt%C3%AD | José Luis Rodríguez Pittí is a contemporary writer, videoartist and documentary photographer.
He is the author of short stories, poems and essays. Rodríguez Pittí is the author of the books Panamá Blues (2010), miniTEXTOS (2008), Sueños urbanos (2008) and Crónica de invisibles (1999). Most of his stories and essays were published in literary magazines and newspapers.
In 1994, the Universidad de Panamá awarded him the Premio "Darío Herrera". Other literary honors include the accésit (runner-up prize) in the Premio Nacional "Signos" 1993 (Panama), the Concurso Nacional de Cuentos "José María Sánchez" 1998 (Panama), the Concurso "Amadís de Gaula" 1999 (Soria, Spain) and the Concurso "Maga" de Cuento Corto 2001 (Panama).
Early life and education
Rodríguez Pittí was born in Panama City on 29 March 1971. He grew up in Mexico City, Santiago de Veraguas and Panama City. He is a resident of Toronto, Canada.
He graduated from the Universidad Tecnológica de Panamá and has been a professor of Computer Vision, Programming Languages and Deep Learning at the Universidad Tecnológica de Panamá and the Universidad Santa María la Antigua.
Biography
He was President of the Writers Association of Panama from 2008 to 2010, and has been founder and President of Fundación El Hacedor since 2007.
From 1990 to 1995 he traveled extensively in the Panamanian region of Azuero to collect stories and to photograph, producing the body of three photo essays: "Viernes Santo en Pesé", "Cuadernos de Azuero", and "Noche de carnaval". Other photography essays are "De diablos, diablicos y otros seres de la mitología panameña" and "Regee Child". Some of his photographs are cover art of books published in Panama. His work has been exhibited in Panama, Mexico, Canada and Italy.
Awards and honors
1993, Finalist, Premio "Signos" de Joven Literatura 1993, awarded in Panama
1994, Premio "Darío Herrera" de Literatura, awarded by the Universidad de Panamá
1994, Premio Canon "Día de la Tierra"
1998, Accésit (runner-up), Premio Nacional de Cuento "José María Sánchez" 1998, awarded in Panama
|
https://en.wikipedia.org/wiki/Necking%20%28engineering%29 | In engineering and materials science, necking is a mode of tensile deformation where relatively large amounts of strain localize disproportionately in a small region of the material. The resulting prominent decrease in local cross-sectional area provides the basis for the name "neck". Because the local strains in the neck are large, necking is often closely associated with yielding, a form of plastic deformation associated with ductile materials, often metals or polymers. Once necking has begun, the neck becomes the exclusive location of yielding in the material, as the reduced area gives the neck the largest local stress.
Formation
Necking results from an instability during tensile deformation when the cross-sectional area of the sample decreases by a greater proportion than the material strain hardens. Armand Considère published the basic criterion for necking in 1885, in the context of the stability of large scale structures such as bridges. Three concepts provide the framework for understanding neck formation.
Before deformation, all real materials have heterogeneities such as flaws or local variations in dimensions or composition that cause local fluctuations in stresses and strains. To determine the location of the incipient neck, these fluctuations need only be infinitesimal in magnitude.
During plastic tensile deformation the material decreases in cross-sectional area due to the incompressibility of plastic flow. (Not due to the Poisson effect, which is linked to elastic behaviour.)
During plastic tensile deformation the material strain hardens. The amount of hardening varies with extent of deformation.
The latter two effects determine the stability while the first effect determines the neck's location.
The Considère treatment
Instability (onset of necking) is expected to occur when an increase in the (local) strain produces no net increase in the load, $F = \sigma_T A$. This will happen when
$$dF = \sigma_T \, dA + A \, d\sigma_T = 0.$$
Since plastic flow conserves volume ($dA/A = -d\varepsilon_T$), this leads to
$$\frac{d\sigma_T}{d\varepsilon_T} = \sigma_T,$$
with the subscript $T$ being used to emphasize that these are true (rather than nominal) stresses and strains. |
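As a worked illustration of the criterion (not part of the original treatment here, and assuming the common Hollomon power-law hardening model):

```latex
\sigma_T = K\varepsilon_T^{\,n}
\quad\Longrightarrow\quad
\frac{d\sigma_T}{d\varepsilon_T} = nK\varepsilon_T^{\,n-1}
= \sigma_T = K\varepsilon_T^{\,n}
\quad\Longrightarrow\quad
\varepsilon_T = n .
```

Under this assumed hardening law, necking is predicted to begin when the true strain reaches the strain-hardening exponent $n$, which is why materials with a higher $n$ sustain more uniform elongation before a neck forms.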
https://en.wikipedia.org/wiki/Mycoremediation | Mycoremediation (from ancient Greek μύκης (mukēs), meaning "fungus" and the suffix -remedium, in Latin meaning 'restoring balance') is a form of bioremediation in which fungi-based remediation methods are used to decontaminate the environment. Fungi have been proven to be a cheap, effective and environmentally sound way for removing a wide array of contaminants from damaged environments or wastewater. These contaminants include heavy metals, organic pollutants, textile dyes, leather tanning chemicals and wastewater, petroleum fuels, polycyclic aromatic hydrocarbons, pharmaceuticals and personal care products, pesticides and herbicides in land, fresh water, and marine environments.
The byproducts of the remediation can be valuable materials themselves, such as enzymes (like laccase) or edible or medicinal mushrooms, making the remediation process even more profitable. Some fungi are useful in the biodegradation of contaminants in extremely cold or radioactive environments where traditional remediation methods prove too costly or are unusable due to the extreme conditions. Mycoremediation can even be used for fire management with the encapsulation method. This process consists of using fungal spores coated with agarose in a pellet form. This pellet is introduced to a substrate in the burnt forest, breaking down the toxins in the environment and stimulating growth.
Pollutants
Fungi, thanks to their non-specific enzymes, are able to break down many kinds of substances, including pharmaceuticals and fragrances that are normally recalcitrant to bacterial degradation, such as paracetamol (also known as acetaminophen). For example, Mucor hiemalis can break down, in a non-toxic way, substances that are problematic in traditional water treatment, such as phenols and pigments of wine distillery wastewater, X-ray contrast agents, and ingredients of personal care products.
Mycoremediation is a cheaper method of remediation, and it doesn't usually require expe |
https://en.wikipedia.org/wiki/PANDAS | Pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS) is a controversial hypothetical diagnosis for a subset of children with rapid onset of obsessive-compulsive disorder (OCD) or tic disorders. Symptoms are proposed to be caused by group A streptococcal (GAS), and more specifically, group A beta-hemolytic streptococcal (GABHS) infections. OCD and tic disorders are hypothesized to arise in a subset of children as a result of a post-streptococcal autoimmune process. The proposed link between infection and these disorders is that an autoimmune reaction to infection produces antibodies that interfere with basal ganglia function, causing symptom exacerbations, and this autoimmune response results in a broad range of neuropsychiatric symptoms.
The PANDAS hypothesis, first described in 1998, was based on observations in clinical case studies by Susan Swedo et al. at the US National Institute of Mental Health and in subsequent clinical trials where children appeared to have dramatic and sudden OCD exacerbations and tic disorders following infections. Whether PANDAS is a distinct entity differing from other cases of tic disorders or OCD is debated. As the PANDAS hypothesis was unconfirmed and unsupported by data, a new definition was proposed by Swedo and colleagues in 2012. In addition to the 2012 broader pediatric acute-onset neuropsychiatric syndrome (PANS), two other categories have been proposed: childhood acute neuropsychiatric symptoms (CANS) and pediatric infection-triggered autoimmune neuropsychiatric disorders (PITAND). The CANS/PANS hypotheses include different possible mechanisms underlying acute-onset neuropsychiatric conditions, but do not exclude GAS infections as a cause in a subset of individuals. PANDAS, PANS and CANS are the focus of clinical and laboratory research but remain unproven.
There is no diagnostic test to accurately confirm PANDAS; the diagnostic criteria are unevenly applied and the conditions ma |
https://en.wikipedia.org/wiki/Spectral%20imaging | Spectral imaging is imaging that uses multiple bands across the electromagnetic spectrum. While an ordinary camera captures light across three wavelength bands in the visible spectrum, red, green, and blue (RGB), spectral imaging encompasses a wide variety of techniques that go beyond RGB. Spectral imaging may use the infrared, the visible spectrum, the ultraviolet, x-rays, or some combination of the above. It may include the acquisition of image data in visible and non-visible bands simultaneously, illumination from outside the visible range, or the use of optical filters to capture a specific spectral range. It is also possible to capture hundreds of wavelength bands for each pixel in an image.
Multispectral imaging captures a small number of spectral bands, typically three to fifteen, through the use of varying filters and illumination. Many off-the-shelf RGB cameras will detect a small amount of Near-Infrared (NIR) light. A scene may be illuminated with NIR light, and, simultaneously, an infrared-passing filter may be used on the camera to ensure that visible light is blocked and only NIR is captured in the image. Industrial, military, and scientific work, however, uses sensors built for the purpose.
Hyperspectral imaging is another subcategory of spectral imaging, which combines spectroscopy and digital photography. In hyperspectral imaging, a complete spectrum or some spectral information (such as the Doppler shift or Zeeman splitting of a spectral line) is collected at every pixel in an image plane. A hyperspectral camera uses special hardware to capture hundreds of wavelength bands for each pixel, which can be interpreted as a complete spectrum. In other words, the camera has a high spectral resolution. The phrase "spectral imaging" is sometimes used as a shorthand way of referring to this technique, but it is preferable to use the term "hyperspectral imaging" in places when ambiguity may arise. Hyperspectral images are often represented as an image c |
https://en.wikipedia.org/wiki/International%20Botanical%20Congress | International Botanical Congress (IBC) is an international meeting of botanists in all scientific fields, authorized by the International Association of Botanical and Mycological Societies (IABMS) and held every six years, with the location rotating between different continents. The current numbering system for the congresses starts from the year 1900; the XVIII IBC was held in Melbourne, Australia, 24–30 July 2011, and the XIX IBC was held in Shenzhen, China, 23–29 July 2017.
The IBC has the power to alter the ICN (International Code of Nomenclature for algae, fungi, and plants), which was renamed from the International Code of Botanical Nomenclature (ICBN) at the XVIII IBC. Formally the power resides with the Plenary Session; in practice this approves the decisions of the Nomenclature Section. The Nomenclature Section meets before the actual Congress and deals with all proposals to modify the Code: this includes ratifying recommendations from sub-committees on conservation. To reduce the risk of a hasty decision the Nomenclature Section adopts a 60% majority requirement for any change not already recommended by a committee.
History
Prior to the first International Botanical Congress, local congresses concerned with natural sciences generally had grown to be very large, and a more specialized but also international meeting was considered desirable. The first annual IBC was held in 1864 in Brussels, in conjunction with an international horticultural exhibit. At the second annual congress (held in Amsterdam), Karl Koch made a proposal to standardize botanical nomenclature, and the third congress (held in London) resolved that this matter would be dealt with by the next congress.
The fourth congress, which had as one of its principal purposes to establish laws of botanical nomenclature, was organized by la Société botanique de France, and took place in Paris in August 1867. The laws adopted were based on those prepared by Alphonse de Candolle. Regular internationa |
https://en.wikipedia.org/wiki/Activation-induced%20cytidine%20deaminase | Activation-induced cytidine deaminase, also known as AICDA, AID and single-stranded DNA cytosine deaminase, is a 24 kDa enzyme which in humans is encoded by the AICDA gene. It creates mutations in DNA by deamination of cytosine base, which turns it into uracil (which is recognized as a thymine). In other words, it changes a C:G base pair into a U:G mismatch. The cell's DNA replication machinery recognizes the U as a T, and hence C:G is converted to a T:A base pair. During germinal center development of B lymphocytes, AID also generates other types of mutations, such as C:G to A:T. The mechanism by which these other mutations are created is not well understood. It is a member of the APOBEC family.
In B cells in the lymph nodes, AID causes mutations that produce antibody diversity, but that same mutation process leads to B cell lymphoma.
Function
This gene encodes a DNA-editing deaminase that is a member of the cytidine deaminase family. The protein is involved in somatic hypermutation, gene conversion, and class-switch recombination of immunoglobulin genes in B cells of the immune system.
AID is currently thought to be the master regulator of secondary antibody diversification. It is involved in the initiation of three separate immunoglobulin (Ig) diversification processes:
Somatic hypermutation (SHM), in which the antibody genes are minimally mutated to generate a library of antibody variants, some of which have higher affinity for a particular antigen than any of their close variants
Class switch recombination (CSR), in which B cells change their expression from IgM to IgG or other antibody isotypes
Gene conversion (GC), a process that causes mutations in antibody genes of chickens, pigs and some other vertebrates.
AID has been shown in vitro to be active on single-stranded DNA, and has been shown to require active transcription in order to exert its deaminating activity. The involvement of cis-regulatory factors is suspected, as AID activity is several orders of |
https://en.wikipedia.org/wiki/Baum%C3%A9%20scale | The Baumé scale is a pair of hydrometer scales developed by French pharmacist Antoine Baumé in 1768 to measure the density of various liquids. The unit of the Baumé scale has been notated variously as degrees Baumé, B°, Bé° and simply Baumé (the accent is not always present). One scale measures the density of liquids heavier than water, and the other, liquids lighter than water. The Baumé of distilled water is 0. The API gravity scale is based on errors in early implementations of the Baumé scale.
Definitions
Baumé degrees (heavy) originally represented the percent by mass of sodium chloride in water at . Baumé degrees (light) was calibrated with 0°Bé (light) being the density of 10% NaCl in water by mass and 10°Bé (light) set to the density of water.
Consider, at near room temperature:
+100°Bé (specific gravity, 3.325) would be among the densest fluids known (except some liquid metals), such as diiodomethane.
Near 0°Bé would be approximately the density of water.
−100°Bé (specific gravity, 0.615) would be among the lightest fluids known, such as liquid butane.
Thus, the system could be understood as representing a practical spectrum of the density of liquids between −100 and 100, with values near 0 being the approximate density of water.
Conversions
The relationship between specific gravity (s.g.; i.e., water-specific gravity, the density relative to water) and degrees Baumé is a function of the temperature. Different versions of the scale may use different reference temperatures. Different conversion formulae can therefore be found in various handbooks.
As an example, a 2008 handbook states the conversions between specific gravity (s.g.) and degrees Baumé at a temperature of 60 °F (15.6 °C) as s.g. = 145/(145 − °Bé) for liquids heavier than water, and s.g. = 140/(130 + °Bé) for liquids lighter than water.
The numerator in the specific gravity calculation is commonly known as the "modulus".
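In code, these conversions are one-liners. The sketch below uses the 145 and 140/130 moduli quoted above; treat the constants as the commonly cited values rather than universal ones, since handbooks differ:

```c
#include <stdio.h>

/* Liquids heavier than water: s.g. = 145 / (145 - Be). */
double sg_from_baume_heavy(double be) { return 145.0 / (145.0 - be); }
double baume_from_sg_heavy(double sg) { return 145.0 - 145.0 / sg; }

/* Liquids lighter than water: s.g. = 140 / (130 + Be). */
double sg_from_baume_light(double be) { return 140.0 / (130.0 + be); }
double baume_from_sg_light(double sg) { return 140.0 / sg - 130.0; }

int main(void)
{
    /* 25 degrees Baume on the heavy scale (roughly saturated brine). */
    printf("25 Be (heavy) -> s.g. %.4f\n", sg_from_baume_heavy(25.0));
    /* Water reads 10 Be on the light scale, consistent with the
       calibration described above. */
    printf("s.g. 1.000 (light) -> %.1f Be\n", baume_from_sg_light(1.0));
    return 0;
}
```

Note that the light-scale function reproduces the calibration stated earlier: a specific gravity of 1 (water) gives 140/1 − 130 = 10 °Bé.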
An older handbook gives the following formulae (no reference temperature being mentioned):
Other scales
Because of vague instructions or errors in translation, a large margin of error was introduced when |
https://en.wikipedia.org/wiki/Motor%20drive | Motor drive means a system that includes a motor. An adjustable speed motor drive means a system that includes a motor that has multiple operating speeds. A variable speed motor drive is a system that includes a motor and is continuously variable in speed. If the motor is generating electrical energy rather than using it, the system could be called a generator drive, but it is often still referred to as a motor drive.
A variable frequency drive (VFD) or variable speed drive (VSD) describes the electronic portion of the system that controls the speed of the motor. More generally, the term drive describes equipment used to control the speed of machinery. Many industrial processes such as assembly lines must operate at different speeds for different products. Where process conditions demand adjustment of flow from a pump or fan, varying the speed of the drive may save energy compared with other techniques for flow control.
Where speeds may be selected from several different pre-set ranges, usually the drive is said to be adjustable speed. If the output speed can be changed without steps over a range, the drive is usually referred to as variable speed.
Adjustable and variable speed drives may be purely mechanical (termed variators), electromechanical, hydraulic, or electronic.
Sometimes motor drive simply refers to the drive used to control a motor, and the term is therefore used interchangeably with VFD or VSD.
Electric motors
AC electric motors can be run in fixed-speed operation, with the speed determined by the number of stator pole pairs in the motor and the frequency of the alternating current supply. AC motors can be made for "pole changing" operation, reconnecting the stator winding to vary the number of poles so that two, sometimes three, speeds are obtained. For example, a machine with 8 physical poles could be connected to run on either 4 or 8 poles, giving two speeds: at 60 Hz, these would be 1800 RPM and 900 RPM. If speed changes are rare, the motor may be initially |
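These speed figures follow from the standard synchronous-speed relation N = 120f/P; a quick illustrative sketch (not from the article):

```c
#include <stdio.h>

/* Synchronous speed in RPM: N = 120 * f / P, where f is the supply
   frequency in Hz and P is the number of stator poles. */
double sync_rpm(double freq_hz, int poles) { return 120.0 * freq_hz / poles; }

int main(void)
{
    printf("60 Hz, 4 poles: %.0f RPM\n", sync_rpm(60.0, 4));  /* 1800 */
    printf("60 Hz, 8 poles: %.0f RPM\n", sync_rpm(60.0, 8));  /*  900 */
    return 0;
}
```

A pole-changing winding that switches between such 2:1 pole counts is often called a Dahlander winding.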
https://en.wikipedia.org/wiki/Paralytic%20shellfish%20poisoning | Paralytic shellfish poisoning (PSP) is one of the four recognized syndromes of shellfish poisoning, which share some common features and are primarily associated with bivalve mollusks (such as mussels, clams, oysters and scallops). These shellfish are filter feeders and accumulate neurotoxins, chiefly saxitoxin, produced by microscopic algae, such as dinoflagellates, diatoms, and cyanobacteria. Dinoflagellates of the genus Alexandrium are the most numerous and widespread saxitoxin producers and are responsible for PSP blooms in subarctic, temperate, and tropical locations. The majority of toxic blooms have been caused by the morphospecies Alexandrium catenella, Alexandrium tamarense, Gonyaulax catenella and Alexandrium fundyense, which together comprise the A. tamarense species complex. In Asia, PSP is mostly associated with the occurrence of the species Pyrodinium bahamense.
Some pufferfish, including the chamaeleon puffer, also contain saxitoxin, making their consumption hazardous.
PSP and cyanobacteria
PSP toxins (of which saxitoxin is the most ubiquitous) are produced by eukaryotic dinoflagellates and prokaryotic cyanobacteria (usually referred to as blue-green algae). Within freshwater ecosystems, the largest contribution to the accumulation of PSP toxins derives from saxitoxin produced by cyanobacteria. The biosynthesis of saxitoxin is well defined in cyanobacteria, while within dinoflagellates it remains mostly unknown. Cyanobacterial saxitoxin biosynthesis has been studied in radioisotope tracing experiments and turns out to be highly complex, involving many steps, enzymes and chemical reactions. The starting reagent, L-arginine, goes through several chemical reactions (among which is a rare chemical reaction known as a Claisen condensation), passing through four intermediates before resulting in saxitoxin.
The Australian freshwater mussel Alathyria condola is highly susceptible to neurotoxin accumulation. After two to three days of exposure to |
https://en.wikipedia.org/wiki/Dirichlet%27s%20approximation%20theorem | In number theory, Dirichlet's theorem on Diophantine approximation, also called Dirichlet's approximation theorem, states that for any real numbers $\alpha$ and $N$, with $1 \le N$, there exist integers $p$ and $q$ such that $1 \le q \le N$ and
$$\left|q\alpha - p\right| \le \frac{1}{\lfloor N \rfloor + 1}.$$
Here $\lfloor N \rfloor$ represents the integer part of $N$.
This is a fundamental result in Diophantine approximation, showing that any real number has a sequence of good rational approximations: in fact an immediate consequence is that for a given irrational α, the inequality
$$\left|\alpha - \frac{p}{q}\right| < \frac{1}{q^2}$$
is satisfied by infinitely many integers p and q. This shows that any irrational number has irrationality measure at least 2. This corollary also shows that the Thue–Siegel–Roth theorem, a result in the other direction, provides essentially the tightest possible bound, in the sense that the bound on rational approximation of algebraic numbers cannot be improved by increasing the exponent beyond 2. The Thue–Siegel–Roth theorem uses advanced techniques of number theory, but many simpler numbers such as the golden ratio can be much more easily verified to be inapproximable beyond exponent 2. This exponent is referred to as the irrationality measure.
Simultaneous version
The simultaneous version of Dirichlet's approximation theorem states that given real numbers $\alpha_1, \ldots, \alpha_d$ and a natural number $N$, there are integers $p_1, \ldots, p_d, q \in \mathbb{Z}$, $1 \le q \le N$, such that
$$\left|\alpha_i - \frac{p_i}{q}\right| \le \frac{1}{qN^{1/d}}, \qquad i = 1, \ldots, d.$$
Method of proof
Proof by the pigeonhole principle
This theorem is a consequence of the pigeonhole principle. Peter Gustav Lejeune Dirichlet, who proved the result, used the same principle in other contexts (for example, the Pell equation) and, by naming the principle (in German), popularized its use, though its status in textbook terms comes later. The method extends to simultaneous approximation.
Proof outline: Let $\alpha$ be an irrational number and $n$ be an integer. For every $k \in \{0, 1, \ldots, n\}$ we can write $k\alpha = \lfloor k\alpha \rfloor + \{k\alpha\}$ such that $\lfloor k\alpha \rfloor$ is an integer and $\{k\alpha\} \in [0, 1)$.
One can divide the interval $[0, 1)$ into $n$ smaller intervals of measure $\frac{1}{n}$. Now, we have $n + 1$ numbers $\{0\}, \{\alpha\}, \ldots, \{n\alpha\}$ and $n$ intervals. Therefore, by the pigeonhole principle, at least two of them are in the same interval. |
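The theorem's conclusion is easy to check numerically by brute force. The sketch below is illustrative only (α = π and N = 100 are arbitrary choices):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Dirichlet: for any real alpha and N >= 1, some 1 <= q <= N and
       integer p satisfy |q*alpha - p| <= 1/(floor(N) + 1).
       Search q = 1..N, taking p as the nearest integer to q*alpha. */
    double alpha = 3.141592653589793;   /* pi, as an example target */
    int N = 100;

    double best = 1.0;
    int best_p = 0, best_q = 1;
    for (int q = 1; q <= N; q++) {
        int p = (int)lround(q * alpha);
        double err = fabs(q * alpha - p);
        if (err < best) { best = err; best_p = p; best_q = q; }
    }
    /* With alpha = pi and N = 100 this reports q = 7, p = 22:
       |7*pi - 22| ~= 0.00885 <= 1/101 ~= 0.00990, as guaranteed. */
    printf("best: |%d*alpha - %d| = %.6f (bound 1/%d = %.6f)\n",
           best_q, best_p, best, N + 1, 1.0 / (N + 1));
    return 0;
}
```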
https://en.wikipedia.org/wiki/Stub%20%28electronics%29 | In microwave and radio-frequency engineering, a stub or resonant stub is a length of transmission line or waveguide that is connected at one end only. The free end of the stub is either left open-circuit, or short-circuited (as is always the case for waveguides). Neglecting transmission line losses, the input impedance of the stub is purely reactive; either capacitive or inductive, depending on the electrical length of the stub, and on whether it is open or short circuit. Stubs may thus function as capacitors, inductors and resonant circuits at radio frequencies.
The behaviour of stubs is due to standing waves along their length. Their reactive properties are determined by their physical length in relation to the wavelength of the radio waves. Therefore, stubs are most commonly used in UHF or microwave circuits in which the wavelengths are short enough that the stub is conveniently small. They are often used to replace discrete capacitors and inductors, because at UHF and microwave frequencies lumped components perform poorly due to parasitic reactance. Stubs are commonly used in antenna impedance matching circuits, frequency selective filters, and resonant circuits for UHF electronic oscillators and RF amplifiers.
Stubs can be constructed with any type of transmission line: parallel conductor line (where they are called Lecher lines), coaxial cable, stripline, waveguide, and dielectric waveguide. Stub circuits can be designed using a Smith chart, a graphical tool which can determine what length line to use to obtain a desired reactance.
Short circuited stub
The input impedance of a lossless, short-circuited line is
$$Z_\mathrm{SC} = j Z_0 \tan(\beta l),$$
where
$j$ is the imaginary unit ($j^2 = -1$),
$Z_0$ is the characteristic impedance of the line,
$\beta$ is the phase constant of the line, and
$l$ is the physical length of the line.
Thus, depending on whether $\tan(\beta l)$ is positive or negative, the short-circuited stub will be inductive or capacitive, respectively.
The length of a stub to act as a capacitor $C$ at an angular frequency $\omega$ is then given by
$$l = \frac{1}{\beta}\left(\pi - \arctan\frac{1}{\omega C Z_0}\right). |$$
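As a numeric illustration of the impedance formula (a sketch under the same lossless assumptions; the 50 Ω line and 0.3 m wavelength are arbitrary example values):

```c
#include <stdio.h>
#include <complex.h>
#include <math.h>

static const double PI = 3.141592653589793;

/* Input impedance of a lossless short-circuited stub:
   Z = j * Z0 * tan(beta * l), with beta = 2*pi / wavelength. */
double complex stub_z_short(double z0, double wavelength, double length)
{
    double beta = 2.0 * PI / wavelength;
    return I * z0 * tan(beta * length);
}

int main(void)
{
    double z0 = 50.0;      /* ohms, a typical coaxial line */
    double lambda = 0.3;   /* metres, roughly 1 GHz in free space */

    /* An eighth-wave shorted stub: tan(pi/4) = 1, so Z = +j50 ohms,
       i.e. the stub behaves as an inductor at this frequency. */
    double complex z = stub_z_short(z0, lambda, lambda / 8.0);
    printf("Z = %.1f%+.1fj ohms\n", creal(z), cimag(z));
    return 0;
}
```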
https://en.wikipedia.org/wiki/Paris%20meridian | The Paris meridian is a meridian line running through the Paris Observatory in Paris, France – now longitude 2°20′14.02500″ East. It was a long-standing rival to the Greenwich meridian as the prime meridian of the world. The "Paris meridian arc" or "French meridian arc" (French: la Méridienne de France) is the name of the meridian arc measured along the Paris meridian.
The French meridian arc was important for French cartography, since the triangulations of France began with the measurement of the French meridian arc. Moreover, the French meridian arc was important for geodesy as it was one of the meridian arcs which were measured to determine the figure of the Earth via the arc measurement method. The determination of the figure of the Earth was a problem of the highest importance in astronomy, as the diameter of the Earth was the unit to which all celestial distances had to be referred.
History
French cartography and the figure of the Earth
In 1634, France, ruled by Louis XIII and Cardinal Richelieu, decided that the Ferro meridian through the westernmost of the Canary Islands should be used as the reference on maps, since El Hierro (Ferro) was the westernmost position on Ptolemy's world map. It was also thought to be exactly 20 degrees west of Paris. The astronomers of the French Academy of Sciences, founded in 1666, managed to clarify the position of El Hierro relative to the meridian of Paris, which gradually supplanted the Ferro meridian. In 1666, Louis XIV of France had authorized the building of the Paris Observatory. On Midsummer's Day 1667, members of the Academy of Sciences traced the future building's outline on a plot outside town near the Port Royal abbey, with the Paris meridian exactly bisecting the site north–south. French cartographers would use it as their prime meridian for more than 200 years. Old maps from continental Europe often have a common grid with Paris degrees at the top and Ferro degrees offset by 20 at the bottom.
A Fr |
https://en.wikipedia.org/wiki/Training%20pants | Training pants are undergarments used by incontinent people, typically young children, as an aid for toilet training. They are intended to be worn during the transition between wearing diapers and being ready to wear regular underpants. Training pants may be reusable and made of fabric, or they may be disposable. In the US, disposable training pants may also be referred to as "pull-ups", and in the UK, training pants are frequently referred to as nappy pants or trainer pants. The main benefit of training pants over diapers is that, unlike traditional diapers, they can be easily pulled down in order to sit on a potty or toilet, and pulled back up for re-use after the person has used the toilet. The main benefit of wearing training pants over regular underpants is that if the person has an accident, they do not soil their environment.
Disposable pants
Flexible sides
Many toilet training pants use flexible sides so that the wearer can easily pull them off and on like normal underwear. This is meant to increase independence, make training easier, and make the pants child-friendly and more like normal underwear, unlike most traditional diapers, which are fastened by inexpensive velcro straps (adjustable in tightness). Also unlike normal diapers, the sides are sold already fastened, with the goal of enabling wearers to put them on independently.
Some brands include strong velcro on the sides, the goal being to keep the sides in place while still enabling a parent to remove the pants if necessary. Conversely, such sides may be more vulnerable to breaking, and they risk losing the psychological benefit of moving away from diapers.
Leak guards
In addition, all training pants have leak protection for when the wearer wets the pant. When the pant is wet, the urine is drawn into an absorbent compartment, much like in a diaper. This is meant to prevent the wetness from ruining any clothing |
https://en.wikipedia.org/wiki/OS-level%20virtualization | OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, called containers (LXC, Solaris containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman), zones (Solaris containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), or jails (FreeBSD jail or chroot jail). Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.
On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. Linux containers are all based on the virtualization, isolation, and resource management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups.
The term container, while most popularly referring to OS-level virtualization systems, is sometimes ambiguously used to refer to fuller virtual machine environments operating in varying degrees of concert with the host OS, e.g., Microsoft's Hyper-V containers. A more historic overview of virtualization in general since 1960 can be found in the Timeline of virtualization development.
Operation
On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include:
Hardware capabilities that can be employed |
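The chroot mechanism that the article describes as the ancestor of these isolation features can be sketched in C as follows (a minimal, hypothetical example: "/srv/jail" is an assumed, pre-populated directory tree, the call requires root privileges, and real container runtimes add namespaces, cgroups, and much more):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Change the apparent root directory for this process and its
       children; "/srv/jail" is a hypothetical prepared filesystem tree. */
    if (chroot("/srv/jail") != 0) {
        perror("chroot");
        return EXIT_FAILURE;
    }
    /* Move into the new root so relative paths cannot escape it. */
    if (chdir("/") != 0) {
        perror("chdir");
        return EXIT_FAILURE;
    }

    /* From here on, "/" refers to /srv/jail on the host: the process
       sees only the jail's contents, which is the core container idea. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return EXIT_FAILURE;
}
```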
https://en.wikipedia.org/wiki/Descriptive%20botanical%20name | Descriptive botanical names are scientific names of groups of plants that are irregular, not being derived systematically from the name of a type genus. They may describe some characteristics of the group in general or may be a name already in existence before regularised scientific nomenclature.
Descriptive names can occur above or at the rank of family. There is only a single descriptive name below the rank of family (the subfamily Papilionoideae).
Above the rank of family
Descriptive names above the rank of family are governed by Article 16 of the International Code of Nomenclature for algae, fungi, and plants (ICN), which rules that a name above the rank of family may either be ‘automatically typified’ (such as Magnoliophyta and Magnoliopsida from the type genus Magnolia) or be descriptive.
Descriptive names of this type may be used unchanged at different ranks (without modifying the suffix). These descriptive plant names are decreasing in importance, becoming less common than ‘automatically typified names’, but many are still in use, such as:
Plantae, Algae, Musci, Fungi, Embryophyta, Tracheophyta, Spermatophyta, Gymnospermae, Coniferae, Coniferales, Angiospermae, Monocotyledones, Dicotyledones, etc.
Many of these descriptive names have a very long history, often preceding Carl Linnaeus. Some are Classical Latin common nouns in the nominative plural, meaning for instance ‘the plants’, ‘the seaweeds’, ‘the mosses’. Like all names above the rank of family, these names follow the Latin grammatical rules of nouns in the plural, and are written with an initial capital letter.
At the rank of family
Article 18.5 of the ICN allows a descriptive name, of long usage, for the following eight families. For each of these families there also exists a name based on the name of an included genus (an alternative name that is also allowed, here in parentheses):
Compositae = "composites" (alternative name: Asteraceae, based on the genus Aster)
Cruciferae = "cross-bearers" |
https://en.wikipedia.org/wiki/Giuga%20number | A Giuga number is a composite number $n$ such that for each of its distinct prime factors $p_i$ we have $p_i \mid \frac{n}{p_i} - 1$, or equivalently such that for each of its distinct prime factors $p_i$ we have $p_i^2 \mid n - p_i$.
The Giuga numbers are named after the mathematician Giuseppe Giuga, and relate to his conjecture on primality.
Definitions
An alternative definition for a Giuga number, due to Takashi Agoh, is: a composite number $n$ is a Giuga number if and only if the congruence
$$n B_{\varphi(n)} \equiv -1 \pmod{n}$$
holds true, where $B_{\varphi(n)}$ is a Bernoulli number and $\varphi$ is Euler's totient function.
An equivalent formulation due to Giuseppe Giuga is: a composite number $n$ is a Giuga number if and only if the congruence
$$\sum_{i=1}^{n-1} i^{\varphi(n)} \equiv -1 \pmod{n}$$
holds, and if and only if
$$\sum_{p \mid n} \frac{1}{p} - \prod_{p \mid n} \frac{1}{p} \in \mathbb{N}.$$
All known Giuga numbers $n$ in fact satisfy the stronger condition
$$\sum_{p \mid n} \frac{1}{p} - \prod_{p \mid n} \frac{1}{p} = 1.$$
Examples
The sequence of Giuga numbers begins
30, 858, 1722, 66198, 2214408306, 24423128562, 432749205173838, … .
For example, 30 is a Giuga number since its prime factors are 2, 3 and 5, and we can verify that
30/2 - 1 = 14, which is divisible by 2,
30/3 - 1 = 9, which is 3 squared, and
30/5 - 1 = 5, the third prime factor itself.
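The defining divisibility test is easy to state in code. The sketch below is illustrative only (the function name and the search bound of 70000 are arbitrary choices, and a serious search would need big-integer arithmetic, since known Giuga numbers grow quickly):

```c
#include <stdio.h>
#include <stdbool.h>

/* True if n is a Giuga number: composite, and for every distinct
   prime factor p of n, p divides n/p - 1. */
bool is_giuga(long long n)
{
    if (n < 4) return false;            /* must be composite */
    long long m = n;
    int factors = 0;
    for (long long p = 2; p * p <= m; p++) {
        if (m % p != 0) continue;
        factors++;
        if ((n / p - 1) % p != 0) return false;
        while (m % p == 0) m /= p;      /* strip this prime entirely */
    }
    if (m > 1) {                        /* one prime factor remains */
        factors++;
        if ((n / m - 1) % m != 0) return false;
    }
    return factors >= 2;                /* two or more primes -> composite */
}

int main(void)
{
    for (long long n = 2; n <= 70000; n++)
        if (is_giuga(n)) printf("%lld\n", n);  /* 30, 858, 1722, 66198 */
    return 0;
}
```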
Properties
The prime factors of a Giuga number must be distinct. If $p^2$ divides $n$, then $\frac{n}{p}$ is itself divisible by $p$; hence $\frac{n}{p} - 1$ would not be divisible by $p$, and thus $n$ would not be a Giuga number.
Thus, only square-free integers can be Giuga numbers. For example, the factors of 60 are 2, 2, 3 and 5, and 60/2 - 1 = 29, which is not divisible by 2. Thus, 60 is not a Giuga number.
This rules out squares of primes, but semiprimes cannot be Giuga numbers either. For if $n = pq$, with $p < q$ primes, then
$\frac{n}{q} - 1 = p - 1 < q$, so $q$ will not divide $\frac{n}{q} - 1$, and thus $n$ is not a Giuga number.
All known Giuga numbers are even. If an odd Giuga number exists, it must be the product of at least 14 primes. It is not known if there are infinitely many Giuga numbers.
It has been conjectured by Paolo P. Lava (2009) that Giuga numbers are the solutions of the differential equation n' = n+1, where n' is the arithmetic derivative of n. (For square-free numbers , |
https://en.wikipedia.org/wiki/Temporal%20resolution | Temporal resolution (TR) refers to the discrete resolution of a measurement with respect to time.
Physics
Often there is a trade-off between the temporal resolution of a measurement and its spatial resolution, due to Heisenberg's uncertainty principle. In some contexts, such as particle physics, this trade-off can be attributed to the finite speed of light and the fact that it takes a certain period of time for the photons carrying information to reach the observer. In this time, the system might have undergone changes itself. Thus, the longer the light has to travel, the lower the temporal resolution.
Technology
Computing
In another context, there is often a tradeoff between temporal resolution and computer storage. A transducer may be able to record data every millisecond, but available storage may not allow this, and in the case of 4D PET imaging the resolution may be limited to several minutes.
Electronic displays
In some applications, temporal resolution may instead be equated to the sampling period, or its inverse, the refresh rate, or update frequency in Hertz, of a TV, for example.
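For instance, the reciprocal relationship between update frequency and sampling period can be sketched in a couple of lines (illustrative only; 60 Hz is just a common display refresh rate):

```c
#include <stdio.h>

int main(void)
{
    /* Temporal resolution as the sampling period: the inverse of the
       refresh rate. A 60 Hz display updates every ~16.7 ms. */
    double refresh_hz = 60.0;
    printf("period = %.2f ms\n", 1000.0 / refresh_hz);
    return 0;
}
```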
Temporal resolution is distinct from temporal uncertainty; conflating the two would be analogous to conflating image resolution with optical resolution. One is discrete, the other continuous.
Temporal resolution is, in a sense, the 'time' dual of the 'space' resolution of an image. In a similar way, the sample rate is the counterpart of the pixel pitch on a display screen, whereas the optical resolution of a display screen is the counterpart of temporal uncertainty.
Note that both these image-space and time resolutions are orthogonal to measurement resolution, even though space and time are also orthogonal to each other. Both an image and an oscilloscope capture can have a signal-to-noise ratio, since both also have measurement resolution.
Oscilloscopy
An oscilloscope is the temporal equivalent of a microscope, and it is limited by temporal uncertainty the same way a m |
https://en.wikipedia.org/wiki/Geomag | Geomag, stylized as GEOMAG, is a magnetic construction toy consisting of a collection of bars, each set with a neodymium alloy magnet at both ends, connected by a magnetic plug coated with polypropylene, and nickel-coated metal spheres. These elements interlock using magnetism, allowing for them to be assembled in various ways.
Geomag was created in May 1998 by Claudio Vicentelli. Geomag products are manufactured by Geomagworld SA, based in Novazzano, Switzerland. To align with the 2009/48/EC law's nickel content regulations for toys, the design was adjusted to feature spheres coated with a bronze alloy.
Invention and Patent
In May 1998, Claudio Vicentelli, a specialist in practical applications of permanent magnets, patented the concept of Geomag.
The patented design outlines the setup of the Geomag bars, which incorporate a metal pin that connects two magnets positioned at opposite ends, along with metallic spheres. The primary objective of this configuration was to minimize the use of magnetic material, thereby reducing production expenses.
Global Expansion of Geomag
Geomag made its debut in the Italian toy chain store Città del Sole in 1999. In 2000, Geomag appeared at several toy fairs in Milan, Nuremberg, and New York. Differences in vision for the development of the toy led to Vicentelli and toy company Plastwood ending their partnership in the same year.
In January 2003, the Swiss company Geomag SA was created in Ticino, with a license to produce Geomag. This company introduced more elements to the toy, such as panels made of semi-transparent colored polycarbonate (triangular platforms, rhombi, squares, and pentagons), used for decorative and structural support purposes. Vicentelli patented these new elements in Europe in 2004.
In 2004, the G-Baby line, aimed at younger children, was introduced, featuring magnetic cubes and half-spheres with magnetic faces.
International regulators such as ASTM USA and the European Commission instituted a rul |
https://en.wikipedia.org/wiki/List%20of%20congenital%20disorders | List of congenital disorders
Numerical
5p syndrome - see Cri du chat syndrome
A
Acrorenal mandibular syndrome
Albinism
Amelia and hemimelia
Amniotic band syndrome
Anencephaly
Angelman syndrome
Aposthia
Arnold–Chiari malformation
B
Bannayan–Zonana syndrome
Bardet–Biedl syndrome
Barth syndrome
Basal-cell nevus syndrome
Beckwith–Wiedemann syndrome
Benjamin syndrome
Bladder exstrophy
Bloom syndrome
Brachydactyly
C
Cat eye syndrome
Caudal regression syndrome
Sotos syndrome (cerebral gigantism)
CHARGE syndrome
Chromosome 16 abnormalities
Chromosome 18 abnormalities
Chromosome 20 abnormalities
Chromosome 22 abnormalities
Cleft lip/palate
Cleidocranial dysostosis
Club foot
Congenital adrenal hyperplasia (CAH)
Congenital central hypoventilation syndrome
Congenital diaphragmatic hernia (CDH)
Congenital Disorder of Glycosylation (CDG)
Congenital hyperinsulinism
Congenital insensitivity to pain with anhidrosis (CIPA)
Congenital pulmonary airway malformation (CPAM)
Conjoined twins
Costello syndrome
Craniopagus parasiticus
Cri du chat syndrome
Cyclopia
Cystic fibrosis
D
De Lange syndrome
Diphallia
Distal trisomy 10q
Down syndrome
E
Ectodermal dysplasia
Ectopia cordis
Ectrodactyly
Encephalocele
F
Fetal alcohol syndrome
Fetofetal transfusion
First arch syndrome
Freeman–Sheldon syndrome
G
Gastroschisis
Genu recurvatum
Goldenhar syndrome
H
Harlequin-type ichthyosis
Heart disorders (Congenital heart defects)
Hemifacial microsomia
Holoprosencephaly
Huntington's disease
Hirschsprung's disease, or congenital aganglionic megacolon
Hypertrichosis
Hypoglossia
Hypomelanism or hypomelanosis (albinism)
Hypospadias
Haemophilia
Heterochromia
Hemochromatosis
I
Imperforate anus
Imperforate hymen
Incontinentia pigmenti
Intestinal neuronal dysplasia
Ivemark syndrome
J
Jacobsen syndrome
K
Katz syndrome
Klinefelter syndrome
Kabuki syndrome
Kyphosis
L
Larsen syndrome
Laurence–Moon syndrome
Lisse |
https://en.wikipedia.org/wiki/Anterior%20spinal%20artery | In human anatomy, the anterior spinal artery is the artery that supplies the anterior portion of the spinal cord. It arises from branches of the vertebral arteries and courses along the anterior aspect of the spinal cord. It is reinforced by several contributory arteries, especially the artery of Adamkiewicz.
Anatomy
Origin
The anterior spinal artery arises bilaterally as two small branches near the termination of the vertebral arteries. One of these vessels is usually larger than the other, but occasionally they are about equal in size.
Course
Descending in front of the medulla oblongata, the two branches unite at the level of the foramen magnum. The single trunk then descends along the front of the medulla spinalis (spinal cord), extending to its lowest part, and is continued as a slender twig on the filum terminale. The vessel passes in the pia mater along the anterior median fissure.
Branches
On its course the artery receives several small branches (i.e. anterior segmental medullary arteries), which enter the vertebral canal through the intervertebral foramina. These branches are derived from the vertebral artery and the ascending cervical artery (a branch of the inferior thyroid artery) in the neck, from the intercostal arteries in the thorax, and from the lumbar artery, iliolumbar artery and lateral sacral arteries in the abdomen and pelvis.
Distribution
It supplies the pia mater, and the substance of the medulla spinalis, also sending off branches at its lower part to be distributed to the cauda equina.
Disorders
Disruption of the anterior spinal artery leads to bilateral disruption of the corticospinal tract, causing motor deficits, and bilateral disruption of the spinothalamic tract, causing sensory deficits in the form of pain/temperature sense loss. It is called anterior spinal artery syndrome. This occurs when the disruption of the anterior spinal artery is at the level of the spinal cord. Contrast this with medial medullary syndrome, when the anterior spinal artery |
https://en.wikipedia.org/wiki/Very%20Large%20Hadron%20Collider | The Very Large Hadron Collider (VLHC) was a proposed future hadron collider planned to be located at Fermilab. The VLHC was planned to be located in a ring, using the Tevatron as an injector. The VLHC would run in two stages: initially the Stage-1 VLHC would have a collision energy of 40 TeV and a luminosity of at least 1×10³⁴ cm⁻²⋅s⁻¹ (matching or surpassing the LHC design luminosity, though the LHC has since surpassed this figure).
After running at Stage-1 for a period of time, the VLHC was planned to run at Stage-2, with the dipole magnets used for bending the beam being replaced by magnets that can reach higher peak magnetic fields, allowing a collision energy of up to 175 TeV, and with other improvements, including raising the luminosity to at least 2×10³⁴ cm⁻²⋅s⁻¹.
Given that such a performance increase necessitates a correspondingly large increase in size, cost, and power requirements, a significant amount of international collaboration over a period of decades would be required to construct such a collider.
See also
Particle physics
Superconducting Super Collider - planned ring circumference of . Canceled after of tunnel had been bored and about billion spent.
High Luminosity Large Hadron Collider
Future Circular Collider |
https://en.wikipedia.org/wiki/Endoplasm | Endoplasm generally refers to the inner (often granulated), dense part of a cell's cytoplasm. This is opposed to the ectoplasm which is the outer (non-granulated) layer of the cytoplasm, which is typically watery and immediately adjacent to the plasma membrane. The nucleus is separated from the endoplasm by the nuclear envelope. The different makeups/viscosities of the endoplasm and ectoplasm contribute to the amoeba's locomotion through the formation of a pseudopod. However, other types of cells have cytoplasm divided into endo- and ectoplasm. The endoplasm, along with its granules, contains water, nucleic acids, amino acids, carbohydrates, inorganic ions, lipids, enzymes, and other molecular compounds. It is the site of most cellular processes as it houses the organelles that make up the endomembrane system, as well as those that stand alone. The endoplasm is necessary for most metabolic activities, including cell division.
The endoplasm, like the cytoplasm, is far from static. It is in a constant state of flux through intracellular transport, as vesicles are shuttled between organelles and to/from the plasma membrane. Materials are regularly both degraded and synthesized within the endoplasm based on the needs of the cell and/or organism. Some components of the cytoskeleton run throughout the endoplasm, though most are concentrated in the ectoplasm, towards the cell's edges, closer to the plasma membrane. The endoplasm's granules are suspended in cytosol.
Granules
The term granule refers to a small particle within the endoplasm, typically the secretory vesicles. The granule is the defining characteristic of the endoplasm, as they are typically not present within the ectoplasm. These offshoots of the endomembrane system are enclosed by a phospholipid bilayer and can fuse with other organelles as well as the plasma membrane. Their membrane is only semipermeable and allows them to house substances that could be harmful to the cell if they were allowed to flow fre |
https://en.wikipedia.org/wiki/Morphallaxis | Morphallaxis is the regeneration of specific tissue in a variety of organisms due to loss or death of the existing tissue. The word comes from the Greek allazein, (αλλάζειν) which means to change.
The classical example of morphallaxis is that of the cnidarian Hydra: when the animal is severed in two (by actively cutting it with, for example, a surgical knife), the severed sections form two fully functional and independent hydras. The notable feature of morphallaxis is that the large majority of regenerated tissue comes from tissue already present in the organism. That is, each severed section of the hydra forms into a smaller version of the original hydra, approximately the same size as the severed section. Hence, there is an "exchange" of tissue.
Researchers Wilson and Child showed circa 1930 that if a hydra was pulped and the dissociated cells passed through a sieve, those cells, when put into an aqueous solution, would shortly reform into the original organism with all differentiated tissue correctly arranged.
Morphallaxis is often contrasted with epimorphosis, which is characterized by a much greater relative degree of cellular proliferation. Although cellular differentiation is active in both processes, in morphallaxis the majority of the regeneration comes from reorganization or exchange, while in epimorphosis the majority of the regeneration comes from cellular differentiation. Thus, the two may be distinguished as a measure of degree. Epimorphosis is the regeneration of a part of an organism by proliferation at the cut surface. For example, in Planaria neoblasts help in regeneration.
History
The word comes from the Greek allazein, which means to exchange. The biological process was first discovered in hydra by Abraham Trembley, who was considered the father of environmental zoology. Abraham Trembley was doing research on a sample of pond water and examined the lifestyle of hydra. He couldn’t decide if they belonged to the animal or |
https://en.wikipedia.org/wiki/Pesante | Pesante is a musical term, meaning "heavy and ponderous." |
https://en.wikipedia.org/wiki/Paramutation | In epigenetics, a paramutation is an interaction between two alleles at a single locus, whereby one allele induces a heritable change in the other allele. The change may be in the pattern of DNA methylation or histone modifications. The allele inducing the change is said to be paramutagenic, while the allele that has been epigenetically altered is termed paramutable. A paramutable allele may have altered levels of gene expression, which may continue in offspring which inherit that allele, even though the paramutagenic allele may no longer be present. Through proper breeding, paramutation can result in siblings that have the same genetic sequence, but with drastically different phenotypes.
Though studied primarily in maize, paramutation has been described in a number of other systems, including animal systems like Drosophila melanogaster and mice. Despite its broad distribution, examples of this phenomenon are scarce and its mechanism is not fully understood.
History
The first description of what would come to be called paramutation was given by William Bateson and Caroline Pellew in 1915, when they described "rogue" peas that always passed their "rogue" phenotype onto their progeny. However, the first formal description of paramutation was given by R.A. Brink at the University of Wisconsin–Madison in the 1950s, who did his work in maize (Zea mays). Brink noticed that specific weakly expressed alleles of the red1 (r1) locus in maize, which encodes a transcription factor that confers red pigment to corn kernels, can heritably change specific strongly expressed alleles to a weaker expression state. The weaker expression state adopted by the changed allele is heritable and can, in turn, change the expression state of other active alleles in a process termed secondary paramutation. Brink showed that the influence of the paramutagenic allele could persist for many generations.
Description
The alleles that cause heritable changes in the alleles they come into contact |
https://en.wikipedia.org/wiki/King%27s%20Valley | King’s Valley is a platform game released by Konami for MSX in 1985. The game is considered a spiritual successor to Konami's earlier arcade game Tutankham (1982), employing similar concepts such as treasure hunting in Egyptian tombs and an identical end-level music tune. It also has similarities to Lode Runner (1983).
The game was initially released on ROM cartridge with 15 levels. It was also planned to be released on floppy disk with 60 levels but that version was shelved. The floppy disk version would ultimately be released a few years later in 1988 as part of Konami Game Collection Vol. 1 on MSX.
Gameplay
As an intrepid adventurer, the player must collect various gems while evading angry mummies and other monsters long enough to find the exit to the next level. A port to MS-DOS, supporting monochrome and CGA graphics cards, was made by a Korean company named APROMAN.
Legacy
A sequel, King's Valley II, was released in two versions, designed specifically for the MSX and MSX2 respectively.
See also
Pharaoh's Revenge (1988) |
https://en.wikipedia.org/wiki/Transvection%20%28genetics%29 | Transvection is an epigenetic phenomenon that results from an interaction between an allele on one chromosome and the corresponding allele on the homologous chromosome. Transvection can lead to either gene activation or repression. It can also occur between nonallelic regions of the genome as well as regions of the genome that are not transcribed.
Mitotic (i.e. non-meiotic) chromosome pairing was first observed via microscopy in 1908 by Nettie Stevens.
Edward B. Lewis at Caltech discovered transvection at the bithorax complex in Drosophila in the 1950s. Since then, transvection has been observed at a number of additional loci in Drosophila, including white, decapentaplegic, eyes absent, vestigial, and yellow.
As stated by Ed Lewis, "Operationally, transvection is occurring if the phenotype of a given genotype can be altered solely by disruption of somatic (or meiotic) pairing. Such disruption can generally be accomplished by introduction of a heterozygous rearrangement that disrupts pairing in the relevant region but has no position effect of its own on the phenotype" (cited by Ting Wu and Jim Morris 1999). Recently, pairing-mediated phenomena have been observed in species other than Drosophila, including mice, humans, plants, nematodes, insects, and fungi. In light of these findings, transvection may represent a potent and widespread form of gene regulation.
Transvection appears to be dependent upon chromosome pairing. In some cases, if one allele is placed on a different chromosome by a translocation, transvection does not occur. Transvection can sometimes be restored in a translocation homozygote, where both alleles may once again be able to pair. Restoration of phenotype has been observed at bithorax, decapentaplegic, eyes absent, and vestigial, and with transgenes of white. In some cases, transvection between two alleles leads to intragenic complementation while disruption of transvection disrupts the complementation.
Transvection i |
https://en.wikipedia.org/wiki/Pitch%20correction | Pitch correction is an electronic effects unit or audio software that changes the intonation (highness or lowness in pitch) of an audio signal so that all pitches will be notes from the equally tempered system (i.e., like the pitches on a piano). Pitch correction devices do this without affecting other aspects of the sound. Pitch correction first detects the pitch of an audio signal (using a live pitch detection algorithm), then calculates the desired change and modifies the audio signal accordingly. Pitch correction devices are most widely used on vocal lines in Western popular music.
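The note-snapping step can be sketched in a few lines (an illustrative calculation, not any specific product's algorithm; A4 = 440 Hz is an assumed reference):

```python
import math

# Snap a detected frequency to the nearest pitch of 12-tone equal
# temperament, the "notes on a piano" mentioned above.
def nearest_equal_tempered(freq_hz, a4=440.0):
    # Distance from A4 in semitones, rounded to the nearest whole note
    semitones = round(12 * math.log2(freq_hz / a4))
    return a4 * 2 ** (semitones / 12)

print(nearest_equal_tempered(452.3))  # sharp A4 pulled down to 440.0 Hz
print(nearest_equal_tempered(427.0))  # flat note snapped to G#4, ~415.3 Hz
```

A full pitch corrector then resynthesizes the audio so that its detected pitch matches the snapped target frequency.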
History
Prior to the invention of pitch correction, errors in vocal intonation in recordings could only be corrected by re-recording the entire song (in the early era of recording) or, after the development of multitrack recording, by overdubbing the incorrect vocal pitches by re-recording those specific notes or sections. By the late 1970s, engineers were fixing parts using the Eventide Harmonizer. Prior to the development of electronic pitch correction devices, there was no way to make "real time" corrections to a live vocal performance in a concert (although lip-syncing was used in some cases where a performer was not able to sing adequately in live performances).
Pitch correction was relatively uncommon before 1997 when Antares Audio Technology's Auto-Tune Pitch Correcting Plug-In was introduced. Developed by Dr. Andy Hildebrand, a geophysical engineer, the software leveraged auto-correlation algorithms originally used in seismic wave mapping for the oil industry. Andy Hildebrand adapted these algorithms for musical applications, offering a more efficient and precise way to correct vocal imperfections. This replaced slow studio techniques with a real-time process that could also be used in live performances.
Auto-Tune is still widely used, as are other pitch-correction algorithms including Celemony's Direct Note Access which allows adjustment of individual notes in a polyphonic au |
https://en.wikipedia.org/wiki/Artin%20L-function | In mathematics, an Artin L-function is a type of Dirichlet series associated to a linear representation ρ of a Galois group G. These functions were introduced in 1923 by Emil Artin, in connection with his research into class field theory. Their fundamental properties, in particular the Artin conjecture described below, have turned out to be resistant to easy proof. One of the aims of proposed non-abelian class field theory is to incorporate the complex-analytic nature of Artin L-functions into a larger framework, such as is provided by automorphic forms and the Langlands program. So far, only a small part of such a theory has been put on a firm basis.
Definition
Given $\rho$, a representation of $G$ on a finite-dimensional complex vector space $V$, where $G = \operatorname{Gal}(L/K)$ is the Galois group of the finite extension $L/K$ of number fields, the Artin $L$-function $L(\rho, s)$ is defined by an Euler product. For each prime ideal $\mathfrak{p}$ in $K$'s ring of integers, there is an Euler factor, which is easiest to define in the case where $\mathfrak{p}$ is unramified in $L$ (true for almost all $\mathfrak{p}$). In that case, the Frobenius element $\operatorname{Frob}(\mathfrak{p})$ is defined as a conjugacy class in $G$. Therefore, the characteristic polynomial of $\rho(\operatorname{Frob}(\mathfrak{p}))$ is well-defined. The Euler factor for $\mathfrak{p}$ is a slight modification of the characteristic polynomial, equally well-defined,

$$\operatorname{charpoly}\bigl(\rho(\operatorname{Frob}(\mathfrak{p}))\bigr)^{-1} = \det\bigl[I - t\,\rho(\operatorname{Frob}(\mathfrak{p}))\bigr]^{-1},$$

as a rational function in $t$, evaluated at $t = N(\mathfrak{p})^{-s}$, with $s$ a complex variable in the usual Riemann zeta function notation. (Here $N$ is the field norm of an ideal.)
When $\mathfrak{p}$ is ramified, and $I$ is the inertia group, which is a subgroup of $G$, a similar construction is applied, but to the subspace of $V$ fixed (pointwise) by $I$.
The Artin L-function is then the infinite product over all prime ideals of these factors. As Artin reciprocity shows, when G is an abelian group these L-functions have a second description (as Dirichlet L-functions when K is the rational number field, and as Hecke L-functions in general). Novelty comes in with non-abelian G and their representations.
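For instance (a standard fact, included here for orientation), taking $\rho$ to be the trivial one-dimensional representation makes every Euler factor equal to $(1 - N(\mathfrak{p})^{-s})^{-1}$, so the product collapses to the Dedekind zeta function of $K$:

$$L(\rho_{\mathrm{triv}}, s) \;=\; \prod_{\mathfrak{p}} \frac{1}{1 - N(\mathfrak{p})^{-s}} \;=\; \zeta_K(s).$$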
One application is to give factorisations of Dedekind zeta-f |
https://en.wikipedia.org/wiki/Scott%20Yanoff | Scott Yanoff (born October 20, 1969) is an IT manager and web developer who was a key person in the early days of the internet, most notably for creating and maintaining the Yanoff List, an alphabetical list of internet sites.
Career
Yanoff authored the Inter-Network Mail Guide, a text written in 1997 documenting the different methods of sending email from one network to another. He was also a co-author of The Web Site Administrator's Survival Guide with Jerry Ablan, a book that explains how to set up, administer, care for, and feed your own Web server. Most of this work was accomplished as an undergraduate student at the University of Wisconsin–Milwaukee, while working as a mainframe/UNIX consultant for the university.
He has worked for SpectraCom, Inc., and the now-defunct Strong Capital Management in Menomonee Falls, Wisconsin and at Northwestern Mutual in Milwaukee, Wisconsin from February, 2004 to June, 2023.
The Yanoff List
In the early and mid-1990s, before the use of search engines, the Yanoff List became an important tool for internet users. The list consisted of internet sites listed alphabetically and grouped by subject acting as a type of internet yellow pages containing hundreds of FTP, gopher, and web locations relevant to each subject. Users of the internet in the early 1990s would eagerly await the latest version of this list. As a minor tribute to his service, a popular Palm-based newsreader, Yanoff, was named after him.
Additional work
Yanoff created a Visual Basic script called "iTunesStats" in 2008 that can be run on Windows-based computers to generate a file of statistics of one's listening habits based upon the user's iTunes library. Additionally, he transcribed popular music guitar tablature in the 1990s, including that of The Beatles, R.E.M., Bruce Springsteen, and U2. |
https://en.wikipedia.org/wiki/Axalto | See Gemalto for current company information.
Axalto was a smart card manufacturer that, during its brief independent existence, employed over 4,500 people in 60 countries and was one of the world's leading providers of microprocessor cards (Gartner, 2005), as well as a major supplier of point-of-sale terminals.
Axalto's business covered the telecommunications, public telephony, finance, retail, transport, entertainment, healthcare, personal identification, information technology and public sector markets. The company recorded sales of over $992 million in 2005 and was fully listed on Euronext, the pan-European market.
History
Starting business as the Smart Card and Terminal Department of Schlumberger, after Schlumberger purchased Sema Group, it was merged with the latter to form SchlumbergerSema.
When Schlumberger sold the IT services business of SchlumbergerSema to Atos Origin, the Smart Card and Terminal Department was again spun off to become Axalto, which went public in 2004, with its initial public offering.
On December 7, 2005, Axalto announced its merger plan with main competitor Gemplus International. On May 19, 2006, the European Commission approved the merger between Axalto and Gemplus, leading to the creation of the new company Gemalto, on June 2, 2006.
External links
Gemalto Official Site
Axalto Official Site
Gemplus Official Site |
https://en.wikipedia.org/wiki/Crossover%20distortion | Crossover distortion is a type of distortion which is caused by switching between devices driving a load. It is most commonly seen in complementary, or "push-pull", Class-B amplifier stages, although it is occasionally seen in other types of circuits as well.
The term crossover signifies the "crossing over" of the signal between devices, in this case, from the upper transistor to the lower and vice versa. The term is not related to the audio loudspeaker crossover filter—a filtering circuit which divides an audio signal into frequency bands to drive separate drivers in multiway speakers.
Distortion mechanism
The image shows a typical class-B emitter-follower complementary output stage. Under no signal conditions, the output is exactly midway between the supplies (i.e., at 0 V). When this is the case, the base-emitter bias of both the transistors is zero, so they are in the cut-off region where the transistors are not conducting.
Consider a positive-going swing: As long as the input is less than the required forward VBE drop (≈ 0.65 V) of the upper NPN transistor, it will remain off or conduct very little. This is the same as a diode operation as far as the base circuit is concerned, and the output voltage does not follow the input (the lower PNP transistor is still off because its base-emitter diode is being reverse biased by the positive-going input). The same applies to the lower transistor but for a negative-going input. Thus, between about ±0.65 V of input, the output voltage is not a true replica or amplified version of the input, and we can see that as a "kink" in the output waveform near 0 V (or where one transistor stops conducting and the other starts). This kink is the most pronounced form of crossover distortion, and it becomes more evident and intrusive when the output voltage swing is reduced.
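The idealized transfer curve described above can be sketched numerically (the 0.65 V drop and the 2 V input amplitude are assumed illustration values):

```python
import numpy as np

VBE = 0.65  # assumed forward base-emitter drop, in volts

def class_b_output(vin):
    # Neither transistor conducts until the input exceeds its V_BE drop,
    # producing the flat "dead zone" (crossover kink) around 0 V.
    return np.where(vin > VBE, vin - VBE,
           np.where(vin < -VBE, vin + VBE, 0.0))

t = np.linspace(0, 2 * np.pi, 9)
vin = 2.0 * np.sin(t)  # 2 V peak input sine
for v_i, v_o in zip(vin, class_b_output(vin)):
    print(f"in {v_i:+.2f} V -> out {v_o:+.2f} V")
# Near the zero crossings the output sticks at 0 V: the crossover "kink".
```

Shrinking the input amplitude toward V_BE makes the dead zone a larger fraction of the waveform, which is why the distortion becomes more intrusive at low output swings.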
Less pronounced forms of distortion may be observed in this circuit as well. An emitter-follower will have a voltage gain of just under 1. In the circuit sho |
https://en.wikipedia.org/wiki/Protein%20misfolding%20cyclic%20amplification | Protein misfolding cyclic amplification (PMCA) is an amplification technique (conceptually like PCR but not involving nucleotides) to multiply misfolded prions originally developed by Soto and colleagues. It is a test for spongiform encephalopathies like CWD or BSE.
Technique
The technique initially incubates a small amount of abnormal prion with an excess of normal protein, so that some conversion takes place. The growing chain of misfolded protein is then blasted with ultrasound, breaking it down into smaller chains and so rapidly increasing the amount of abnormal protein available to cause conversions. Repeating the cycle rapidly converts the mass of normal protein into the prion being tested for.
Development
PMCA was originally developed to mimic prion replication in vitro with an efficiency similar to that of the in vivo process, but with accelerated kinetics. PMCA is conceptually analogous to the polymerase chain reaction: in both systems a template grows at the expense of a substrate in a cyclic reaction, combining growth and multiplication of the template units.
Replication
PMCA has been applied to replicate the misfolded protein from diverse species. The newly generated protein exhibits the same biochemical, biological, and structural properties as brain-derived PrPSc and strikingly it is infectious to wild type animals, producing a disease with similar characteristics as the illness produced by brain-isolated prions.
Automation
The technology has been automated, leading to a dramatic increase in the efficiency of amplification. Now, a single cycle results in a 2500-fold increase in sensitivity of detection over western blotting, whereas 2 and 7 consecutive cycles result in 6 million and 3 billion-fold increases in sensitivity of detection over western blotting, a technique widely used in BSE surveillance in several countries.
Sensitivity
It has been shown that PMCA is capable of detecting as little as a single molecule of oligomeric infectious PrPSc. |
https://en.wikipedia.org/wiki/Online%20producer | An online producer oversees the making of content for websites and other online properties. Online producers are sometimes called "web producers," "publishers," "content producers," or "online editors."
Online producers have a range of responsibilities. They are in charge of arranging, editing, and sometimes even creating website content, which comes in various forms such as writing, music, video, and Adobe Flash. Online producers often, but not always, specialize in one particular form of web content.
The role is distinct from that of web designer, developer, or webmaster. Online producers define and maintain the character of a website, as opposed to running it from a technical standpoint. However, technical and design knowledge is imperative for an online producer to be effective at their job. Online producers are typically responsible for working with system engineers or web designers to design site features with a user-friendly interface for smooth navigation and transitions. This means that an online producer should be familiar with common web publishing technologies such as CSS and HTML to effectively communicate with the system engineers or web designers on their teams.
Online producers may also be responsible for finding ways to boost the popularity of a website and increase user activity, particularly if the website sells advertising space. Online producers will also work with web teams to conceive, design and launch new web products such as blogs, community forums and user profiles.
Online producer roles often feature a project management component. The producer will schedule resources to create content, ensure that the content has passed Q/A on a staging server, and publish the content to the production server, keeping to a pre-defined schedule or project plan.
Annual Pay
The estimated annual pay for an online producer in the United States ranges from around US$42,000 to US$98,000 a year, with the current reported average salary being around |
https://en.wikipedia.org/wiki/AS-Interface | Actuator Sensor Interface (AS-Interface or ASi) is an industrial networking solution (physical layer, data access method and protocol) used in PLC, DCS and PC-based automation systems. It is designed for connecting simple field I/O devices (e.g. binary ON/OFF devices such as actuators, sensors, rotary encoders, analog inputs and outputs, push buttons, and valve position sensors) in discrete manufacturing and process applications using a single two-conductor cable.
AS-Interface is an 'open' technology supported by a multitude of automation equipment vendors. The AS-Interface has been an international standard according to IEC 62026-2 since 1999.
AS-Interface is a networking alternative to the hard wiring of field devices. It can be used as a partner network for higher level fieldbus networks such as Profibus, DeviceNet, Interbus and Industrial Ethernet, for whom it offers a low-cost remote I/O solution. It is used in automation applications, including conveyor control, packaging machines, process control valves, bottling plants, electrical distribution systems, airport baggage carousels, elevators, bottling lines and food production lines. AS-Interface provides a basis for Functional Safety in machinery safety/emergency stop applications. Safety devices communicating over AS-Interface follow all the normal AS-Interface data rules. The AS-Interface specification is managed by AS-International, a member funded non-profit organization located in Gelnhausen/Germany. Several international subsidiaries exist around the world.
History
AS-Interface was developed during the late 1980s and early 1990s by a development partnership of 11 companies mostly known for their offering of industrial non-contact sensing devices like inductive sensors, photoelectric sensors, capacitive sensors and ultrasonic sensors. Once development was completed, the consortium was dissolved and a member organization, AS-International, was founded. The first operational system was shown at the 1994 Ha |
https://en.wikipedia.org/wiki/Refinement%20calculus | The refinement calculus is a formalized approach to stepwise refinement for program construction. The required behaviour of the final executable program is specified as an abstract and perhaps non-executable "program", which is then refined by a series of correctness-preserving transformations into an efficiently executable program.
Proponents include Ralph-Johan Back, who originated the approach in his 1978 PhD thesis On the Correctness of Refinement Steps in Program Development, and Carroll Morgan, especially with his book Programming from Specifications (Prentice Hall, 2nd edition, 1994). In the latter case, the motivation was to link Abrial's specification notation Z, via a rigorous relation of behaviour-preserving program refinement, to an executable programming notation based on Dijkstra's language of guarded commands. Behaviour-preserving in this case means that any Hoare triple satisfied by a program should also be satisfied by any refinement of it, which notion leads directly to specification statements as pre- and postconditions standing, on their own, for any program that could soundly be placed between them. |
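As a small illustration (the example is ours, in Morgan-style notation, not taken from the book): a specification statement $x:[\mathit{pre},\ \mathit{post}]$ stands for any program that, assuming $\mathit{pre}$, changes only $x$ and establishes $\mathit{post}$, and it can be refined ($\sqsubseteq$) into executable guarded commands, e.g. for computing a maximum:

$$x:[\,\mathit{true},\ x \ge a \wedge x \ge b \wedge (x = a \vee x = b)\,] \;\sqsubseteq\; \mathbf{if}\ a \ge b \rightarrow x := a \;\;[\!]\;\; b \ge a \rightarrow x := b\ \mathbf{fi}$$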
https://en.wikipedia.org/wiki/Oxygen%20pulse | Oxygen pulse is a physiological term for oxygen uptake per heartbeat at rest. |
https://en.wikipedia.org/wiki/Retromer | Retromer is a complex of proteins that has been shown to be important in recycling transmembrane receptors from endosomes to the trans-Golgi network (TGN) and directly back to the plasma membrane. Mutations in retromer and its associated proteins have been linked to Alzheimer's and Parkinson's diseases.
Background
Retromer is a heteropentameric complex, which in humans is composed of a less defined membrane-associated sorting nexin dimer (SNX1, SNX2, SNX5, SNX6), and a vacuolar protein sorting (Vps) heterotrimer containing Vps26, Vps29, and Vps35. Although the SNX dimer is required for the recruitment of retromer to the endosomal membrane, the cargo binding function of this complex is contributed by the core heterotrimer through the binding of Vps26 and Vps35 subunits to various cargo molecules including M6PR, wntless, SORL1 (which is also a receptor for other cargo proteins such as APP), and sortilin. Early studies on the sorting of acid hydrolases such as carboxypeptidase Y (CPY) in S. cerevisiae mutants led to the identification of retromer as the mediator of the retrograde trafficking of the pro-CPY receptor (Vps10) from the endosomes to the TGN.
Structure
The retromer complex is highly conserved: homologs have been found in C. elegans, mouse and human. The retromer complex consists of 5 proteins in yeast: Vps35p, Vps26p, Vps29p, Vps17p, Vps5p. The mammalian retromer consists of Vps26, Vps29, Vps35, SNX1 and SNX2, and possibly SNX5 and SNX6. It is proposed to act in two subcomplexes: (1) A cargo recognition heterotrimeric complex that consist of Vps35, Vps29 and Vps26, and (2) SNX-BAR dimers, which consist of SNX1 or SNX2 and SNX5 or SNX6 that facilitate endosomal membrane remodulation and curvature, resulting in the formation of tubules/vesicles that transport cargo molecules to the trans-golgi network (TGN). Humans have two orthologs of VPS26: VPS26A, which is ubiquitous, and VPS26B, which is found in the central nervous system, where it forms a unique retr |
https://en.wikipedia.org/wiki/Ralph-Johan%20Back | Ralph-Johan Back is a Finnish computer scientist. Back originated the refinement calculus, an important approach to the formal development of programs using stepwise refinement, in his 1978 PhD thesis at the University of Helsinki, On the Correctness of Refinement Steps in Program Development. He has undertaken much subsequent research in this area. He has held positions at CWI Amsterdam, the Academy of Finland and the University of Tampere.
Since 1983, he has been Professor of Computer Science at the Åbo Akademi University in Turku. For 2002–2007, he was an Academy Professor at the Academy of Finland. He is Director of CREST (Center for Reliable Software Technology) at Åbo Akademi.
Back is a member of Academia Europaea. |
https://en.wikipedia.org/wiki/Keynesian%20beauty%20contest | A Keynesian beauty contest describes a beauty contest where judges are rewarded for selecting the most popular faces among all judges, rather than those they may personally find the most attractive. This idea is often applied in financial markets, whereby investors could profit more by buying whichever stocks they think other investors will buy, rather than the stocks that have fundamentally the best value: when other people buy a stock, they bid up its price, allowing an earlier investor to cash out with a profit, regardless of whether the price increase is supported by the stock's fundamentals.
The concept was developed by John Maynard Keynes and introduced in Chapter 12 of his work, The General Theory of Employment, Interest and Money (1936), to explain price fluctuations in equity markets.
Overview
Keynes described the action of rational agents in a market using an analogy based on a fictional newspaper contest, in which entrants are asked to choose the six most attractive faces from a hundred photographs. Those who picked the most popular faces are then eligible for a prize.
A naive strategy would be to choose the face that, in the opinion of the entrant, is the most handsome. A more sophisticated contest entrant, wishing to maximize the chances of winning a prize, would think about what the majority perception of attractiveness is, and then make a selection based on some inference from their knowledge of public perceptions. This can be carried one step further to take into account the fact that other entrants would each have their own opinion of what public perceptions are. Thus the strategy can be extended to the next order and the next and so on, at each level attempting to predict the eventual outcome of the process based on the reasoning of other rational agents.
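This iterated reasoning is commonly formalized as the "guess 2/3 of the average" game (a standard experimental proxy for the beauty contest; the specific numbers below are illustrative assumptions): each player picks a number in [0, 100], and whoever is closest to 2/3 of the mean guess wins.

```python
p = 2.0 / 3.0   # target fraction of the average (assumed game parameter)
guess = 50.0    # level-0: a naive guess at the midpoint of [0, 100]

for level in range(1, 8):
    guess *= p  # level-k reasoning best-responds to level-(k-1) guesses
    print(f"level-{level} reasoning: guess {guess:.2f}")
# Carrying the reasoning to infinitely many levels drives the guess to 0,
# the game's unique Nash equilibrium.
```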
"It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached |
https://en.wikipedia.org/wiki/Adult%20stem%20cell | Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first is their ability to divide or self-renew indefinitely, and the second is their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine whether an adult stem cell is capable of becoming a specialized cell: the adult stem cell can be labeled in vivo and tracked; it can be isolated and then transplanted back into the organism; or it can be isolated and manipulated in vitro with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u |
https://en.wikipedia.org/wiki/Cathemerality | Cathemerality, sometimes called "metaturnality", is an activity pattern in which an organism is active at irregular intervals during the day or night to acquire food, socialize with other organisms, and carry out any other activities necessary for its livelihood. This activity differs from the generally monophasic pattern (sleeping once per day) of nocturnal and diurnal species, as it is polyphasic (sleeping 4–6 times per day) and is approximately evenly distributed throughout the 24-hour cycle.
Many animals do not fit the traditional definitions of being strictly nocturnal, diurnal, or crepuscular, often driven by factors that include the availability of food, predation pressure, and variable ambient temperature. Although cathemerality is not as widely observed in individual species as diurnality or nocturnality, this activity pattern is seen across the mammal taxa, such as in lions, coyotes, and lemurs.
Cathemeral behaviour can also vary on a seasonal basis over an annual period by exhibiting periods of predominantly nocturnal behaviour and exhibiting periods of predominantly diurnal behaviour. For example, seasonal cathemerality has been described for the mongoose lemur (Eulemur mongoz) as activity that shifts from being predominantly diurnal to being predominantly nocturnal over a yearly cycle, but the common brown lemurs (Eulemur fulvus) have been observed as seasonally shifting from diurnal activity to cathemerality.
As research on cathemerality continues, many factors have been identified as influencing whether or why an animal behaves cathemerally. Such factors include resource variation, food quality, photoperiodism, nocturnal luminosity, temperature, predator avoidance, and energetic constraints.
Etymology
In the original manuscript for his article "Patterns of activity in the Mayotte lemur, Lemur fulvus mayottensis," Ian Tattersall introduced the term cathemerality to describe a pattern of observed activity that was neither diurnal nor nocturn |
https://en.wikipedia.org/wiki/Optimal%20maintenance | Optimal maintenance is the discipline within operations research concerned with maintaining a system in a manner that maximizes profit or minimizes cost. Cost functions depending on the reliability, availability and maintainability characteristics of the system of interest determine the parameters to minimize. Parameters often considered are the cost of failure, the cost per time unit of "downtime" (for example: revenue losses), the cost (per time unit) of corrective maintenance, the cost per time unit of preventive maintenance and the cost of repairable system replacement [Cassady and Pohl]. The foundation of any maintenance model relies on the correct description of the underlying deterioration process and failure behavior of the component, and on the relationships between maintained components in the product breakdown (system / sub-system / assembly / sub-assembly...).
Optimal Maintenance strategies are often constructed using stochastic models and focus on finding an optimal inspection time or the optimal acceptable degree of system degradation before maintenance and/or replacement. Cost considerations on an Asset scale may also lead to select a "run-to-failure" approach for specific components.
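One classic instance (a standard age-replacement model from this literature; the Weibull parameters and costs below are assumptions for illustration) chooses the preventive replacement age T that minimizes the long-run expected cost per unit time:

```python
import numpy as np

shape, scale = 2.5, 1000.0   # assumed Weibull deterioration model
c_p, c_f = 1.0, 5.0          # preventive vs corrective replacement cost

t = np.linspace(1e-3, 3000.0, 30000)
R = np.exp(-(t / scale) ** shape)            # reliability (survival) function
mean_cycle = np.cumsum(R) * (t[1] - t[0])    # approximates integral_0^T R dt
# Renewal-reward: expected cost per cycle divided by expected cycle length
cost_rate = (c_p * R + c_f * (1.0 - R)) / mean_cycle

best = np.argmin(cost_rate)
print(f"optimal replacement age ~ {t[best]:.0f}, cost rate {cost_rate[best]:.5f}")
```

Raising the corrective-to-preventive cost ratio c_f/c_p pushes the optimal replacement age earlier, as intuition suggests.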
There are four main survey papers that cover the spectrum of optimal maintenance:
Y.S. Sherif, M.L. Smith, "Optimal maintenance models for systems subject to failure – a review", Naval Research Logistics Quarterly, 1981.
C. Valdez-Flores, R.M. Feldman, “A survey of preventive maintenance models for stochastically deteriorating single-unit systems”, Naval Research Logistics, vol 36, 1989 Aug, pp 419–446.
J.J. McCall, “Maintenance policies for stochastically failing equipment:a survey”, Management Science, vol 11, 1965 Mar, pp 493–524.
W.P. Pierskalla, J.A. Voelker, "A survey of maintenance models: The control and surveillance of deteriorating systems", Naval Research Logistics Quarterly, vol 23, 1976 Sep, pp 353–388. |
https://en.wikipedia.org/wiki/Edward%20Dwelly | Edward Dwelly (1864–1939) was an English lexicographer and genealogist. He created the authoritative dictionary of Scottish Gaelic, and his work has had an influence on Irish Gaelic lexicography. He also practised as a professional genealogist and published transcripts of many original documents relating to Somerset.
Biography
Born in Twickenham, Middlesex, in England, he became interested in Scottish Gaelic after being stationed in Scotland with the army and working with the Ordnance Survey. He began collecting words at the age of seventeen and was also a keen bagpiper.
He released the dictionary in sections from 1901 onwards, and the first full edition of his Illustrated Gaelic Dictionary in 1911, under the pen name Eoghann MacDhòmhnaill (Ewen MacDonald), fearing that his work would not be well accepted under his own obviously English name.
He continued collating entries from older dictionaries and also recording thousands of new words, both from publications and from his travels in the Gaelic-speaking parts of Scotland. He illustrated, printed, bound and marketed his dictionary with help from his children and wife Mary McDougall (from Kilmadock) whom he had married in 1896, herself a native Gaelic speaker, teaching himself the skills required.
In 1912, Dwelly self-published his Compendium of Notes on the Dwelly Family, a 54-page genealogical work on the Dwelly family from a John Duelye in 1229, mainly covering Britain, but with an American section, and pedigrees and parish register extracts with supporting notes.
He subsequently gained a state pension from Edward VII for his work. In later life, alienated by the attitude of some people in Scotland, both Gaels and non-speakers, he returned to England, leaving behind his great legacy and dying in obscurity.
In 1991, the late Dr Douglas Clyne sourced several manuscripts in the National Library of Scotland which were published by him as Appendix to Dwelly's Gaelic-English Dictionary, over half of the entries be |
https://en.wikipedia.org/wiki/Spin%20tensor | In mathematics, mathematical physics, and theoretical physics, the spin tensor is a quantity used to describe the rotational motion of particles in spacetime. The spin tensor has application in
general relativity and special relativity, as well as quantum mechanics, relativistic quantum mechanics, and quantum field theory.
The special Euclidean group SE(d) of direct isometries is generated by translations and rotations. Its Lie algebra is written $\mathfrak{se}(d)$.
This article uses Cartesian coordinates and tensor index notation.
Background on Noether currents
The Noether current for translations in space is momentum, while the current for increments in time is energy. These two statements combine into one in spacetime: translations in spacetime, i.e. a displacement between two events, is generated by the four-momentum $P$. Conservation of four-momentum is given by the continuity equation:

$$\partial_\nu T^{\mu\nu} = 0,$$

where $T^{\mu\nu}$ is the stress–energy tensor, and $\partial$ are partial derivatives that make up the four-gradient (in non-Cartesian coordinates this must be replaced by the covariant derivative). Integrating over space:

$$P^\mu = \int_V T^{\mu 0}\, d^3x$$

gives the four-momentum vector at time t.
The Noether current for a rotation about the point y is given by a tensor of 3rd order, denoted $M_y^{\alpha\beta\mu}$. Because of the Lie algebra relations

$$M_y^{\alpha\beta\mu} = M_0^{\alpha\beta\mu} - y^\alpha T^{\beta\mu} + y^\beta T^{\alpha\mu},$$

where the 0 subscript indicates the origin (unlike momentum, angular momentum depends on the origin), the integral:

$$M^{\alpha\beta} = \int_V M_0^{\alpha\beta 0}\, d^3x$$
gives the angular momentum tensor at time t.
Definition
The spin tensor is defined at a point x to be the value of the Noether current at x of a rotation about x,

$$S^{\alpha\beta\mu}(x) = M_x^{\alpha\beta\mu}(x) = M_0^{\alpha\beta\mu}(x) - x^\alpha T^{\beta\mu}(x) + x^\beta T^{\alpha\mu}(x).$$

The continuity equation

$$\partial_\mu M_0^{\alpha\beta\mu} = 0$$

implies:

$$\partial_\mu S^{\alpha\beta\mu} = T^{\alpha\beta} - T^{\beta\alpha} \neq 0,$$

and therefore, the stress–energy tensor is not a symmetric tensor.
The quantity S is the density of spin angular momentum (spin in this case is not only for a point-like particle, but also for an extended body), and M is the density of orbital angular momentum. The total angular momentum is always the sum of spin and orbital contributions.
The relation:

$$T^{\alpha\beta} - T^{\beta\alpha} = \partial_\mu S^{\alpha\beta\mu}$$

gives the torque density showing the rate of con |
https://en.wikipedia.org/wiki/Phytosociology | Phytosociology, also known as phytocoenology or simply plant sociology, is the study of groups of species of plant that are usually found together. Phytosociology aims to empirically describe the vegetative environment of a given territory. A specific community of plants is considered a social unit, the product of definite conditions, present and past, and can exist only when such conditions are met. In phytosociology, such a unit is known as a phytocoenosis (or phytocoenose). A phytocoenosis is more commonly known as a plant community, and consists of the sum of all plants in a given area. It is a subset of a biocoenosis, which consists of all organisms in a given area. More strictly speaking, a phytocoenosis is a set of plants in an area that are interacting with each other through competition or other ecological processes. Coenoses are not equivalent to ecosystems, which consist of organisms and the physical environment that they interact with. A phytocoenosis has a distribution which can be mapped. Phytosociology has a system for describing and classifying these phytocoenoses in a hierarchy, known as syntaxonomy, and this system has a nomenclature. The science is most advanced in Europe, Africa and Asia.
In the United States this concept was largely rejected in favour of studying environments in more individualistic terms regarding species, where specific associations of plants occur randomly because of individual preferences and responses to gradients, and there are no sharp boundaries between phytocoenoses. The terminology 'plant community' is usually used in the US for a habitat consisting of a number of specific plant species.
It has been a successful approach in the scope of contemporary vegetation science because of its highly descriptive and predictive powers, and its usefulness in nature management issues.
History
The term 'phytosociology' was coined in 1896 by Józef Paczoski. The term 'phytocoenology' was coined by Helmut Gams in 1918. While the termin |
https://en.wikipedia.org/wiki/CCSID | A CCSID (coded character set identifier) is a 16-bit number that represents a particular encoding of a specific code page. For example, Unicode is a code page that has several character encoding schemes (referred to as "transformation forms")—including UTF-8, UTF-16 and UTF-32—but which may or may not actually be accompanied by a CCSID number to indicate that this encoding is being used.
Difference between a code page and a CCSID
The terms code page and CCSID are often used interchangeably, even though they are not synonymous. A code page may be only part of what makes up a CCSID. The following definitions from IBM help to illustrate this point:
A glyph is the actual physical pattern of pixels or ink that shows up on a display or printout.
A character is a concept that covers all glyphs associated with a certain symbol. For instance, a plain "F", a bold "F", an italic "F", an underlined "F", and "F"s in different colors or fonts are all different glyphs, but they use the same character. The various modifiers (bold, italic, underline, color, and font) do not change the F's essential F-ness.
A character set contains the characters necessary to allow a particular human to carry on a meaningful interaction with the computer. It does not specify how those characters are represented in a computer. This level is the first one to separate characters into various alphabets (Latin, Arabic, Hebrew, Cyrillic, and so on) or ideographic groups (e.g., Chinese, Korean). It corresponds to a "character repertoire" in the Unicode encoding model.
A code page represents a particular assignment of code point values to characters. It corresponds to a "coded character set" in the Unicode encoding model. A code point for a character is the computer's internal representation of that character in a given code page. Many characters are represented by different code points in different code pages. Certain character sets can be adequately represented with single-byte code pages (which have a maximum 256 code points, hence a maximum of 256 characters), but m |
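The distinction can be seen directly with Python's built-in codecs ("cp037" is Python's name for IBM EBCDIC code page 037, commonly paired with CCSID 37): the same character occupies different code points under different code pages.

```python
# The Latin capital letter A under two different code pages:
text = "A"
print(text.encode("latin-1"))  # b'A'    - code point 0x41 in ASCII/Latin-1
print(text.encode("cp037"))    # b'\xc1' - code point 0xC1 in EBCDIC
```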
https://en.wikipedia.org/wiki/Quarter-pixel%20motion | Quarter-pixel motion (also known as Q-pel motion or Qpel motion) refers to using a quarter of the distance between pixels (or luma sample positions) as the motion vector precision for motion estimation and motion compensation in video compression schemes. It is used in many modern video coding formats such as MPEG-4 ASP, H.264/AVC, and HEVC. Though higher precision motion vectors take more bits to encode, they can sometimes result in more efficient compression overall, by increasing the quality of the prediction signal.
Operation
Video encoding software products such as Xvid, 3ivx, and DivX Pro Codec, which are based upon the MPEG-4 specification, use motion estimation algorithms to significantly improve video compression. The default level of resolution for motion estimation for most MPEG-4 ASP implementations is half a pixel, although quarter pixel is specified under the standard. H.264 decoders always support quarter-pixel motion. Quarter-pixel resolution can improve the quality of the video prediction signal as compared to half-pixel resolution, although the improvement may not always be enough to offset the increased bit cost of the quarter-pixel-precision motion vector; additional techniques such as rate-distortion optimization, which takes both quality and bit cost into account, are used to significantly improve the effectiveness of quarter-pel motion estimation.
Interpolation methods
Quarter-pixel motion compensation, much like half-pixel, is achieved through interpolation. Different specific schemes are used in different designs:
VC-1 uses bicubic interpolation.
H.264/AVC uses a 6-tap filter for half-pixel interpolation and then simple linear interpolation to achieve quarter-pixel precision from the half-pixel data (a sketch follows this list).
HEVC uses separable 7-tap or 8-tap filtering.
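A simplified one-dimensional sketch of the H.264-style scheme (array bounds, 2-D separability, and some rounding details are glossed over; the 6-tap coefficients (1, -5, 20, 20, -5, 1) are the standard half-pel filter):

```python
import numpy as np

def half_pel(row, i):
    # 6-tap filter over the integer samples around position i + 1/2,
    # with rounding, a shift by 5 (division by 32), and clipping to 8 bits.
    taps = np.array([1, -5, 20, 20, -5, 1])
    window = row[i - 2 : i + 4]
    return np.clip((window @ taps + 16) >> 5, 0, 255)

def quarter_pel(row, i):
    # Bilinear average of the integer sample and the adjacent half-pel
    # value gives the sample at position i + 1/4.
    h = half_pel(row, i)
    return (int(row[i]) + int(h) + 1) >> 1

row = np.array([10, 12, 30, 80, 90, 60, 20, 15], dtype=np.int64)
print(half_pel(row, 3), quarter_pel(row, 3))  # values at positions 3.5 and 3.25
```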
Hardware compatibility in MPEG-4 ASP
Videos encoded with quarter-pixel precision motion vectors require up to twice as much processing power to encode, and 30-60% more processing power to d |
https://en.wikipedia.org/wiki/Rosette%20%28botany%29 | In botany, a rosette is a circular arrangement of leaves or of structures resembling leaves.
In flowering plants, rosettes usually sit near the soil. Their structure is an example of a modified stem in which the internode gaps between the leaves do not expand, so that all the leaves remain clustered tightly together and at a similar height. Some insects induce the development of galls that are leafy rosettes.
In bryophytes and algae, a rosette results from the repeated branching of the thallus as the plant grows, resulting in a circular outline.
Taxonomies
Many plant families have varieties with rosette morphology; they are particularly common in Asteraceae (such as dandelions), Brassicaceae (such as cabbage), and Bromeliaceae. The fern Blechnum fluviatile or New Zealand Water Fern (kiwikiwi) is a rosette plant.
Function in flowering plants
Often, rosettes form in perennial plants whose upper foliage dies back with the remaining vegetation protecting the plant. Another form occurs when internodes along a stem are shortened, bringing the leaves closer together, as in lettuce, dandelion and some succulents. (When plants such as lettuce grow too quickly, the stem lengthens instead, a condition known as bolting.) In yet other forms, the rosette persists at the base of the plant (such as the dandelion), and there is a taproot.
Protection
Part of the protective function of a rosette like the dandelion is that it is hard to pull from the ground; the leaves come away easily while the taproot is left intact.
Another kind of protection is provided by the caulescent rosette, which is part of the growth form of the giant genus Espeletia in South America, which has a well-developed stem above the ground. In tropical alpine environments, a wide variety of plants in different plant families and different parts of the world have evolved this growth form characterized by evergreen rosettes growing above marcescent leaves. Examples where this arrangement has been confirmed to |
https://en.wikipedia.org/wiki/Computer%20network%20diagram | A computer network diagram is a schematic depicting the nodes and connections amongst nodes in a computer network or, more generally, any telecommunications network. Computer network diagrams form an important part of network documentation.
Symbolization
Readily identifiable icons are used to depict common network appliances, e.g. routers, and the style of lines between them indicates the type of connection. Clouds are used to represent networks external to the one pictured for the purposes of depicting connections between internal and external devices, without indicating the specifics of the outside network. For example, in the hypothetical local area network pictured to the right, three personal computers and a server are connected to a switch; the server is further connected to a printer and a gateway router, which is connected via a WAN link to the Internet.
Depending on whether the diagram is intended for formal or informal use, certain details may be lacking and must be determined from context. For example, the sample diagram does not indicate the physical type of connection between the PCs and the switch, but since a modern LAN is depicted, Ethernet may be assumed. If the same style of line was used in a WAN (wide area network) diagram, however, it may indicate a different type of connection.
At different scales diagrams may represent various levels of network granularity. At the LAN level, individual nodes may represent individual physical devices, such as hubs or file servers, while at the WAN level, individual nodes may represent entire cities. In addition, when the scope of a diagram crosses the common LAN/MAN/WAN boundaries, representative hypothetical devices may be depicted instead of showing all actually existing nodes. For example, if a network appliance is intended to be connected through the Internet to many end-user mobile devices, only a single such device may be depicted for the purposes of showing the general relationship between the ap |
https://en.wikipedia.org/wiki/Quintuple%20bond | A quintuple bond in chemistry is an unusual type of chemical bond, first reported in 2005 for a dichromium compound. Single bonds, double bonds, and triple bonds are commonplace in chemistry. Quadruple bonds are rarer and are currently known only among the transition metals, especially for Cr, Mo, W, and Re, e.g. [Mo₂Cl₈]⁴⁻ and [Re₂Cl₈]²⁻. In a quintuple bond, ten electrons participate in bonding between the two metal centers, allocated as σ²π⁴δ⁴.
In some cases of high-order bonds between metal atoms, the metal-metal bonding is facilitated by ligands that link the two metal centers and reduce the interatomic distance. By contrast, the chromium dimer with quintuple bonding is stabilized by bulky terphenyl (2,6-[(2,6-diisopropyl)phenyl]phenyl) ligands. The species is stable up to 200 °C. The chromium–chromium quintuple bond has been analyzed with multireference ab initio and DFT methods, which were also used to elucidate the role of the terphenyl ligand, in which the flanking aryls were shown to interact very weakly with the chromium atoms, causing only a small weakening of the quintuple bond. A 2007 theoretical study identified two global minima for quintuple bonded RMMR compounds: a trans-bent molecular geometry and, surprisingly, another trans-bent geometry with the R substituent in a bridging position.
In 2005, a quintuple bond was postulated to exist in the hypothetical uranium molecule U2 based on computational chemistry. Diuranium compounds are rare, but do exist; for example, the anion.
In 2007 the shortest-ever metal–metal bond (180.28 pm) was reported to exist also in a compound containing a quintuple chromium-chromium bond with diazadiene bridging ligands. Other metal–metal quintuple bond containing complexes that have been reported include quintuply bonded dichromium with [6-(2,4,6-triisopropylphenyl)pyridin-2-yl](2,4,6-trimethylphenyl)amine bridging ligands and a dichromium complex with amidinate bridging ligands.
Synthesis of quintuple bonds is usu |
https://en.wikipedia.org/wiki/Subfactor | In the theory of von Neumann algebras, a subfactor of a factor $M$ is a subalgebra that is a factor and contains $1$. The theory of subfactors led to the discovery of the
Jones polynomial in knot theory.
Index of a subfactor
Usually $M$ is taken to be a factor of type $\mathrm{II}_1$, so that it has a finite trace.
In this case every Hilbert space module $H$ has a dimension $\dim_M H$ which is a non-negative real number or $+\infty$.
The index $[M:N]$ of a subfactor $N$ is defined to be $\dim_N L^2(M)$. Here $L^2(M)$ is the representation
of $N$ obtained from the GNS construction of the trace of $M$.
Jones index theorem
This states that if $N$ is a subfactor of $M$ (both of type $\mathrm{II}_1$) then the index $[M:N]$ is either of the form $4\cos^2(\pi/n)$ for $n = 3, 4, 5, \ldots$, or is at least $4$. All these values occur.
The first few values of $4\cos^2(\pi/n)$ are $1,\ 2,\ \tfrac{3+\sqrt{5}}{2} \approx 2.618,\ 3,\ 3.247\ldots$, approaching the limit $4$.
Basic construction
Suppose that $N$ is a subfactor of $M$, and that both are finite von Neumann algebras.
The GNS construction produces a Hilbert space $L^2(M)$ acted on by $M$
with a cyclic vector $\Omega$. Let $e_N$ be the projection onto the subspace $N\Omega$. Then $M$ and $e_N$ generate a new von Neumann algebra $\langle M, e_N \rangle$ acting on $L^2(M)$, containing $M$ as a subfactor. The passage from the inclusion of $N$ in $M$ to the inclusion of $M$ in $\langle M, e_N \rangle$ is called the basic construction.
If $N$ and $M$ are both factors of type $\mathrm{II}_1$ and $N$ has finite index in $M$ then $\langle M, e_N \rangle$ is also of type $\mathrm{II}_1$.
Moreover the inclusions have the same index: $[M:N] = [\langle M, e_N \rangle : M]$, and $\operatorname{tr}(e_N) = [M:N]^{-1}$.
Jones tower
Suppose that $N \subset M$ is an inclusion of type $\mathrm{II}_1$ factors of finite index. By iterating the basic construction we get a tower of inclusions

$$M_0 \subset M_1 \subset M_2 \subset M_3 \subset \cdots$$

where $M_0 = N$ and $M_1 = M$, and each $M_{n+1} = \langle M_n, e_{n+1} \rangle$ is generated by the previous algebra and a projection. The union of all these algebras has a tracial state $\operatorname{tr}$ whose restriction to each $M_n$ is the tracial state, and so the closure of the union is another type $\mathrm{II}_1$ von Neumann algebra $M_\infty$.
The algebra $M_\infty$ contains a sequence of projections $e_1, e_2, e_3, \ldots,$ which satisfy the Temperley–Lieb relations at parameter $\lambda = [M:N]^{-1}$. Moreover, the algebra generated by the $e_n$ is a $C^*$-algebra in which the $e_n$ are self-adjoint, and such that $e_n x = x e_n$ when $x$ is in the algebra generated by $e_1$ up to $e_{n-2}$. Whenever these extra conditions are satisfied, the algebra is c |
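For reference, the Temperley–Lieb relations satisfied by these projections take the following standard form, with parameter $\lambda = [M:N]^{-1}$:

$$e_i^2 = e_i = e_i^*, \qquad e_i e_{i \pm 1} e_i = \lambda\, e_i, \qquad e_i e_j = e_j e_i \quad \text{for } |i - j| \ge 2.$$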
https://en.wikipedia.org/wiki/Masked-man%20fallacy | In philosophical logic, the masked-man fallacy (also known as the intensional fallacy or epistemic fallacy) is committed when one makes an illicit use of Leibniz's law in an argument. Leibniz's law states that if A and B are the same object, then A and B are indiscernible (that is, they have all the same properties). By modus tollens, this means that if one object has a certain property, while another object does not have the same property, the two objects cannot be identical. The fallacy is "epistemic" because it posits an immediate identity between a subject's knowledge of an object and the object itself, failing to recognize that Leibniz's Law is not capable of accounting for intensional contexts.
Examples
The name of the fallacy comes from the example:
Premise 1: I know who Flint is.
Premise 2: I do not know who the masked man is.
Conclusion: Therefore, Flint is not the masked man.
The premises may be true and the conclusion false if Flint is the masked man and the speaker does not know that. Thus the argument is a fallacious one.
In symbolic form, the above arguments are
Premise 1: I know who X is.
Premise 2: I do not know who Y is.
Conclusion: Therefore, X is not Y.
Note, however, that this syllogism takes place within the reasoning of the speaker "I"; therefore, in formal modal logic form, it becomes
Premise 1: The speaker believes he knows who X is.
Premise 2: The speaker believes he does not know who Y is.
Conclusion: Therefore, the speaker believes X is not Y.
Premise 1 is a very strong one. It is very likely a false belief, as ignorance of a proposition does not imply that its negation is true.
Another example:
Premise 1: Lois Lane thinks Superman can fly.
Premise 2: Lois Lane thinks Clark Kent cannot fly.
Conclusion: Therefore Superman and Clark Kent are not the same person.
Expressed in doxastic logic, with $\mathcal{B}_L$ for "Lois Lane believes that", $F(x)$ for "$x$ can fly", $s$ for Superman, and $c$ for Clark Kent, the above syllogism is:
Premise 1: $\mathcal{B}_L F(s)$
Premise 2: $\mathcal{B}_L \lnot F(c)$
C |
https://en.wikipedia.org/wiki/Ethnomedicine | Ethnomedicine is a study or comparison of the traditional medicine based on bioactive compounds in plants and animals and practiced by various ethnic groups, especially those with little access to western medicines, e.g., indigenous peoples. The word ethnomedicine is sometimes used as a synonym for traditional medicine.
Ethnomedical research is interdisciplinary; in its study of traditional medicines, it applies the methods of ethnobotany and medical anthropology. Often, the medicine traditions it studies are preserved only by oral tradition. In addition to plants, some of these traditions involve significant interactions with insects on the Indian subcontinent, in Africa, and elsewhere around the globe.
Scientific ethnomedical studies constitute either anthropological research or drug discovery research. Anthropological studies examine the cultural perception and context of a traditional medicine. Ethnomedicine has been used as a starting point in drug discovery, specifically those using reverse pharmacological techniques.
Ethnopharmacology
Ethnopharmacology is a related field which studies ethnic groups and their use of plant compounds. It is linked to pharmacognosy, phytotherapy (study of medicinal plants) use and ethnobotany, as this is a source of lead compounds for drug discovery. Emphasis has long been on traditional medicines, although the approach also has proven useful to the study of modern pharmaceuticals.
It involves studies of the:
identification and ethnotaxonomy (cognitive categorisation) of the natural material from which the candidate compound will be produced
traditional preparation of the pharmaceutical forms
bio-evaluation of the possible pharmacological action of such preparations (ethnopharmacology)
their potential for clinical effectiveness
socio-medical aspects implied in the uses of these compounds (medical anthropology).
See also
Ayurveda
Ethnobotany
Herbalism
Pharmacognosy
Shamanism
Traditional medicine |
https://en.wikipedia.org/wiki/ABINIT | ABINIT is an open-source suite of programs for materials science, distributed under the GNU General Public License. ABINIT implements density functional theory, using a plane wave basis set and pseudopotentials, to compute the electronic density and derived properties of materials ranging from molecules to surfaces to solids. It is developed collaboratively by researchers throughout the world.
An easy-to-use, web-based graphical version, which provides access to a limited subset of ABINIT's functionality, is available for free use through nanoHUB.
The latest version 9.10.3 was released on June 24, 2023.
Overview
ABINIT implements density functional theory by solving the Kohn–Sham equations describing the electrons in a material, expanded in a plane wave basis set and using a self-consistent conjugate gradient method to determine the energy minimum. Computational efficiency is achieved through the use of fast Fourier transforms, and pseudopotentials to describe core electrons. As an alternative to standard norm-conserving pseudopotentials, the projector augmented-wave method may be used. In addition to total energy, forces and stresses are also calculated so that geometry optimizations and ab initio molecular dynamics may be carried out. Materials that can be treated by ABINIT include insulators, metals, and magnetically ordered systems including Mott-Hubbard insulators.
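For reference, the eigenvalue problem and the plane-wave expansion described above take the following standard form (Kohn–Sham equations in Hartree atomic units; added for clarity, not ABINIT-specific notation):

$$ \Big( -\tfrac{1}{2}\nabla^2 + v_{\text{eff}}[n](\mathbf{r}) \Big)\, \psi_{n\mathbf{k}}(\mathbf{r}) = \varepsilon_{n\mathbf{k}}\, \psi_{n\mathbf{k}}(\mathbf{r}), \qquad \psi_{n\mathbf{k}}(\mathbf{r}) = \sum_{\frac{1}{2}|\mathbf{k}+\mathbf{G}|^2 \le E_{\text{cut}}} c_{n\mathbf{k}}(\mathbf{G})\, e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}}, $$

where the sum over reciprocal-lattice vectors G is truncated by the kinetic-energy cutoff E_cut.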
Derived properties
In addition to computing the electronic ground state of materials, ABINIT implements density functional perturbation theory to compute response functions, sketched schematically after the list below, including
Phonons
Dielectric response
Born effective charges and IR oscillator strength tensor
Response to strain and elastic properties
Nonlinear responses, including piezoelectric response, Raman cross sections, and electro-optic response.
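Schematically, density functional perturbation theory obtains such responses as second derivatives of the total energy with respect to pairs of perturbations, for example atomic displacements u and a homogeneous electric field ℰ (a hedged sketch; sign and normalization conventions vary between formulations):

$$ C_{\kappa\alpha,\kappa'\beta}(\mathbf{q}) = \frac{\partial^2 E_{\text{tot}}}{\partial u_{\kappa\alpha}(\mathbf{q})^{*}\, \partial u_{\kappa'\beta}(\mathbf{q})}, \qquad Z^{*}_{\kappa,\beta\alpha} = -\frac{\partial^2 E_{\text{tot}}}{\partial \mathcal{E}_{\beta}\, \partial u_{\kappa\alpha}}, $$

where C gives the interatomic force constants (and hence phonons) and Z* the Born effective charges.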
ABINIT can also compute excited state properties via
time-dependent density functional theory
many-body perturbation theory, using the GW approximation and the Bethe–Salpeter equation (sketched below). |
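Schematically, the GW approximation replaces the Kohn–Sham exchange–correlation potential by a nonlocal, energy-dependent self-energy Σ = iGW, yielding the quasiparticle equation (standard many-body notation, added for clarity):

$$ \Big( -\tfrac{1}{2}\nabla^2 + v_{\text{ext}} + v_{\text{H}} \Big)\, \psi_i(\mathbf{r}) + \int \Sigma\big(\mathbf{r}, \mathbf{r}'; \varepsilon_i^{\text{QP}}\big)\, \psi_i(\mathbf{r}')\, d\mathbf{r}' = \varepsilon_i^{\text{QP}}\, \psi_i(\mathbf{r}). $$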
https://en.wikipedia.org/wiki/Behavior%20modification | Behavior modification is a treatment approach that uses respondent and operant conditioning to change behavior. Based on methodological behaviorism, overt behavior is modified with consequences, including positive and negative reinforcement contingencies to increase desirable behavior, as well as positive and negative punishment and/or extinction to reduce problematic behavior. It also uses flooding and desensitization to combat phobias.
Applied behavior analysis (ABA)—the application of behavior analysis—is a contemporary application based on radical behaviorism, which refers to B. F. Skinner's viewpoint that cognition and emotions are covert behavior subject to the same conditions as overt behavior.
Description and history
The first use of the term behavior modification appears to have been by Edward Thorndike in 1911. His article Provisional Laws of Acquired Behavior or Learning makes frequent use of the term "modifying behavior". Through early research in the 1940s and 1950s the term was used by Joseph Wolpe's research group. The experimental tradition in clinical psychology used it to refer to psychotherapeutic techniques derived from empirical research. In the 1960s, behavior modification operated on a stimulus–response–reinforcement (S–R–SR) framework, emphasizing 'transactional' explanations of behavior. It has since come to refer mainly to techniques for increasing adaptive behavior through reinforcement and decreasing maladaptive behavior through extinction or punishment (with emphasis on the former).
In recent years, the concept of punishment has had many critics, though these criticisms tend not to apply to negative punishment (time-outs) and usually apply to the addition of some aversive event. The use of positive punishment by board certified behavior analysts is restricted to extreme circumstances when all other forms of treatment have failed and when the behavior to be modified is a danger to the person |
https://en.wikipedia.org/wiki/Dotfuscator | Dotfuscator is a tool performing a combination of code obfuscation, optimization, shrinking, and hardening on .NET, Xamarin and Universal Windows Platform apps. Ordinarily, .NET executables can easily be reverse engineered by free tools (such as ILSpy, dotPeek and JustDecompile), potentially exposing algorithms and intellectual property (trade secrets), licensing and security mechanisms. Also, code can be run through a debugger and its data inspected. Dotfuscator can make all of these things more difficult.
Dotfuscator was developed by PreEmptive Solutions. A free version of the .NET Obfuscator, called the Dotfuscator Community Edition, is distributed as part of Microsoft's Visual Studio. However, the current version is free for personal, non-commercial use only. |