| source | text |
|---|---|
https://en.wikipedia.org/wiki/Digital%20Devil%20Story%3A%20Megami%20Tensei | Digital Devil Story: Megami Tensei refers to two distinct role-playing video games based on a trilogy of science fantasy novels by Japanese author Aya Nishitani. One version was developed by Atlus and published by Namco in 1987 for the Famicom—Atlus would go on to create further games in the Megami Tensei franchise. A separate version for personal computers was developed and published by Telenet Japan with assistance from Atlus during the same year.
The story sees Japanese high school students Akemi Nakajima and Yumiko Shirasagi combat the forces of Lucifer, unleashed by a demon summoning program created by Nakajima. The gameplay features first-person dungeon crawling and turn-based battles or negotiation with demons in the Famicom version, and a journey through a hostile labyrinth as Nakajima featuring real-time combat in the Telenet version.
Development on both versions of the video game began as part of a multimedia expansion of Nishitani's book series. Nishitani was deeply involved with the design and scenario. The gameplay mechanics in Atlus' role-playing version of the game were based on the Wizardry series, but with an added demon negotiation system considered revolutionary for the time. Atlus and Telenet Japan worked on their projects simultaneously, playing against genre expectations for their respective platforms. The Famicom version proved the more popular with both critics and players, leading to the development of the 1990 Famicom sequel Digital Devil Story: Megami Tensei II. An enhanced port of both games for the Super Famicom was released in 1995.
Gameplay
The Famicom version of Digital Devil Story: Megami Tensei is a traditional role-playing video game in which the player takes control of a party composed of two humans and a number of demons. The party explores a large dungeon using a first-person perspective. The human characters use a variety of weapons and items, with the primary weapons being swords and guns. The items, which can range from h |
https://en.wikipedia.org/wiki/UltraMon | UltraMon is a commercial application for Microsoft Windows users who use multiple displays. UltraMon is developed by Realtime Soft, a small software development company based in Bern, Switzerland.
UltraMon currently contains the following features:
Two additional title bar buttons for managing windows among the monitors
Customizable button location
A taskbar on each additional monitor that displays tasks on that monitor
Pre-defined application window placement
Display profiles for multiple pre-defined display settings
Spannable wallpaper option
Different wallpapers for different monitors
Advanced multiple-monitor screensaver management
Display mirroring (forces software rendering)
Overcoming Windows' limit of 10 displays
UltraMon is distributed as trialware, requiring the user to purchase the software after a 30-day trial period.
UltraMon 3.3.0 is available with full Windows XP, Vista, 7, and 8 support.
See also
Multi-monitor |
https://en.wikipedia.org/wiki/WFNA%20%28TV%29 | WFNA (channel 55) is a television station licensed to Gulf Shores, Alabama, United States, serving as the CW outlet for southwest Alabama and northwest Florida. It is owned and operated by network majority owner Nexstar Media Group alongside Mobile-licensed CBS affiliate WKRG-TV (channel 5). The two stations share studios with several radio stations owned by iHeartMedia on Broadcast Drive in southwest Mobile; WFNA's transmitter is located in unincorporated Baldwin County near Spanish Fort, Alabama.
History
Prior to the station's sign-on, WFNA's call letters were originally planned to be WGMP (standing for "Gulf Shores, Mobile, Pensacola"). The station first signed on the air as WBPG on September 2, 2001; it replaced WFGX (channel 35) as the area's WB affiliate after the station reverted to independent status four days earlier on August 31. The station was originally owned by Pegasus Broadcasting. At the time, WFGX's signal was all but unviewable over-the-air on the Alabama side of the market, but WBPG's signal decently covered the entire market.
In 2003, Emmis Communications purchased the station, which created a duopoly with Fox affiliate WALA-TV (channel 10); WBPG's operations were subsequently merged with WALA at the latter station's facility on Satchel Paige Drive. LIN TV Corporation acquired WALA-TV on November 30, 2005; instead of acquiring WBPG directly along with it, the company instead began to operate the station under a local marketing agreement. Just over seven months later, on July 7, 2006, LIN purchased WBPG outright.
On January 24, 2006, CBS Corporation and Time Warner announced the shutdown of both UPN and The WB effective that fall. In place of these two networks, a new "fifth" network—"The CW Television Network" (its name representing the first initials of parent companies CBS and Warner Bros.), jointly owned by both companies, would launch, with a lineup primarily featuring the most popular programs from both networks. WBPG joined The CW on Se |
https://en.wikipedia.org/wiki/Mathematical%20Kangaroo | Mathematical Kangaroo (also known as Kangaroo challenge, or jeu-concours Kangourou in French) is an international mathematics competition in over 77 countries. There are six levels of participation, ranging from grade 1 to grade 12. The competition is held annually on the third Thursday of March. The challenge consists of problems in multiple-choice form that are not standard textbook problems and come from a variety of topics. Besides basic computational skills, they require inspiring ideas, perseverance, creativity and imagination, logical thinking, and other problem-solving strategies. Often there are small stories, intriguing problems, and surprising results, which encourage discussions with friends and family.
It had over 6 million participants from 57 countries in 2014. In 2022, it had 84 participating countries and claimed to be the largest competition for school students in the world.
History
Mathematicians in Australia came up with the idea to organize a competition that underlines the joy of mathematics and encourages mathematical problem-solving. A multiple-choice competition was created, which has been taking place in Australia since 1978. At the same time, both in France and all over the world, a widely supported movement emerged towards the popularization of mathematics. The idea of a multiple-choice competition then sprouted from two French teachers, André Deledicq and Jean Pierre Boudine, who visited their Australian colleagues Peter O'Halloran and Peter Taylor and witnessed their competition. In 1990, they decided to start a challenge in France under the name Kangourou des Mathématiques in order to pay tribute to their Australian colleagues. The particularity of this challenge was the desire for massive distribution of documentation, offering a gift to each participant (books, small games, fun objects, scientific and cultural trips). The first Kangaroo challenge took place on May 15, 1991. Since it was immediately very successful, shortly afterwa |
https://en.wikipedia.org/wiki/Veve | A veve (also spelled vèvè or vevè) is a religious symbol commonly used in different branches of Vodun throughout the African diaspora, such as Haitian Vodou and Louisiana Voodoo. The veve acts as a "beacon" for the lwa, and serves as a lwa's representation during rituals.
Veves should not be confused with the patipembas used in Palo, nor the pontos riscados used in Umbanda and Quimbanda, as these are separate African religions.
History
Possible origins include the cosmogram of the Kongo people and the Nsibidi system of writing used for the Igboid and Ekoid languages of West and Central Africa.
Function
According to Milo Rigaud, "The veves represent figures of the astral forces... In the course of Vodou ceremonies, the reproduction of the astral forces represented by the veves obliges the lwa... to descend to earth."
Every lwa has their own unique veve, although regional differences have led to different veves for the same lwa in some cases. Sacrifices and offerings are usually placed upon them, with food and drink being most commonly used.
Presentation
In ritual and other formalities, veve is usually drawn on the floor by strewing a powder-like substance, commonly cornmeal, wheat flour, bark, red brick powder, or gunpowder, though the material depends entirely upon the ritual. In Haitian Vodou, a mixture of cornmeal and wood ash is used.
Veves use symbolism to communicate which spirit is being called upon. For example, gatekeeper Papa Legba is invoked with a vèvè that features a walking cane, indicating his jolly grandpa-like demeanor. The illustration also features coded images that reflect the matrilineal and patrilineal culture of the artist, providing information about their ancestral lineage. Offerings will typically be given; in Louisiana Voodoo, this would entail a cup of coffee and/or candies associated with the spirit.
The spirit is generally meant to be invoked in the central cross of the veve.
Veve can be made into screenprint, pai |
https://en.wikipedia.org/wiki/Continuous%20obsolescence | Continuous obsolescence or perpetual revolution is a phenomenon where industry trends, or other items that do not immediately correspond to technical needs, mandate a continual readaptation of a system. Such work does not increase the usefulness of the system, but is required for the system to continue fulfilling its functions.
Unintentional reasons
Continuous obsolescence may be unintentional. One type of largely unintentional case of continuous obsolescence occurs when the rising demand for graphics- and experience-intensive video games collides with a long development time for a new title. While a game may promise to be acceptable or even revolutionary if released on schedule, a delay exposes it to the risk of being unable to compete with better games released during the delay (e.g. Daikatana), or of being continually rewritten to take advantage of better technologies as they become available (e.g. Duke Nukem Forever). This last behavior is an example of a software development anti-pattern.
Intentional reasons
Continuous obsolescence may also be intentional, for example when an application tries to include compatibility for the output of another widely used application. In this case, the software house responsible for the latter may vary its output format repeatedly, forcing the developer of the former to continuously expend resources to keep its compatibility up-to-date, rather than using those resources to expand features or otherwise make the product more competitive. Many accuse Microsoft of doing exactly this with the file formats used by its Office application suite.
See also
Planned obsolescence |
https://en.wikipedia.org/wiki/Temperature%20gradient%20gel%20electrophoresis | Temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE) are forms of electrophoresis which use either a temperature or chemical gradient to denature the sample as it moves across an acrylamide gel. TGGE and DGGE can be applied to nucleic acids such as DNA and RNA, and (less commonly) proteins. TGGE relies on temperature dependent changes in structure to separate nucleic acids. DGGE separates genes of the same size based on their different denaturing ability which is determined by their base pair sequence. DGGE was the original technique, and TGGE a refinement of it.
History
DGGE was invented by Leonard Lerman, while he was a professor at SUNY Albany.
The same equipment can be used for analysis of protein, which was first done by Thomas E. Creighton of the MRC Laboratory of Molecular Biology, Cambridge, England. Similar looking patterns are produced by proteins and nucleic acids, but the fundamental principles are quite different.
TGGE was first described by Thatcher and Hodson and by Roger Wartell of Georgia Tech. Extensive work was done by the group of Riesner in Germany. Commercial equipment for DGGE is available from Bio-Rad, INGENY and CBS Scientific; a system for TGGE is available from Biometra.
Temperature gradient gel electrophoresis
DNA has a negative charge and so will move to the positive electrode in an electric field. A gel is a molecular mesh, with holes roughly the same size as the diameter of the DNA string. When an electric field is applied, the DNA will begin to move through the gel, at a speed roughly inversely proportional to the length of the DNA molecule (shorter lengths of DNA travel faster) — this is the basis for size dependent separation in standard electrophoresis.
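As a toy numerical sketch of the inverse-length relationship described above (a simplified model with an assumed mobility constant, not a physical simulation of gel electrophoresis):

```python
# Toy model of size-dependent separation in standard electrophoresis.
# Assumption: migration speed is inversely proportional to fragment
# length; the constant k is arbitrary and purely illustrative.
def migration_distance(length_bp, time, k=1.0e5):
    """Distance migrated by a DNA fragment of the given length (bp)."""
    return k * time / length_bp

fragments = [100, 500, 1000]  # fragment lengths in base pairs
distances = {bp: migration_distance(bp, time=1.0) for bp in fragments}

# Shorter fragments travel farther in the same time, so fragments of
# different sizes separate into distinct bands.
assert distances[100] > distances[500] > distances[1000]
```

The ordering of the computed distances is what produces the familiar banding pattern: each band collects fragments of (approximately) one size.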
In TGGE there is also a temperature gradient across the gel. At room temperature, the DNA will exist stably in a double-stranded form. As the temperature is increased, the strands begin to separate (melting), and the speed at whic |
https://en.wikipedia.org/wiki/Simon%20P.%20Norton | Simon Phillips Norton (28 February 1952 – 14 February 2019) was a mathematician in Cambridge, England, who worked on finite simple groups.
Education
Simon Norton was born into a Sephardi family of Iraqi descent, the youngest of three brothers.
From 1964 he was a King's Scholar at Eton College, where he earned a reputation as an eccentric mathematical genius and was taught by Norman Routledge. He obtained an external first-class degree in Pure Mathematics at the University of London while still at the school, commuting to Royal Holloway College.
He also represented the United Kingdom at the International Mathematical Olympiad three times consecutively, starting in 1967, winning a gold medal each time and two special prizes, in 1967 and 1969.
He then went up to Trinity College, Cambridge, and achieved a first in the final examinations.
Career and life
He stayed at Cambridge, working on finite groups. Norton was one of the authors of the ATLAS of Finite Groups. He constructed the Harada–Norton group and in 1979 together with John Conway proved there is a connection between the Monster group and the j-function in number theory. They dubbed this "monstrous moonshine", and made some conjectures later proved by Richard Borcherds. Norton also made several early discoveries in Conway's Game of Life, and invented the game Snort.
In 1985, Cambridge University did not renew his contract.
Norton is the subject of the biography The Genius in My Basement, written by his Cambridge tenant, Alexander Masters, which describes his eccentric lifestyle and his life-long obsession with buses. He was also an occasional contributor to Word Ways: The Journal of Recreational Linguistics.
Norton was very interested in transport issues and was a member of Subterranea Britannica. He coordinated the local group of the Campaign for Better Transport (United Kingdom), and had done so since the organisation was known as Transport 2000, writing most of the newsletter for the local Cambridge group a |
https://en.wikipedia.org/wiki/Earth%20Prime | Earth Prime (or Earth-Prime) is a term sometimes used in works of speculative fiction involving parallel universes or a multiverse, most notably in DC Comics. It refers either to the universe containing "our" Earth, or to a parallel world with a bare minimum of divergence points from Earth as we know it, often the absence or near-absence of metahumans, or with their existence confined to fictional narratives like comics. The "Earth Prime" of a given fictional setting may or may not have an intrinsic value or vital connection to the other Earths it exists alongside, although such Prime Earths, and sometimes the 'central universes' in which they exist, are often portrayed as vital to the existence of the other Earths.
DC Comics
In the DC Multiverse, Earth-Prime is the true Earth from which all the other worlds within the Multiverse originate, the "actual" reality where the readers of DC Comics live (and where DC Comics operates as a publisher), and an Earth where all superheroes are fictional. Earth-Prime does, however, become an alternate reality in its first appearance in The Flash #179 (May 1968), when the Flash accidentally travels there from Earth-One after being pushed by a creature called The Nok. The Flash, stranded, contacts then-DC Comics editor Julius Schwartz, who helps him construct a cosmic treadmill to return to Earth-One. Eventually, it was stated that the writers of DC Comics on Earth-Prime subconsciously base their stories on the adventures of the heroes of Earth-One and Earth-Two.
In The Flash #228 (July/Aug 1974), Earth-Prime's Cary Bates travels to Earth-One, where he discovers that the stories he writes are not only based on events on Earth-One, but can actually influence these events as well. This power turns for the worse in Justice League of America #123 (October 1975), when Bates is accidentally transported to Earth-Two. The interdimensional trip temporarily turns Bates i |
https://en.wikipedia.org/wiki/Cis-3-Hexenal | cis-3-Hexenal, also known as (Z)-3-hexenal and leaf aldehyde, is an organic compound with the formula CH3CH2CH=CHCH2CHO. It is classified as an unsaturated aldehyde. It is a colorless liquid and an aroma compound with an intense odor of freshly cut grass and leaves.
Occurrence
It is one of the major volatile compounds in ripe tomatoes, although it tends to isomerize into the conjugated trans-2-hexenal. It is produced in small amounts by most plants and it acts as an attractant to many predatory insects. It is also a pheromone in many insect species.
See also
cis-3-hexen-1-ol has a similar but weaker odor and is used in flavors and perfumes.
1-Hexanol, another volatile organic compound, also considered responsible for the freshly mowed grass odor
External links
Hexenal |
https://en.wikipedia.org/wiki/Alfred%20Sturtevant | Alfred Henry Sturtevant (November 21, 1891 – April 5, 1970) was an American geneticist. Sturtevant constructed the first genetic map of a chromosome in 1911. Throughout his career he worked on the organism Drosophila melanogaster with Thomas Hunt Morgan. By watching the development of flies in which the earliest cell division produced two different genomes, he measured the embryonic distance between organs in a unit which is called the sturt in his honor. On February 13, 1968, Sturtevant received the 1967 National Medal of Science from President Lyndon B. Johnson.
Biography
Alfred Henry Sturtevant was born in Jacksonville, Illinois, United States on November 21, 1891, the youngest of Alfred Henry and Harriet Sturtevant's six children. His grandfather Julian Monson Sturtevant, a Yale University graduate, was a founding professor and second president of Illinois College, where his father taught mathematics.
When Sturtevant was seven years old, his father quit his teaching job and moved the family to Alabama to pursue farming. Sturtevant attended a one-room schoolhouse until entering high school in Mobile. In 1908, he enrolled at Columbia University. During this time, he lived with his older brother Edgar, a linguist, who taught nearby. Edgar taught Alfred about scholarship and research.
As a child, Sturtevant had created pedigrees of his father's horses. While in college, he read about Mendelism, which piqued Sturtevant's interest because it could explain the traits expressed in the horse pedigrees. He further pursued his interest in genetics under Thomas Hunt Morgan, who encouraged him to publish a paper of his pedigrees shown through Mendelian genetics. In 1914, Sturtevant completed his doctoral work under Morgan as well.
After earning his doctorate, Sturtevant stayed at Columbia as a research investigator for the Carnegie Institution of Washington. He joined Morgan's research team in the "fly room", in which huge advances were being made in the study of ge |
https://en.wikipedia.org/wiki/Freezer%20burn | Freezer burn is a condition that occurs when frozen food has been damaged by dehydration and oxidation due to air reaching the food. It is generally caused by food not being securely wrapped in air-tight packaging.
Freezer burn appears as grayish-brown leathery spots on frozen food and occurs when air reaches the food's surface and dries the product. Color changes result from chemical changes in the food's pigment. Freezer burn does not make the food unsafe; it merely causes dry spots in foods. The food remains usable and edible, but removing the freezer-burned areas will improve the flavor.
The dehydration of freezer-burned food is caused by water sublimating from the food into the surrounding atmosphere. The lost water may then be deposited elsewhere in the food and packaging as snow-like crystals.
See also
Freeze drying
Ice crystals |
https://en.wikipedia.org/wiki/Sandvine | Sandvine Incorporated is an application and network intelligence company based in Waterloo, Ontario.
Sandvine markets network policy control products that are designed to implement broad network policies, including Internet censorship, congestion management, and security. Sandvine's products target Tier 1 and Tier 2 networks for consumers, including cable, DSL, and mobile.
Operation
Sandvine classifies application traffic across mobile and local networks by user, device, network type, location and other parameters. The company then applies machine learning-based analytics to real-time data and makes technical policy changes.
As of 2021, Sandvine has over 500 customers globally.
Company history
Sandvine was formed in August, 2001 in Waterloo, Ontario, by a team of approximately 30 people from PixStream, a then-recently closed company acquired by Cisco. An initial round of VC funding launched the company with $20 million CDN. A subsequent round of financing of $19 million (CDN) was completed in May 2005. In March 2006 Sandvine completed an initial public offering on the London AIM exchange under the ticker 'SAND'. In October 2006 Sandvine completed an initial public offering on the Toronto Stock Exchange under the ticker 'SVC'.
Initial product sales focused on congestion management and fair usage as service providers struggled with the rapid growth in broadband traffic. As fiber rollouts and 4G networks became more prevalent, the company's application optimization and monetization use cases were adopted by many customers. This allowed service providers to deliver usage and application-based plans, zero-rate applications, reduce fraud, and introduce security and parental controls as a way to generate new revenues.
In June 2007 Sandvine acquired CableMatrix Technologies for its PacketCable Multimedia (PCMM)-based PCRF, which enables broadband operators to increase subscriber satisfaction while delivering media-rich IP applications and services such as SIP telephony, |
https://en.wikipedia.org/wiki/Sporophyll | A sporophyll is a leaf that bears sporangia. Both microphylls and megaphylls can be sporophylls. In heterosporous plants, sporophylls (whether they are microphylls or megaphylls) bear either megasporangia and thus are called megasporophylls, or microsporangia and are called microsporophylls. The overlap of the prefixes and roots makes these terms a particularly confusing subset of botanical nomenclature.
Sporophylls vary greatly in appearance and structure, and may or may not look similar to sterile leaves. Plants that produce sporophylls include:
Alaria esculenta, a brown alga which shows sporophylls attached near the base of the alga.
Lycophytes, where sporophylls may be aggregated into strobili (Selaginella and some Lycopodium and related genera) or distributed singly among sterile leaves (Huperzia). Sporangia are borne in the axil or on the adaxial surface of the sporophyll. In heterosporous members, megasporophylls and microsporophylls may be intermixed or separated in a variety of patterns.
Ferns, which may produce sporophylls that are similar to sterile fronds or that appear very different from sterile fronds. These may be non-photosynthetic and lack typical pinnae, e.g. Onoclea sensibilis.
Cycads produce strobili, both pollen-producing and seed-producing, that are composed of sporophylls.
Ginkgo produces microsporophylls aggregated into a pollen strobilus. Ovules are not borne on sporophylls.
Gymnosperms, like Ginkgo and cycads, produce microsporophylls, aggregated into pollen strobili. However, unlike these other groups, ovules are produced on cone scales, which are modified shoots rather than sporophylls.
Some plants do not produce sporophylls; their sporangia are produced directly on stems. Psilotum has been interpreted as producing sporangia (fused in a synangium) on the terminus of a stem. Equisetum always produces strobili, but the structures bearing sporangia (sporangiophores) have been interpreted as modified stems. The sporangia, despite being rec |
https://en.wikipedia.org/wiki/Lamella%20%28materials%29 | A lamella is a small plate or flake, from the Latin, and may also refer to collections of fine sheets of material held adjacent to one another in a gill-shaped structure, often with fluid in between, though sometimes simply a set of 'welded' plates. The term is used in biological contexts to describe thin membranes or plates of tissue. In the context of materials science, the microscopic structures in bone and nacre are called lamellae. Moreover, the term lamella is often used to describe the crystal structure of some materials.
Uses of the term
In surface chemistry (especially mineralogy and materials science), lamellar structures are fine layers, alternating between different materials. They can be produced by chemical effects (as in eutectic solidification), biological means, or a deliberate process of lamination, such as pattern welding. Lamellae can also describe the layers of atoms in the crystal lattices of materials such as metals.
In surface anatomy, a lamella is a thin plate-like structure, often one amongst many lamellae very close to one another, with open space between.
In chemical engineering, the term is used for devices such as filters and heat exchangers.
In mycology, a lamella (or gill) is a papery hymenophore rib under the cap of some mushroom species, most often agarics.
The term has been used to describe the construction of lamellar armour, as well as the layered structures that can be described by a lamellar vector field.
In medical professions, especially orthopedic surgery, the term is used to refer to 3D printed titanium technology which is used to create implantable medical devices (in this case, orthopedic implants).
In the context of water treatment, lamellar filters may be referred to as plate filters or tube filters.
The term is also used to describe a certain type of ichthyosis, a congenital skin condition. Lamellar ichthyosis often presents with a "collodion" membrane at birth. It is characterized by generalized dark |
https://en.wikipedia.org/wiki/Dioecy | Dioecy (adj. dioecious) is a characteristic of certain species that have distinct unisexual individuals, each producing either male or female gametes, either directly (in animals) or indirectly (in seed plants). Dioecious reproduction is biparental reproduction. Dioecy has costs, since only the female part of the population directly produces offspring. It is one method for excluding self-fertilization and promoting allogamy (outcrossing), and thus tends to reduce the expression of recessive deleterious mutations present in a population. Plants have several other methods of preventing self-fertilization, including, for example, dichogamy, herkogamy, and self-incompatibility.
In zoology
In zoology, dioecy means that an animal is either male or female, in which case the synonym gonochory is more often used. For example, most animal species are gonochoric, almost all vertebrate species are gonochoric, and all bird and mammal species are gonochoric. Dioecy may also describe colonies within a species, such as the colonies of Siphonophorae (Portuguese man-of-war), which may be either dioecious or monoecious.
In botany
Land plants (embryophytes) differ from animals in that their life cycle involves alternation of generations. In animals, typically an individual produces gametes of one kind, either sperm or egg cells. The gametes have half the number of chromosomes of the individual producing them, so are haploid. Without further dividing, a sperm and an egg cell fuse to form a zygote that develops into a new individual. In land plants, by contrast, one generation – the sporophyte generation – consists of individuals that produce haploid spores rather than haploid gametes. Spores do not fuse, but germinate by dividing repeatedly by mitosis to give rise to haploid multicellular individuals, the gametophytes, which produce gametes. A male gamete and a female gamete then fuse to produce a new diploid sporophyte.
In bryophytes (mosses, liverworts and hornworts), the g |
https://en.wikipedia.org/wiki/Filum%20terminale | The filum terminale ("terminal thread") is a delicate strand of fibrous tissue, about 20 cm in length, extending inferiorly from the apex of the conus medullaris to attach onto the coccyx. The filum terminale acts to anchor the spinal cord and spinal meninges inferiorly.
The upper portion of the filum terminale is formed by spinal pia mater within a dilated dural sac, while the lower portion is formed by both pia and dura mater (with the outer dural layer closely adhering to the inner pial component).
Anatomy
The proximal/superior part, the filum terminale internum (pial part of the terminal filum), measures 15 cm in length and extends as far as the inferior border of the second sacral vertebra (S2), the inferior limit of the sacral canal. It is composed of the vestiges of neural tissue, connective tissue, and neuroglial tissue lined by pia mater. It is contained within a tubular sheath of the dura mater and is surrounded by the nerves of the cauda equina, from which it can readily be distinguished by its bluish-white color.
The inferior/distal part, the filum terminale externum (dural part of the terminal filum, or coccygeal ligament), is formed as the filum terminale internum reaches the inferior extremity of the dural sac; from that point onward, the filum terminale is invested by a layer of dura mater.
The filum terminale ultimately terminates inferiorly by attaching to the dorsum of the coccyx at the first coccygeal segment, blending with the coccygeal periosteum.
Relations
The filum terminale is situated centrally amid the spinal nerve roots of the cauda equina (but is not itself a part of the cauda equina).
The inferior-most spinal nerve, the coccygeal nerve, leaves the spinal cord at the level of the conus medullaris via respective vertebrae through their intervertebral foramina, superior to the filum terminale. However, adhering to the outer surface of the filum terminale are a few strands of nerve fibres which probably represent rudimentary second and third coccyge |
https://en.wikipedia.org/wiki/Fluorescence-lifetime%20imaging%20microscopy | Fluorescence-lifetime imaging microscopy or FLIM is an imaging technique based on the differences in the exponential decay rate of the photon emission of a fluorophore from a sample. It can be used as an imaging technique in confocal microscopy, two-photon excitation microscopy, and multiphoton tomography.
The fluorescence lifetime (FLT) of the fluorophore, rather than its intensity, is used to create the image in FLIM. The fluorescence lifetime depends on the local micro-environment of the fluorophore, which precludes erroneous measurements caused by changes in the brightness of the light source, background light intensity, or limited photo-bleaching. This technique also has the advantage of minimizing the effect of photon scattering in thick layers of sample. Because it depends on the micro-environment, the lifetime has been used as an indicator for pH, viscosity, and chemical species concentration.
Fluorescence lifetimes
A fluorophore which is excited by a photon will drop to the ground state with a certain probability based on the decay rates through a number of different (radiative and/or nonradiative) decay pathways. To observe fluorescence, one of these pathways must be by spontaneous emission of a photon. In the ensemble description, the fluorescence emitted will decay with time according to
I(t) = I₀ exp(−t/τ),
where
τ = 1 / Σᵢ kᵢ.
In the above, t is time, τ is the fluorescence lifetime, I₀ is the initial fluorescence at t = 0, and the kᵢ are the rates for each decay pathway, at least one of which must be the fluorescence decay rate k_f. More importantly, the lifetime τ is independent of the initial intensity and of the emitted light. This can be utilized for making non-intensity-based measurements in chemical sensing.
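The decay relation can be sketched numerically (the rate values below are illustrative, not from the article): the lifetime set by the summed pathway rates is unchanged when the initial intensity is rescaled, which is what makes lifetime contrast robust to brightness variations.

```python
import math

def intensity(t, i0, rates):
    """Mono-exponential ensemble decay I(t) = i0 * exp(-t / tau),
    with tau the reciprocal of the summed pathway decay rates."""
    tau = 1.0 / sum(rates)
    return i0 * math.exp(-t / tau)

# Hypothetical pathway rates in ns^-1: one radiative (k_f) and one
# non-radiative channel. These numbers are illustrative only.
rates = [0.25, 0.15]
tau = 1.0 / sum(rates)   # lifetime = 2.5 ns

# Rescaling the initial intensity rescales the curve but leaves the
# decay constant unchanged: the normalized decay is identical.
ratio_bright = intensity(1.0, 1000.0, rates) / 1000.0
ratio_dim = intensity(1.0, 10.0, rates) / 10.0
```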
Measurement
Fluorescence-lifetime imaging yields images with the intensity of each pixel determined by , which allows one to view contrast between materials with different fluorescence decay rates (even if those materials fluoresce at exactly the same wavelength) |
https://en.wikipedia.org/wiki/Symmetric%20space | In mathematics, a symmetric space is a Riemannian manifold (or more generally, a pseudo-Riemannian manifold) whose group of symmetries contains an inversion symmetry about every point. This can be studied with the tools of Riemannian geometry, leading to consequences in the theory of holonomy; or algebraically through Lie theory, which allowed Cartan to give a complete classification. Symmetric spaces commonly occur in differential geometry, representation theory and harmonic analysis.
In geometric terms, a complete, simply connected Riemannian manifold is a symmetric space if and only if its curvature tensor is invariant under parallel transport. More generally, a Riemannian manifold (M, g) is said to be symmetric if and only if, for each point p of M, there exists an isometry of M fixing p and acting on the tangent space as minus the identity (every symmetric space is complete, since any geodesic can be extended indefinitely via symmetries about the endpoints). Both descriptions can also naturally be extended to the setting of pseudo-Riemannian manifolds.
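As a concrete illustration of this definition (a standard textbook example, not taken from the article's text), Euclidean space carries such an isometry at every point:

```latex
% Geodesic symmetry of Euclidean space \mathbb{R}^n at a point p:
s_p(q) = 2p - q .
% A geodesic through p is a straight line \gamma(t) = p + tv, and
s_p(\gamma(t)) = 2p - (p + tv) = p - tv = \gamma(-t),
% so s_p reverses geodesics through p and (d s_p)_p = -\mathrm{id}.
```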
From the point of view of Lie theory, a symmetric space is the quotient G/H of a connected Lie group G by a Lie subgroup H which is (a connected component of) the fixed-point group of an involution of G. This definition includes more than the Riemannian definition, and reduces to it when H is compact.
Riemannian symmetric spaces arise in a wide variety of situations in both mathematics and physics. Their central role in the theory of holonomy was discovered by Marcel Berger. They are important objects of study in representation theory and harmonic analysis as well as in differential geometry.
Geometric definition
Let M be a connected Riemannian manifold and p a point of M. A diffeomorphism f of a neighborhood of p is said to be a geodesic symmetry if it fixes the point p and reverses geodesics through that point, i.e. if γ is a geodesic with γ(0) = p, then f(γ(t)) = γ(−t). It follows that the derivative of the map f at p is minus the identity on the tangent space. |
https://en.wikipedia.org/wiki/Appendage | An appendage (or outgrowth) is an external body part, or natural prolongation, that protrudes from an organism's body.
In arthropods, an appendage refers to any of the homologous body parts that may extend from a body segment, including antennae, mouthparts (including mandibles, maxillae and maxillipeds), gills, locomotor legs (pereiopods for walking, and pleopods for swimming), sexual organs (gonopods), and parts of the tail (uropods). Typically, each body segment carries one pair of appendages. An appendage which is modified to assist in feeding is known as a maxilliped or gnathopod.
In vertebrates, an appendage can refer to a locomotor part such as a tail, fins on a fish, limbs (legs, flippers or wings) on a tetrapod; exposed sex organ; defensive parts such as horns and antlers; or sensory organs such as auricles, proboscis (trunk and snout) and barbels.
Appendages may be uniramous, as in insects and centipedes, where each appendage comprises a single series of segments; or biramous, as in many crustaceans, where each appendage branches into two sections. Triramous (branching into three) appendages are also possible.
All arthropod appendages are variations of the same basic structure (homologous), and which structure is produced is controlled by "homeobox" genes. Changes to these genes have allowed scientists to produce animals (chiefly Drosophila melanogaster) with modified appendages, such as legs instead of antennae. |
https://en.wikipedia.org/wiki/Richard%20Goldschmidt | Richard Benedict Goldschmidt (April 12, 1878 – April 24, 1958) was a German geneticist. He is considered the first to attempt to integrate genetics, development, and evolution. He pioneered understanding of reaction norms, genetic assimilation, dynamical genetics, sex determination, and heterochrony. Controversially, Goldschmidt advanced a model of macroevolution through macromutations popularly known as the "Hopeful Monster" hypothesis.
Goldschmidt also described the nervous system of the nematode, a piece of work that influenced Sydney Brenner to study the "wiring diagram" of Caenorhabditis elegans, winning Brenner and his colleagues the Nobel Prize in 2002.
Childhood and education
Goldschmidt was born in Frankfurt-am-Main, Germany to upper-middle class parents of Ashkenazi Jewish heritage. He had a classical education and entered the University of Heidelberg in 1896, where he became interested in natural history. From 1899 Goldschmidt studied anatomy and zoology at the University of Heidelberg with Otto Bütschli and Carl Gegenbaur. He received his Ph.D. under Bütschli in 1902, studying development of the trematode Polystomum.
Career
In 1903 Goldschmidt began working as an assistant to Richard Hertwig at the University of Munich, where he continued his work on nematodes and their histology, including studies of the nervous system development of Ascaris and the anatomy of Amphioxus. He founded the histology journal Archiv für Zellforschung while working in Hertwig's laboratory. Under Hertwig's influence, he also began to take an interest in chromosome behavior and the new field of genetics.
In 1909 Goldschmidt became professor at the University of Munich and, inspired by Wilhelm Johannsen's genetics treatise Elemente der exakten Erblichkeitslehre, began to study sex determination and other aspects of the genetics of Lymantria dispar, the gypsy moth, of which he was crossbreeding different races. He observed various stages of their sexual development, and |
https://en.wikipedia.org/wiki/Supergene | A supergene is a chromosomal region encompassing multiple neighboring genes that are inherited together because of close genetic linkage, i.e. much less recombination than would normally be expected. This mode of inheritance can be due to genomic rearrangements between supergene variants.
A supergene region can contain a few functionally related genes that clearly contribute to a shared phenotype.
Phenotypes encoded by supergenes
Supergenes have cis-effects due to multiple loci (which may be within a gene, or within a single gene's regulatory region), and tight linkage. They are classically polymorphic, whereby different supergene variants code for different phenotypes.
Classic supergenes include many sex chromosomes, the Primula heterostyly locus, which controls "pin" and "thrum" types, and the locus controlling Batesian mimetic polymorphism in Papilio memnon butterflies. Recently discovered supergenes are responsible for complex phenotypes, including the color morphs of the white-throated sparrow.
Primula supergene. Pin and thrum morphs of Primula have effects on genetic compatibility (pin style x thrum pollen, or thrum style x pin pollen matings are successful, while pin x pin, and thrum x thrum matings are rarely successful due to pollen-style incompatibility), and have different style length, anther height in the corolla tube, pollen size, and papilla size on the stigma. Each of these effects is controlled by a different locus in the same supergene, but recombinants are occasionally found with traits combining those of "pin" and "thrum" morphs.
Origin
The earliest use of the term "supergene" may be in an article by A. Ernst (1936) in the journal Archiv der Julius Klaus-Stiftung für Vererbungsforschung, Sozialanthropologie und Rassenhygiene.
Classically, supergenes were hypothesized to have evolved from less tightly-linked genes coming together via chromosomal rearrangement or reduced crossing over, due to selection for particular multilocus phenotypes. For insta |
https://en.wikipedia.org/wiki/Cribriform%20plate | In mammalian anatomy, the cribriform plate (Latin for lit. sieve-shaped), horizontal lamina or lamina cribrosa is part of the ethmoid bone. It is received into the ethmoidal notch of the frontal bone and roofs in the nasal cavities. It supports the olfactory bulb, and is perforated by olfactory foramina for the passage of the olfactory nerves to the roof of the nasal cavity to convey smell to the brain. The foramina at the medial part of the groove allow the passage of the nerves to the upper part of the nasal septum while the foramina at the lateral part transmit the nerves to the superior nasal concha.
A fractured cribriform plate can result in olfactory dysfunction, septal hematoma, cerebrospinal fluid rhinorrhoea (CSF rhinorrhoea), and possibly infection which can lead to meningitis. CSF rhinorrhoea (clear fluid leaking from the nose) is very serious and considered a medical emergency. Aging can cause the openings in the cribriform plate to close, pinching olfactory nerve fibers. A reduction in olfactory receptors, loss of blood flow, and thick nasal mucus can also cause an impaired sense of smell.
Structure
The cribriform plate is part of the ethmoid bone, which has a low density, and is spongy. It is narrow, with deep grooves supporting the olfactory bulb.
Its anterior border, short and thick, articulates with the frontal bone. It has two small projecting alae (wings), which are received into corresponding depressions in the frontal bone to complete the foramen cecum.
Its sides are smooth, and sometimes bulging due to the presence of a small air sinus in the interior.
The crista galli projects upwards from the middle line of the cribriform plate. The long thin posterior border of the crista galli serves for the attachment of the falx cerebri. On either side of the crista galli, the cribriform plate is narrow and deeply grooved. At the front part of the cribriform plate, on either side of the crista galli, is a small fissure that is occupied by a process |
https://en.wikipedia.org/wiki/14%3A9%20aspect%20ratio | 14:9 (1.56:1) is a compromise aspect ratio between 4:3 and 16:9. It is used to create an acceptable picture on both 4:3 and 16:9 TVs, conceived following audience tests conducted by the BBC. It has been used by most UK, Irish, French, Spanish, Colombian and Australian terrestrial analogue networks, and in the US on Discovery Networks' HD simulcast channels with programming and advertising originally compiled in 4:3. Note that 14:9 is not a shooting format; 14:9 material is almost always derived from either a 16:9 or 4:3 shot, and no televisions have ever been made in 14:9.
Usage
With native 16:9 material
A common usage is for material shot in 16:9 format. During production, the important action is kept within the centre of the picture, known as the 14:9 safe area. When the material is broadcast in a 4:3 format (such as for analog television), the sides of the image are cropped to 14:9 and narrow black bars are added to the top and bottom. It is considered that viewers who are not used to wide-screen will find this less distracting than the letterbox format that would result from broadcasting the full 16:9 picture in analogue, while still seeing more of the picture than would be visible if cropped into 4:3. When the same material is broadcast in 16:9 (such as for digital television), the full 16:9 frame is left intact, but widescreen signaling auxiliary signals tell the receiver that the picture is suitable for cropping to 14:9 if necessary (for example, when the receiver is connected to a 4:3 display).
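The arithmetic behind this compromise can be sketched as follows (an illustrative calculation assuming square pixels; the 768×576 display size is a hypothetical example, not from the article):

```python
from fractions import Fraction

def centre_crop_width(height, num, den):
    """Pixel width of a centred crop with aspect ratio num:den."""
    return int(height * Fraction(num, den))

def letterbox_bars(display_w, display_h, num, den):
    """Per-edge black-bar height when a num:den picture is fitted
    to the full width of a display."""
    pic_h = round(display_w * den / num)
    return (display_h - pic_h) // 2

# Cropping a 1920x1080 (16:9) master to 14:9 keeps the middle 1680 columns.
w = centre_crop_width(1080, 14, 9)          # 1080 * 14/9 = 1680

# On a square-pixel 4:3 display (768x576), the 14:9 picture leaves only
# thin 41-pixel bars, versus 72-pixel bars for the full 16:9 letterbox.
bars_149 = letterbox_bars(768, 576, 14, 9)
bars_169 = letterbox_bars(768, 576, 16, 9)
```

This makes the trade-off concrete: 14:9 shows more of the widescreen frame than a 4:3 crop would, while producing much smaller black bars than a full 16:9 letterbox.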
The major benefit in shooting 16:9 with protection for 14:9 (rather than 4:3) is improving the usable screen area for titles, logos and scrolling text. The visible enhancement is significant due to the restrictive requirements of overscan. When shooting in 16:9 for potential 4:3 distribution, the "Shoot And Protect" method (from the BBC's "Widescreen Book") is employed. As the name suggests, footage is shot in 16:9 but important visual information is protected inside |
https://en.wikipedia.org/wiki/Spectroscopic%20notation | Spectroscopic notation provides a way to specify atomic ionization states, atomic orbitals, and molecular orbitals.
Ionization states
Spectroscopists customarily refer to the spectrum arising from a given ionization state of a given element by the element's symbol followed by a Roman numeral. The numeral I is used for spectral lines associated with the neutral element, II for those from the first ionization state, III for those from the second ionization state, and so on. For example, "He I" denotes lines of neutral helium, and "C IV" denotes lines arising from the third ionization state, C3+, of carbon. This notation is used, for example, to retrieve data from the NIST Atomic Spectra Database.
Atomic and molecular orbitals
Before atomic orbitals were understood, spectroscopists discovered various distinctive series of spectral lines in atomic spectra, which they identified by letters. These letters were later associated with the azimuthal quantum number, ℓ. The letters, "s", "p", "d", and "f", for the first four values of ℓ were chosen to be the first letters of properties of the spectral series observed in alkali metals. Other letters for subsequent values of ℓ were assigned in alphabetical order, omitting the letter "j" because some languages do not distinguish between the letters "i" and "j":
{| class="wikitable"
|- align="center"
! width="40px" | letter !! name !! width="30px" | ℓ
|- align="center"
| s || align="left" | sharp || 0
|- align="center"
| p || align="left" | principal || 1
|- align="center"
| d || align="left" | diffuse || 2
|- align="center"
| f || align="left" | fundamental || 3
|- align="center"
| g || || 4
|- align="center"
| h || || 5
|- align="center"
| i || || 6
|- align="center"
| k || || 7
|- align="center"
| l || || 8
|- align="center"
| m || || 9
|- align="center"
| n || || 10
|- align="center"
| o || || 11
|- align="center"
| q || || 12
|- align="center"
| r || || 13
|- align="center"
| t || || 14
|- align="center"
| u || || 15
|- align="center"
|
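The lettering rule in the table above (s, p, d, f, then alphabetical order, omitting "j" and any letter already used) can be expressed programmatically; this is an illustrative helper, not standard library code:

```python
def orbital_letter(l):
    """Spectroscopic letter for azimuthal quantum number l: the historical
    s, p, d, f, then alphabetical order, skipping 'j' and letters used."""
    letters = ['s', 'p', 'd', 'f']
    c = 'g'
    while len(letters) <= l:
        if c != 'j' and c not in letters:
            letters.append(c)
        c = chr(ord(c) + 1)
    return letters[l]

# Reproduces the table: 'p' and 's' are skipped on the alphabetical pass
# because they were already used, so l = 12 gives 'q' and l = 14 gives 't'.
sequence = [orbital_letter(l) for l in range(16)]
```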
https://en.wikipedia.org/wiki/Cedric%20Smith%20%28statistician%29 | Cedric Austen Bardell Smith (5 February 1917 – 10 January 2002) was a British statistician and geneticist. Smith was born in Leicester. He was the younger son of John Bardell Smith (1876–1950), a mechanical engineer, and Ada (née Horrocks; 1876–1969). He was educated at Wyggeston Grammar School for Boys until 1929, when the family moved to London. His education continued at Bec School, Tooting, for three years, then at University College School, London. In 1935, although having failed his Higher School Certificate, he was awarded an exhibition to Trinity College, Cambridge. He graduated in the Mathematical Tripos, with a First in Part II in 1937 and a Distinction in Part III in 1938. Following graduation he began postgraduate research, taking his PhD in 1942.
Work on combinatorics
While a student at Cambridge, Smith became close friends with three other students at Trinity College, R. L. Brooks, A. H. Stone and W. T. Tutte. Together they tackled a number of problems in the mathematical field of combinatorics and devised an imaginary mathematician, 'Blanche Descartes', under which name to publish their work. The group studied dissections of rectangles into squares, especially the 'perfect' squared square, a square that is divided into a number of smaller squares, no two of which are the same size. Publications under the name of 'Blanche Descartes' or 'F. de Carteblanche' continued to appear into the 1980s. The group also published more mainstream articles under their own names, the final one being R.L. Brooks, C.A.B. Smith, A.H. Stone and W.T. Tutte, 'Determinants and current flows in electric networks', Discrete Math., Vol. 100 (1992).
World War II
During World War II, as a Quaker and conscientious objector, Smith joined the Friends Relief Service; he worked as a hospital porter at Addenbrooke's Hospital in Cambridge. Smith's pacifist views saw him develop an interest in peace studies. Among other responsibilities for the Society of Friends, he was a member of the |
https://en.wikipedia.org/wiki/Cosmic%20latte | Cosmic latte is the average color of the universe as perceived from the Earth, found by a team of astronomers from Johns Hopkins University (JHU). In 2002, Karl Glazebrook and Ivan Baldry determined that the average color of the universe was a greenish white, but they soon corrected their analysis in a 2003 paper in which they reported that their survey of the light from over 200,000 galaxies averaged to a slightly beigeish white. The hex triplet value for cosmic latte is #FFF8E7.
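The hex triplet decodes to 8-bit RGB components as follows (a small illustrative helper, not from the article):

```python
def hex_to_rgb(triplet):
    """Decode a '#RRGGBB' hex triplet into an (R, G, B) tuple of ints."""
    h = triplet.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

# Cosmic latte, #FFF8E7: a near-white with a slight warm (yellow-red) cast.
cosmic_latte = hex_to_rgb('#FFF8E7')
```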
Discovery of the color
Finding the average color of the universe was not the focus of the study. Rather, the study examined spectral analysis of different galaxies to study star formation. Like Fraunhofer lines, the dark lines in the study's spectral ranges reveal older and younger stars, and allowed Glazebrook and Baldry to determine the ages of different galaxies and star systems. The study revealed that the overwhelming majority of stars formed about 5 billion years ago. Because these stars would have been "brighter" in the past, the color of the universe changes over time, shifting from blue to red as more blue stars change to yellow and eventually red giants.
As light from distant galaxies reaches the Earth, the average "color of the universe" (as seen from Earth) tends towards pure white, due to the light coming from the stars when they were much younger and bluer.
Naming the color
The corrected color was initially published on the Johns Hopkins University (JHU) News website and updated on the team's initial announcement. Multiple news outlets, including NPR and BBC, displayed the color in stories and some relayed the request by Glazebrook on the announcement asking for suggestions for names, jokingly adding all were welcome as long as they were not "beige".
These were the results of a vote of the JHU astronomers involved based on the new color:
Though Drum's suggestion of "cappuccino cosmico" received the most votes, the researchers favored Drum's other suggest |
https://en.wikipedia.org/wiki/Multiplicative%20quantum%20number | In quantum field theory, multiplicative quantum numbers are conserved quantum numbers of a special kind. A given quantum number q is said to be additive if in a particle reaction the sum of the q-values of the interacting particles is the same before and after the reaction. Most conserved quantum numbers are additive in this sense; the electric charge is one example. A multiplicative quantum number q is one for which the corresponding product, rather than the sum, is preserved.
Any conserved quantum number is a symmetry of the Hamiltonian of the system (see Noether's theorem). Symmetry groups which are examples of the abstract group called Z2 give rise to multiplicative quantum numbers. This group consists of an operation, P, whose square is the identity, P2 = 1. Thus, all symmetries which are mathematically similar to parity (physics) give rise to multiplicative quantum numbers.
In principle, multiplicative quantum numbers can be defined for any abelian group. An example would be to trade the electric charge, Q, (related to the abelian group U(1) of electromagnetism), for the new quantum number exp(2iπ Q). Then this becomes a multiplicative quantum number by virtue of the charge being an additive quantum number. However, this route is usually followed only for discrete subgroups of U(1), of which Z2 finds the widest possible use.
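The trade between additive and multiplicative quantum numbers can be checked numerically (an illustrative toy, not from the article): the product of the phases exp(2πiq) is conserved exactly when the sum of the q-values is.

```python
import cmath
import math

def multiplicative(q):
    """Trade an additive quantum number q for the phase exp(2*pi*i*q)."""
    return cmath.exp(2j * math.pi * q)

# Additive: the total charge of a state is the sum over its particles.
charges = [1, -1, 0]        # an illustrative three-particle state
total_q = sum(charges)

# Multiplicative: the product of per-particle phases equals the phase of
# the total charge, so the product is conserved whenever the sum is.
product = 1 + 0j
for q in charges:
    product *= multiplicative(q)

# Z2 case: parity eigenvalues are +1 or -1 and combine by multiplication,
# with P applied twice giving the identity.
parities = [+1, -1, -1]
total_parity = math.prod(parities)
```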
See also
Parity, C-symmetry, T-symmetry and G-parity |
https://en.wikipedia.org/wiki/K%C5%8Dsaku%20Yosida | Kōsaku Yosida was a Japanese mathematician who worked in the field of functional analysis. He is known for the Hille–Yosida theorem concerning C0-semigroups. Yosida studied mathematics at the University of Tokyo, and held posts at Osaka and Nagoya Universities. In 1955, Yosida returned to the University of Tokyo.
See also
Einar Carl Hille
Functional analysis |
https://en.wikipedia.org/wiki/List%20of%20wort%20plants | This is an alphabetical listing of wort plants, meaning plants that employ the syllable wort in their English-language common names.
According to the Oxford English Dictionary's Ask Oxford site, "A word with the suffix -wort is often very old. The Old English word was wyrt. The modern variation, root, comes from Old Norse. It was often used in the names of herbs and plants that had medicinal uses, the first part of the word denoting the complaint against which it might be specially efficacious. By the middle of the 17th century, -wort was beginning to fade from everyday use."
The Naturalist Newsletter states, "Wort derives from the Old English wyrt, which simply meant plant. The word goes back even further, to the common ancestor of English and German, to the Germanic wurtiz. Wurtiz also evolved into the modern German word Wurzel, meaning root."
Wort plants
Adderwort, adder's wort - Persicaria bistorta.
American lungwort - Mertensia virginica.
Asterwort - Any composite plant of the family Asteraceae.
Awlwort - Subularia aquatica. The plant bears awl-shaped leaves.
Banewort - Ranunculus flammula or Atropa belladonna
Barrenwort - Epimedium, especially Epimedium alpinum.
Bearwort - Meum athamanticum
Bellwort - Uvularia or plants in the family Campanulaceae.
Birthwort - Aristolochia. Also, birthroot (Trillium erectum).
Birthwort - Aristolochiaceae, the birthwort family.
Bishop's wort - Stachys officinalis. Also, fennel flower.
Bitterwort - Gentiana lutea.
Bladderwort - Utricularia (aquatic plants).
Blawort - A flower, commonly called harebell. Also, a certain plant bearing blue flowers.
Bloodwort - Sanguinaria canadensis. Produces escharotic alkaloids that corrode skin, leaving wounds. More commonly known as bloodroot, or sometimes tetterwort.
Blue navelwort - Cynoglossum omphaloides
Blue throatwort - Trachelium caeruleum.
Blushwort - A member of the gentian family. Shame flower.
Bogwort - The bilberry or whortleberry.
Bollockwort - A Middle English name for some types |
https://en.wikipedia.org/wiki/Prince%20of%20Wales%27s%20feathers | The Prince of Wales's feathers are the heraldic badge of the Prince of Wales, the heir to the British throne. The badge consists of three white ostrich feathers encircled by a gold coronet. A ribbon below the coronet bears the German motto Ich dien ("I serve"). As well as being used in royal heraldry, the feathers are sometimes used to symbolise Wales itself, particularly in Welsh rugby union and Welsh regiments of the British Army.
Bearers of the motif
The feathers are the badge of the heir apparent to the British throne regardless of whether or not the Prince of Wales title is held.
House of Plantagenet
The ostrich feathers heraldic motif is generally traced back to Edward, the Black Prince (1330–1376), eldest son and heir apparent of King Edward III of England. The Black Prince bore (as an alternative to his paternal arms) a shield of Sable, three ostrich feathers argent, described as his "shield for peace", probably meaning the shield he used for jousting. These arms appear several times on his chest tomb in Canterbury Cathedral, alternating with his paternal arms (the royal arms of King Edward III differenced by a label of three points argent). The Black Prince also used heraldic badges of one or more ostrich feathers in various other contexts.
The feathers had first appeared at the time of the marriage of Edward III to Philippa of Hainault, and Edward III himself occasionally used ostrich feather badges. It is therefore likely that the Black Prince inherited the badge from his mother, descended from the Counts of Hainault, whose eldest son bore the title "Count of Ostrevent", the ostrich (Old French spellings including ostruce) feathers being possibly an heraldic pun on that name. Alternatively, the badge may have derived from the Counts of Luxembourg, from whom Philippa was also descended, who had used the badge of an ostrich. Sir Roger de Clarendon, an illegitimate son of the Black Prince by his mistress Edith Willesford, bore arms of Or, on a bend sable th |
https://en.wikipedia.org/wiki/%C3%98ystein%20Ore | Øystein Ore (7 October 1899 – 13 August 1968) was a Norwegian mathematician known for his work in ring theory, Galois connections, graph theory, and the history of mathematics.
Life
Ore graduated from the University of Oslo in 1922 with a Cand.Real. degree in mathematics. In 1924, the University of Oslo awarded him the Ph.D. for a thesis titled Zur Theorie der algebraischen Körper, supervised by Thoralf Skolem. Ore also studied at Göttingen University, where he learned Emmy Noether's new approach to abstract algebra. He was also a fellow at the Mittag-Leffler Institute in Sweden, and spent some time at the University of Paris. In 1925, he was appointed research assistant at the University of Oslo.
Yale University’s James Pierpont went to Europe in 1926 to recruit research mathematicians. In 1927, Yale hired Ore as an assistant professor of mathematics, promoted him to associate professor in 1928, then to full professor in 1929. In 1931, he became a Sterling Professor (Yale's highest academic rank), a position he held until he retired in 1968.
Ore gave an American Mathematical Society Colloquium lecture in 1941 and was a plenary speaker at the International Congress of Mathematicians in 1936 in Oslo. He was also elected to the American Academy of Arts and Sciences and the Oslo Academy of Science. He was a founder of the Econometric Society.
Ore visited Norway nearly every summer. During World War II, he was active in the "American Relief for Norway" and "Free Norway" movements. In gratitude for the services rendered to his native country during the war, he was decorated in 1947 with the Order of St. Olav.
In 1930, Ore married Gudrun Lundevall. They had two children. Ore had a passion for painting and sculpture, collected ancient maps, and spoke several languages.
Work
Ore is known for his work in ring theory, Galois connections, and most of all, graph theory.
His early work was on algebraic number fields, how to decompose the ideal generated by a prime number |
https://en.wikipedia.org/wiki/Gabriel%20Andrew%20Dirac | Gabriel Andrew Dirac (13 March 1925 – 20 July 1984) was a Hungarian-British mathematician who mainly worked in graph theory. He served as Erasmus Smith's Professor of Mathematics at Trinity College Dublin from 1964 to 1966. In 1952, he gave a sufficient condition for a graph to contain a Hamiltonian circuit. The previous year, he conjectured that n points in the plane, not all collinear, must span at least ⌊n/2⌋ two-point lines, where ⌊n/2⌋ is the largest integer not exceeding n/2. This conjecture was proven true for sufficiently large n by Green and Tao in 2012.
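The conjecture can be checked on small configurations with a brute-force count of the two-point ("ordinary") lines; the sketch and the example point set below are illustrative, not from any source:

```python
from itertools import combinations
from math import gcd

def ordinary_lines(points):
    """Count the lines determined by an integer point set that pass
    through exactly two of the points (the "two-point" lines)."""
    lines = {}
    for (x1, y1), (x2, y2) in combinations(points, 2):
        # Line through the two points as a*x + b*y = c, normalized so
        # that pairs on the same line hash to the same key.
        a, b = y2 - y1, x1 - x2
        c = a * x1 + b * y1
        g = gcd(gcd(abs(a), abs(b)), abs(c))
        a, b, c = a // g, b // g, c // g
        if a < 0 or (a == 0 and b < 0):
            a, b, c = -a, -b, -c
        lines.setdefault((a, b, c), set()).update({(x1, y1), (x2, y2)})
    return sum(1 for pts in lines.values() if len(pts) == 2)

# Five points, not all collinear: the conjectured bound guarantees at
# least floor(5/2) = 2 two-point lines; this configuration has more.
pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
count = ordinary_lines(pts)
```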
Education
Dirac started his studies at St John's College, Cambridge in 1942, but that same year wartime service took him into the aircraft industry. He received his MA in 1949, and moved to the University of London, where he obtained his Ph.D. under Richard Rado with the thesis "On the Colouring of Graphs: Combinatorial topology of Linear Complexes".
Career
Dirac's main academic positions were at the King's College London (1948-1954), University of Toronto (1952-1953), University of Vienna (1954-1958), University of Hamburg (1958-1963), Trinity College Dublin (Erasmus Smith's Professor of Mathematics, 1964-1966), University of Wales at Swansea (1967-1970), and Aarhus University (1970-1984).
Family
He was born Balázs Gábor in Budapest, to Richárd Balázs, a military officer and businessman, and Margit "Manci" Wigner (sister of Eugene Wigner). When his mother married Paul Dirac in 1937, he and his sister resettled in England and were formally adopted, changing their family name to Dirac. He married Rosemari Dirac and they had four children together: Meike, Barbara, Holger and Annette.
See also
Dirac's theorem on Hamiltonian cycles
Dirac's theorem on chordal graphs
Dirac's theorem on cycles in k-connected graphs
Notes |
https://en.wikipedia.org/wiki/Mitogen | A mitogen is a small bioactive protein or peptide that induces a cell to begin cell division, or enhances the rate of division (mitosis). Mitogenesis is the induction (triggering) of mitosis, typically via a mitogen. The mechanism of action of a mitogen is that it triggers signal transduction pathways involving mitogen-activated protein kinase (MAPK), leading to mitosis.
The cell cycle
Mitogens act primarily by influencing a set of proteins which are involved in restricting progression through the cell cycle. The G1 checkpoint is controlled most directly by mitogens: beyond it, further cell cycle progression does not need mitogens to continue. The point at which mitogens are no longer needed to move the cell cycle forward is called the "restriction point", and passage through it depends on cyclins. One of the most important of the proteins involved is the product of TP53, the gene encoding the tumor-suppressor protein p53. It, combined with the Ras pathway, downregulates cyclin D1, an activator of cyclin-dependent kinases, when the cell is not stimulated by mitogens. In the presence of mitogens, sufficient cyclin D1 can be produced. This process cascades onwards, producing other cyclins which stimulate the cell sufficiently to allow cell division. While animals produce internal signals that can drive the cell cycle forward, external mitogens can cause it to progress without these signals.
Endogenous mitogens
Mitogens can be either endogenous or exogenous factors. Endogenous mitogens control cell division as a normal and necessary part of the life cycle of multicellular organisms. For example, in zebrafish, an endogenous mitogen Nrg1 is produced in response to indications of heart damage. When it is expressed, it causes the outer layers of the heart to respond by increasing division rates and producing new layers of heart muscle cells to replace the damaged ones. This pathway can potentially be deleterious, however: expressing Nrg1 in the absence of heart damage causes uncontrolled growth of |
https://en.wikipedia.org/wiki/Einar%20Hille | Carl Einar Hille (28 June 1894 – 12 February 1980) was an American mathematics professor and scholar. Hille authored or coauthored twelve mathematical books and a number of mathematical papers.
Early life and education
Hille was born in New York City. His parents were both immigrants from Sweden who separated before his birth. His father, Carl August Heuman, was a civil engineer. He was brought up by his mother, Edla Eckman, who took the surname Hille. When Einar was two years old, he and his mother returned to Stockholm. Hille spent the next 24 years of his life in Sweden, returning to the United States when he was 26 years old. Hille entered the University of Stockholm in 1911. Hille was awarded his first degree in mathematics in 1913 and the equivalent of a master's degree in the following year. He received a Ph.D. from Stockholm in 1918 for a doctoral dissertation entitled Some Problems Concerning Spherical Harmonics.
Career
In 1919 Hille was awarded the Mittag-Leffler Prize and was given the right to teach at the University of Stockholm. He subsequently taught at Harvard University, Princeton University, Stanford University and the University of Chicago. In 1933, he became an endowed professor of mathematics in the Graduate School of Yale University, retiring in 1962.
Hille's main work was on integral equations, differential equations, special functions, Dirichlet series and Fourier series. Later in his career his interests turned more towards functional analysis. His name persists, among other places, in the Hille–Yosida theorem. Hille was a member of the London Mathematical Society and the Circolo Matematico di Palermo. Hille served as president of the American Mathematical Society (1947–48) and was the Society's Colloquium lecturer in 1944. He received many honours, including election to the United States National Academy of Sciences (1953) and the Swedish Royal Academy of Sciences. Sweden awarded him the Order of the Polar Star.
Personal life
Hille |
https://en.wikipedia.org/wiki/IEEE%20P1363 | IEEE P1363 is an Institute of Electrical and Electronics Engineers (IEEE) standardization project for public-key cryptography. It includes specifications for:
Traditional public-key cryptography (IEEE Std 1363-2000 and 1363a-2004)
Lattice-based public-key cryptography (IEEE Std 1363.1-2008)
Password-based public-key cryptography (IEEE Std 1363.2-2008)
Identity-based public-key cryptography using pairings (IEEE Std 1363.3-2013)
The chair of the working group as of October 2008 is William Whyte of NTRU Cryptosystems, Inc., who has served since August 2001. Former chairs were Ari Singer, also of NTRU (1999–2001), and Burt Kaliski of RSA Security (1994–1999).
The IEEE Standard Association withdrew all of the 1363 standards except 1363.3-2013 on 7 November 2019.
Traditional public-key cryptography (IEEE Std 1363-2000 and 1363a-2004)
This specification includes key agreement, signature, and encryption schemes using several mathematical approaches: integer factorization, discrete logarithm, and elliptic curve discrete logarithm.
Key agreement schemes
DL/ECKAS-DH1 and DL/ECKAS-DH2 (Discrete Logarithm/Elliptic Curve Key Agreement Scheme, Diffie–Hellman version): This includes both traditional Diffie–Hellman and elliptic curve Diffie–Hellman.
DL/ECKAS-MQV (Discrete Logarithm/Elliptic Curve Key Agreement Scheme, Menezes–Qu–Vanstone version)
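As a sketch of the discrete-logarithm idea underlying the DH-style key agreement schemes, the toy Diffie–Hellman exchange below uses a small Mersenne prime. The parameters and helper name are illustrative assumptions, not drawn from IEEE 1363, which specifies much larger vetted domain parameters and additional validation steps.

```python
# Toy Diffie-Hellman key agreement over the integers mod p, illustrating
# the discrete-logarithm idea behind DL/ECKAS-DH1. Parameters are for
# demonstration only -- far too small for real use.
import secrets

p = 2**127 - 1   # a Mersenne prime (illustrative, not a standard group)
g = 3            # illustrative base

def keypair():
    priv = secrets.randbelow(p - 2) + 1   # random private exponent
    return priv, pow(g, priv, p)          # (private, public = g^priv mod p)

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each party raises the peer's public value to its own private exponent;
# both arrive at g^(a_priv * b_priv) mod p.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

In the actual schemes the shared secret is then passed through a key derivation function rather than used directly.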
Signature schemes
DL/ECSSA (Discrete Logarithm/Elliptic Curve Signature Scheme with Appendix): Includes four main variants: DSA, ECDSA, Nyberg-Rueppel, and Elliptic Curve Nyberg-Rueppel.
IFSSA (Integer Factorization Signature Scheme with Appendix): Includes two variants of RSA, Rabin-Williams, and ESIGN, with several message encoding methods. "RSA1 with EMSA3" is essentially PKCS#1 v1.5 RSA signature; "RSA1 with EMSA4 encoding" is essentially RSA-PSS; "RSA1 with EMSA2 encoding" is essentially ANSI X9.31 RSA signature.
DL/ECSSR (Discrete Logarithm/Elliptic Curve Signature Scheme with Recovery)
DL/ECSSR-PV (Discrete Logari |
https://en.wikipedia.org/wiki/Refinement%20%28computing%29 | Refinement is a generic term in computer science that encompasses various approaches for producing correct computer programs and simplifying existing programs to enable their formal verification.
Program refinement
In formal methods, program refinement is the verifiable transformation of an abstract (high-level) formal specification into a concrete (low-level) executable program. Stepwise refinement allows this process to be done in stages. Logically, refinement normally involves implication, but there can be additional complications.
The progressive just-in-time preparation of the product backlog (requirements list) in agile software development approaches, such as Scrum, is also commonly described as refinement.
Data refinement
Data refinement is used to convert an abstract data model (in terms of sets for example) into implementable data structures (such as arrays). Operation refinement converts a specification of an operation on a system into an implementable program (e.g., a procedure). The postcondition can be strengthened and/or the precondition weakened in this process. This reduces any nondeterminism in the specification, typically to a completely deterministic implementation.
For example, x′ ∈ {1,2,3} (where x′ is the value of the variable x after an operation) could be refined to x′ ∈ {1,2}, then x′ ∈ {1}, and implemented as x := 1. Implementations of x := 2 and x := 3 would be equally acceptable in this case, using a different route for the refinement. However, we must be careful not to refine to x′ ∈ {} (equivalent to false) since this is unimplementable; it is impossible to select a member from the empty set.
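The shrinking-set example above can be sketched programmatically. Here a specification is modelled simply as the set of values the variable may take afterwards, which captures only the postcondition-strengthening side of refinement; the `refines` helper is an illustrative name, not part of any refinement tool.

```python
# Sketch: model a specification as the set of allowed final values of x.
# A refinement step is valid when the concrete set is a nonempty subset
# of the abstract one; the empty set corresponds to the unimplementable
# specification "false".
def refines(concrete, abstract):
    return bool(concrete) and concrete <= abstract

spec  = {1, 2, 3}   # x' in {1, 2, 3}
step1 = {1, 2}      # refined to x' in {1, 2}
step2 = {1}         # refined to x' in {1}, implemented as x := 1

assert refines(step1, spec)
assert refines(step2, step1)
assert not refines(set(), step2)  # must not refine to x' in {} (false)
```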
The term reification is also sometimes used (coined by Cliff Jones). Retrenchment is an alternative technique when formal refinement is not possible. The opposite of refinement is abstraction.
Refinement calculus
Refinement calculus is a formal system (inspired from Hoare logic) that promotes program refinement. The FermaT Transformation System i |
https://en.wikipedia.org/wiki/Exercise%20intolerance | Exercise intolerance is a condition of inability or decreased ability to perform physical exercise at the normally expected level or duration for people of that age, size, sex, and muscle mass. It also includes experiences of unusually severe post-exercise pain, fatigue, nausea, vomiting or other negative effects. Exercise intolerance is not a disease or syndrome in and of itself, but can result from various disorders.
In most cases, the specific reason that exercise is not tolerated is of considerable significance when trying to isolate the cause down to a specific disease. Dysfunctions involving the pulmonary, cardiovascular or neuromuscular systems have been frequently found to be associated with exercise intolerance, with behavioural causes also playing a part.
Signs and symptoms
Exercise in this context means physical activity, not specifically exercise in a fitness program. For example, a person with exercise intolerance after a heart attack may not be able to sustain the amount of physical activity needed to walk through a grocery store or to cook a meal. In a person who does not tolerate exercise well, physical activity may cause unusual breathlessness (dyspnea), muscle pain (myalgia), abnormally rapid breathing (tachypnoea), an inappropriately rapid heart rate (tachycardia), or increasing muscle weakness and fatigue. Exercise might also result in severe headache, nausea, dizziness, occasional muscle cramps or extreme fatigue, any of which would make it intolerable.
The three most common reasons people give for being unable to tolerate a normal amount of exercise or physical activity are:
breathlessness – commonly seen in people with lung diseases, and heart disease.
fatigue – when it appears early in an exercise test, it is usually due to deconditioning (either through a sedentary lifestyle or while convalescing from a long illness), but it can indicate heart, lung or neuromuscular diseases.
pain – can be caused by a v |
https://en.wikipedia.org/wiki/C0-semigroup | In mathematics, a C0-semigroup, also known as a strongly continuous one-parameter semigroup, is a generalization of the exponential function. Just as exponential functions provide solutions of scalar linear constant coefficient ordinary differential equations, strongly continuous semigroups provide solutions of linear constant coefficient ordinary differential equations in Banach spaces. Such differential equations in Banach spaces arise from e.g. delay differential equations and partial differential equations.
Formally, a strongly continuous semigroup is a representation of the semigroup (R+, +) on some Banach space X that is continuous in the strong operator topology. Thus, strictly speaking, a strongly continuous semigroup is not a semigroup, but rather a continuous representation of a very particular semigroup.
Formal definition
A strongly continuous semigroup on a Banach space X is a map T : [0, ∞) → L(X) (where L(X) is the space of bounded operators on X)
such that
T(0) = I, (the identity operator on X)
T(s + t) = T(s) T(t), for all s, t ≥ 0
T(t)x → x as t → 0+, for every x ∈ X.
The first two axioms are algebraic, and state that T is a representation of the semigroup (R+, +); the last is topological, and states that the map t ↦ T(t)x is continuous in the strong operator topology.
Infinitesimal generator
The infinitesimal generator A of a strongly continuous semigroup T is defined by
A x = lim_{t → 0+} (1/t)(T(t)x − x)
whenever the limit exists. The domain of A, D(A), is the set of x∈X for which this limit does exist; D(A) is a linear subspace and A is linear on this domain. The operator A is closed, although not necessarily bounded, and the domain is dense in X.
The strongly continuous semigroup T with generator A is often denoted by the symbol (or, equivalently, ). This notation is compatible with the notation for matrix exponentials, and for functions of an operator defined via functional calculus (for example, via the spectral theorem).
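For a bounded generator the semigroup is just the operator exponential, and the axioms can be checked numerically. The sketch below, a plain-Python truncated Taylor series for 2×2 matrices, is illustrative only: it verifies T(0) = I and T(s + t) = T(s)T(t) up to floating-point error.

```python
# Numerical sketch: for a bounded operator A (here a 2x2 real matrix),
# T(t) = exp(tA) computed by a truncated Taylor series satisfies the
# C0-semigroup axioms T(0) = I and T(s + t) = T(s) T(t).
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def expm(A, t, terms=40):
    # partial sum of the series sum_n (tA)^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term (tA)^n / n!
    for n in range(1, terms):
        term = mat_mul(term, [[t / n * a for a in row] for row in A])
        result = mat_add(result, term)
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]   # generates rotations: exp(tA) rotates by t

T0  = expm(A, 0.0)                         # should be the identity
lhs = expm(A, 0.7)                         # T(0.3 + 0.4)
rhs = mat_mul(expm(A, 0.3), expm(A, 0.4))  # T(0.3) T(0.4)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

Since this A is bounded, the example is in fact a uniformly continuous semigroup; unbounded generators (as for PDEs) have no such convergent series.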
Uniformly continuous semigroup
A uniformly continuous semigroup is a strongly continuous semigroup T such that
lim_{t → 0+} ‖T(t) − I‖ = 0
holds. In this case, the infinitesimal generator A of T is |
https://en.wikipedia.org/wiki/Valence%20%28chemistry%29 | In chemistry, the valence (US spelling) or valency (British spelling) of an atom is a measure of its combining capacity with other atoms when it forms chemical compounds or molecules. Valence is generally understood to be the number of chemical bonds that each atom of a given element typically forms. For a specified compound the valence of an atom is the number of bonds formed by the atom. Double bonds are considered to be two bonds, and triple bonds to be three. In most compounds, the valence of hydrogen is 1, of oxygen is 2, of nitrogen is 3, and of carbon is 4. Valence is not to be confused with the related concepts of the coordination number, the oxidation state, or the number of valence electrons for a given atom.
Description
The valence is the combining capacity of an atom of a given element, determined by the number of hydrogen atoms that it combines with. In methane, carbon has a valence of 4; in ammonia, nitrogen has a valence of 3; in water, oxygen has a valence of 2; and in hydrogen chloride, chlorine has a valence of 1. Chlorine, as it has a valence of one, can be substituted for hydrogen in many compounds. Phosphorus has a valence of 3 in phosphine () and a valence of 5 in phosphorus pentachloride (), which shows that elements may exhibit more than one valence. The structural formula of a compound represents the connectivity of the atoms, with lines drawn between two atoms to represent bonds. The two tables below show examples of different compounds, their structural formulas, and the valences for each element of the compound.
Definition
Valence is defined by the IUPAC as:
The maximum number of univalent atoms (originally hydrogen or chlorine atoms) that may combine with an atom of the element under consideration, or with a fragment, or for which an atom of this element can be substituted.
An alternative modern description is:
The number of hydrogen atoms that can combine with an element in a binary hydride or twice the number of oxygen atoms combini |
https://en.wikipedia.org/wiki/Statistical%20Methods%20for%20Research%20Workers | Statistical Methods for Research Workers is a classic book on statistics, written by the statistician R. A. Fisher. It is considered by some to be one of the 20th century's most influential books on statistical methods, together with his The Design of Experiments (1935). It was originally published in 1925, by Oliver & Boyd (Edinburgh); the final and posthumous 14th edition was published in 1970.
Reviews
According to Denis Conniffe:
Ronald A. Fisher was "interested in application and in the popularization of statistical methods and his early book Statistical Methods for Research Workers, published in 1925, went through many editions and motivated and influenced the practical use of statistics in many fields of study. His Design of Experiments (1935) [promoted] statistical technique and application. In that book he emphasized examples and how to design experiments systematically from a statistical point of view. The mathematical justification of the methods described was not stressed and, indeed, proofs were often barely sketched or omitted altogether ..., a fact which led H. B. Mann to fill the gaps with a rigorous mathematical treatment in his well-known treatise, ."
Chapters
Prefaces
Introduction
Diagrams
Distributions
Tests of Goodness of Fit, Independence and Homogeneity; with table of χ2
Tests of Significance of Means, Difference of Means, and Regression Coefficients
The Correlation Coefficient
Intraclass Correlations and the Analysis of Variance
Further Applications of the Analysis of Variance
Sources Used for Data and Methods
Index
In the second edition of 1928 a chapter 9 was added: The Principles of Statistical Estimation.
See also
The Design of Experiments
Notes
Further reading
The March 1951 issue of the Journal of the American Statistical Association contains articles celebrating the 25th anniversary of the publication of the first edition.
A.W.F. Edwards (2005) "R. A. Fisher, Statistical Methods for Research Workers, 1925," in I. G |
https://en.wikipedia.org/wiki/Compound%20annual%20growth%20rate | Compound annual growth rate (CAGR) is a business and investing specific term for the geometric progression ratio that provides a constant rate of return over the time period. CAGR is not an accounting term, but it is often used to describe some element of the business, for example revenue, units delivered, registered users, etc. CAGR dampens the effect of volatility of periodic returns that can render arithmetic means irrelevant. It is particularly useful to compare growth rates from various data sets of common domain such as revenue growth of companies in the same industry or sector.
CAGR is equivalent to the more generic exponential growth rate when the exponential growth interval is one year.
Formula
CAGR is defined as:
CAGR(t0, tn) = (V(tn) / V(t0))^(1 / (tn − t0)) − 1
where V(t0) is the initial value, V(tn) is the end value, and tn − t0 is the number of years.
Actual or normalized values may be used for calculation as long as they retain the same mathematical proportion.
Example
In this example, we will compute the CAGR over a three-year period. Assume that the year-end revenues of a business over a three-year period have been:
| Year-End | 2004-12-31 | 2007-12-31 |
|---|---|---|
| Year-End Revenue | 9,000 | 13,000 |
Therefore, the CAGR of the revenues over the three-year period spanning the "end" of 2004 to the "end" of 2007 is:
CAGR = (13000 / 9000)^(1/3) − 1 ≈ 0.1304, or about 13.04% per year.
Note that this is a smoothed growth rate per year. This rate of growth would take you to the ending value, from the starting value, in the number of years given, if growth had been at the same rate every year.
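The worked example can be checked directly in code; the `cagr` helper below is just an illustrative name for the formula above.

```python
# CAGR for the example above: revenue grows from 9,000 (end of 2004)
# to 13,000 (end of 2007), i.e. over 3 years.
def cagr(initial, final, years):
    return (final / initial) ** (1.0 / years) - 1.0

rate = cagr(9000, 13000, 3)           # about 0.1304, i.e. ~13.04% per year

# Compounding the initial value at this smoothed rate for 3 years
# recovers the final value.
compounded = 9000 * (1 + rate) ** 3   # about 13,000
```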
Verification:
Multiply the initial value (2004 year-end revenue) by (1 + CAGR) three times (because we calculated for 3 years). The product will equal the year-end revenue for 2007. This shows the compound growth rate:
For n = 3: 9000 × (1 + 0.1304)^3 ≈ 13,000.
For comparison:
the Arithmetic Mean Return (AMR) would be the sum of annual revenue changes (compared with the previous year) divided by number of years, or:
In contrast to CAGR, you cannot obtain by multi |
https://en.wikipedia.org/wiki/Trigonal%20pyramidal%20molecular%20geometry | In chemistry, a trigonal pyramid is a molecular geometry with one atom at the apex and three atoms at the corners of a trigonal base, resembling a tetrahedron (not to be confused with the tetrahedral geometry). When all three atoms at the corners are identical, the molecule belongs to point group C3v. Some molecules and ions with trigonal pyramidal geometry are the pnictogen hydrides (XH3), xenon trioxide (XeO3), the chlorate ion (ClO3−), and the sulfite ion (SO32−). In organic chemistry, molecules which have a trigonal pyramidal geometry are sometimes described as sp3 hybridized. The AXE method for VSEPR theory states that the classification is AX3E1.
Trigonal pyramidal geometry in ammonia
The nitrogen in ammonia has 5 valence electrons and bonds with three hydrogen atoms to complete the octet. This would result in the geometry of a regular tetrahedron with each bond angle equal to cos−1(−1/3) ≈ 109.5°. However, the three hydrogen atoms are repelled by the electron lone pair in a way that the geometry is distorted to a trigonal pyramid (regular 3-sided pyramid) with bond angles of 107°. In contrast, boron trifluoride is flat, adopting a trigonal planar geometry because the boron does not have a lone pair of electrons. In ammonia the trigonal pyramid undergoes rapid nitrogen inversion.
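The ideal tetrahedral angle quoted above can be computed with a one-line check using only the standard library:

```python
# The ideal tetrahedral bond angle is arccos(-1/3); the lone pair in
# ammonia compresses the observed H-N-H angle below this ideal value.
import math

tetrahedral_deg = math.degrees(math.acos(-1.0 / 3.0))  # ~109.47 degrees
ammonia_deg = 107.0                                    # observed angle, from the text
assert ammonia_deg < tetrahedral_deg
```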
See also
VSEPR theory#AXE method
Molecular geometry |
https://en.wikipedia.org/wiki/EComStation | eComStation or eCS is an operating system based on OS/2 Warp for the 32-bit x86 architecture. It was originally developed by Serenity Systems and Mensys BV under license from IBM. It includes additional applications, and support for new hardware which were not present in OS/2 Warp. It is intended to allow OS/2 applications to run on modern hardware, and is used by a number of large organizations for this purpose. By 2014, approximately thirty to forty thousand licenses of eComStation had been sold.
Financial difficulties at Mensys in 2012 led to the development of eComStation stalling, and ownership being transferred to a sister company named XEU.com (now known as PayGlobal Technologies BV), who continue to sell and support the operating system. The lack of a new release since 2011 was one of the motivations for the creation of the ArcaOS OS/2 distribution.
Differences between eComStation and OS/2
Version 1 of eComStation, released in 2001, was based around the integrated OS/2 version 4.5 client Convenience Package for OS/2 Warp version 4, which was released by IBM in 2000. The latter had been made available only to holders of existing OS/2 support contracts; it included the following new features (among others) compared to the final retail version of OS/2 (1996's OS/2 Warp version 4):
IBM-supplied updates of software and components that had shipped with the 1999 release of OS/2 Warp Server for e-business, but had not been made available to users of the client version. Key among these were the JFS file system and the logical volume manager.
Operating system features and enhancements that had been made available as updates but never offered as an install-time option. These included an updated kernel, a 32-bit TCP/IP stack and associated networking utilities, a firewall, updated drivers and other system components, newer versions of Java, SciTech SNAP Graphics video support, and more.
IBM-supplied updates that had previously only been offered to customers with |
https://en.wikipedia.org/wiki/Franklin%20Stahl | Franklin (Frank) William Stahl (born October 8, 1929) is an American molecular biologist and geneticist. With Matthew Meselson, Stahl conducted the famous Meselson-Stahl experiment showing that DNA is replicated by a semiconservative mechanism, meaning that each strand of the DNA serves as a template for production of a new strand.
He is Emeritus Professor of Biology at the University of Oregon's Institute of Molecular Biology in Eugene, Oregon.
Career
Stahl, like his two older sisters, graduated from the public schools of Needham, a Boston suburb. In 1951, he was awarded an AB degree in biology from Harvard College, and matriculated in the biology department of the University of Rochester. His interest in genetics was cemented in 1952 by his introduction to bacterial viruses (phages) in a course taught by A. H. (Gus) Doermann at the Cold Spring Harbor Biological Laboratory. In 1956, he received a PhD in biology for his work with Doermann on the genetics of T4 phage. In 1955, he undertook postdoctoral studies with Giuseppe Bertani (in the Phage group) at Caltech (Pasadena) with the aim of learning some bacterial genetics. He subsequently turned his attentions to collaborations with Charley Steinberg and Matt Meselson. With Steinberg, he undertook mathematical analyses of T4 growth, mutation, and genetic recombination. With Meselson, he studied DNA replication in Escherichia coli. That study produced strong support for the semiconservative model proposed by Jim Watson and Francis Crick.
For one year, Stahl served on the zoology faculty at the University of Missouri in Columbia, Missouri before accepting, in 1959, a position in the new Institute of Molecular Biology at the University of Oregon in Eugene. In the succeeding years, his research involved the phages T4 and Lambda and the budding yeast, Saccharomyces cerevisiae, with his primary focus on genetic recombination. He taught various genetics courses at Oregon and presented phage courses in America, Italy and |
https://en.wikipedia.org/wiki/Waggle%20dance | Waggle dance is a term used in beekeeping and ethology for a particular figure-eight dance of the honey bee. By performing this dance, successful foragers can share information about the direction and distance to patches of flowers yielding nectar and pollen, to water sources, or to new nest-site locations with other members of the colony.
The waggle dance and the round dance are two forms of dance behaviour that are part of a continuous transition. As the distance between the resource and the hive increases, the round dance transforms into variations of a transitional dance, which, when communicating resources at even greater distances, becomes the waggle dance. In the case of Apis mellifera ligustica, the round dance is performed until the resource is about 10 metres away from the hive, transitional dances are performed when the resource is at a distance of 20 to 30 metres away from the hive, and finally, when it is located at distances greater than 40 metres from the hive, the waggle dance is performed. However, even close to the nest, the round dance can contain elements of the waggle dance, such as a waggle portion. It has therefore been suggested that the term waggle dance is better for describing both the waggle dance and the round dance.
Austrian ethologist and Nobel laureate Karl von Frisch was one of the first who translated the meaning of the waggle dance.
Description
A waggle dance consists of one to 100 or more circuits, each of which consists of two phases: the waggle phase and the return phase. A worker bee's waggle dance involves running through a small figure-eight pattern: a waggle run (aka waggle phase) followed by a turn to the right to circle back to the starting point (aka return phase), another waggle run, followed by a turn and circle to the left, and so on in a regular alternation between right and left turns after waggle runs. Waggle-dancing bees produce and release two alkanes, tricosane and pentacosane, and two alkenes, (Z)-9-tricosen |
https://en.wikipedia.org/wiki/Secure%20by%20design | Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure.
Alternative security strategies, tactics, and patterns are considered at the beginning of a software design; the best are selected and enforced by the architecture, and they are used as guiding principles for developers. It is also encouraged to use strategic design patterns that have beneficial effects on security, even though those design patterns were not originally devised with security in mind.
Secure by Design is increasingly becoming the mainstream development approach to ensure security and privacy of software systems. In this approach, security is considered and built into the system at every layer and starts with a robust architecture design. Security architectural design decisions are based on well-known security strategies, tactics, and patterns defined as reusable techniques for achieving specific quality concerns. Security tactics/patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety and non-repudiation requirements, even when the system is under attack.
In order to ensure the security of a software system, not only is it important to design a robust intended security architecture but it is also necessary to map updated security strategies, tactics and patterns to software development in order to maintain security persistence.
Expect attacks
Malicious attacks on software should be assumed to occur, and care is taken to minimize impact. Security vulnerabilities are anticipated, along with invalid user input. Closely related is the practice of using "good" software design, such as domain-driven design or cloud native, as a way to increase security by reducing risk of vulnerability-opening mistakes—even though the design principles used were not originally conceived for security purposes.
Avoid security th |
https://en.wikipedia.org/wiki/Clipper%20architecture | The Clipper architecture is a 32-bit RISC-like instruction set architecture designed by Fairchild Semiconductor. The architecture never enjoyed much market success, and the only computer manufacturers to create major product lines using Clipper processors were Intergraph and High Level Hardware, although Opus Systems offered a product based on the Clipper as part of its Personal Mainframe range. The first processors using the Clipper architecture were designed and sold by Fairchild, but the division responsible for them was subsequently sold to Intergraph in 1987; Intergraph continued work on Clipper processors for use in its own systems.
The Clipper architecture used a simplified instruction set compared to earlier CISC architectures, but it did incorporate some more complicated instructions than were present in other contemporary RISC processors. These instructions were implemented in a so-called Macro Instruction ROM within the Clipper CPU. This scheme allowed the Clipper to have somewhat higher code density than other RISC CPUs.
Versions
The initial Clipper microprocessor produced by Fairchild was the C100, which became available in 1986. This was followed by the faster C300 from Intergraph in 1988. The final model of the Clipper was the C400, released in 1990, which was extensively redesigned to be faster and added more floating-point registers. The C400 processor combined two key architectural techniques to achieve a new level of performance — superscalar instruction dispatch and superpipelined operation.
While many processors of the time used either superscalar instruction dispatch or superpipelined operation, the Clipper C400 was the first processor to use both.
Intergraph started work on a subsequent Clipper processor design known as the C5, but this was never completed or released. Nonetheless, some advanced processor design techniques were devised for the C5, and Intergraph was granted patents on these. These patents, along with the original C |
https://en.wikipedia.org/wiki/Trigonal%20planar%20molecular%20geometry | In chemistry, trigonal planar is a molecular geometry model with one atom at the center and three atoms at the corners of an equilateral triangle, called peripheral atoms, all in one plane. In an ideal trigonal planar species, all three ligands are identical and all bond angles are 120°. Such species belong to the point group D3h. Molecules where the three ligands are not identical, such as H2CO, deviate from this idealized geometry. Examples of molecules with trigonal planar geometry include boron trifluoride (BF3), formaldehyde (H2CO), phosgene (COCl2), and sulfur trioxide (SO3). Some ions with trigonal planar geometry include nitrate (NO3−), carbonate (CO32−), and guanidinium (C(NH2)3+). In organic chemistry, planar, three-connected carbon centers that are trigonal planar are often described as having sp2 hybridization.
Nitrogen inversion is the distortion of pyramidal amines through a transition state that is trigonal planar.
Pyramidalization is a distortion of this molecular shape towards a tetrahedral molecular geometry. One way to observe this distortion is in pyramidal alkenes.
See also
AXE method
Molecular geometry
VSEPR theory |
https://en.wikipedia.org/wiki/Neutrophile | A neutrophile is a neutrophilic organism that thrives in a neutral pH environment between 6.5 and 7.5.
Environment
The pH of the environment can support or hinder the growth of neutrophilic organisms. When the pH is within the microbe's range, it grows, and within that range there is an optimal growth pH. Neutrophiles are adapted to live in an environment whose hydrogen ion concentration is near neutral. They are sensitive to this concentration, and when the pH becomes too basic or too acidic, the cell's proteins can denature. Depending on the microbe and the pH, growth can be slowed or stopped altogether. The food industry manipulates the pH of the environment a microbe lives in to control its growth and so increase the shelf life of food.
See also
Acidophile
Acidophobe
Alkaliphile
Extremophile
Mesophile |
https://en.wikipedia.org/wiki/Microsoft%20Mail | Microsoft Mail (or MSMail/MSM) was the name given to several early Microsoft e-mail products for local area networks, primarily two architectures: one for Macintosh networks, and one for PC architecture-based LANs. All were eventually replaced by the Exchange and Outlook product lines.
Mac Networks
The first Microsoft Mail product was introduced in 1988 for AppleTalk Networks. It was based on InterMail, a product that Microsoft purchased and updated. An MS-DOS client was added for PCs on AppleTalk networks. It was later sold off to become Star Nine Mail, then Quarterdeck Mail, and has long since been discontinued.
PC Networks
The second Microsoft Mail product, Microsoft Mail for PC Networks v2.1, was introduced in 1991. It was based on Network Courier, a LAN email system produced by Consumers Software of Vancouver BC, which Microsoft had purchased. Following the initial 1991 rebranding release, Microsoft issued its first major update as Version 3.0 in 1992. This version included Microsoft's first Global Address Book technology and first networked scheduling application, Microsoft Schedule+.
Versions 3.0 through 3.5 included email clients for MS-DOS, OS/2 1.31, Mac OS, Windows (both 16 and 32-bit), a separate Windows for Workgroups Mail client, and a DOS-based Remote Client for use over pre-PPP/pre-SLIP dialup modem connections. A stripped-down version of the PC-based server, Microsoft Mail for PC Networks, was included in Windows 95 and Windows NT 4.0. The last version based on this architecture was 3.5; afterwards, it was replaced by Microsoft Exchange Server, which started with version 4.0.
The client software was also named Microsoft Mail, and was included in some older versions of Microsoft Office such as version 4.x. The original "Inbox" (Exchange client or Windows Messaging) of Windows 95 also had the capability to connect to an MS Mail server.
Microsoft Mail Server was eventually replaced by Microsoft Exchange; Microsoft Mail Client, Microsoft Exchange C |
https://en.wikipedia.org/wiki/Chacha%20Cricket | Chaudhry Abdul Jalil (, born 8 October 1949), famously known as Chacha Cricket () (meaning 'Uncle Cricket'), is a renowned Pakistani cricket mascot.
Jalil is regularly seen at cricket matches involving Pakistan. He is easily recognized by his white beard, his full green kurta dress, and his white cap decorated with a sequined star and crescent moon. He is usually armed with a Pakistani flag and initiates many crowd chants.
While his support for Pakistan is very strong, he remains good-natured and is also a popular figure amongst opposition fans such as England's Barmy Army.
Career
Early years (1969–1996)
Jalil watched his first international match at Lahore Stadium at the age of 19, when the England team led by Colin Cowdrey visited Pakistan in 1969. From 1973 to 1996, he worked as an assistant foreman at a water-pumping station in Abu Dhabi. He first rose to prominence during the 1994 Austral-Asia Cup in Sharjah, where he debuted his unique outfit and his ability to engage the crowd in passionate chants.
By 1996, his face was recognizable to virtually every Pakistani cricket fan. It was then that the Pakistan Cricket Board offered to make him their official cheerleader. PCB Chairman Syed Zulfiqar Bokhari asked him to return to Pakistan, where he would be sponsored by the Board to accompany the national team on domestic and foreign tours. Jalil consulted with cricketers Wasim Akram and Moin Khan, who persuaded him to come, since they expected the Pakistan International Airlines cricket team, the team they represented domestically, to employ him. He therefore returned to his native country after leaving his job in 1998.
Return to Pakistan
When he returned, elections had taken place, and the new government brought changes to the Pakistan Cricket Board. Bokhari stepped down and the new secretary, Waqar Ahmed, was not on board with the idea of a travelling cheerleader. As a result, he did not get an official recommendation for his visa application for the 1999 Cri |
https://en.wikipedia.org/wiki/Respiratory%20burst | Respiratory burst (or oxidative burst) is the rapid release of the reactive oxygen species (ROS), superoxide anion () and hydrogen peroxide (), from different cell types.
This is usually utilised for mammalian immunological defence, but also plays a role in cell signalling. Respiratory burst is also implicated in the ovum of animals following fertilization. It may also occur in plant cells.
Immunity
Immune cells can be divided into myeloid cells and lymphoid cells. Myeloid cells, including macrophages and neutrophils, are especially implicated in the respiratory burst. They are phagocytic, and the respiratory burst is vital for the subsequent degradation of internalised bacteria or other pathogens. This is an important aspect of the innate immunity.
Respiratory burst requires a 10- to 20-fold increase in oxygen consumption through NADPH oxidase (NOX2 in humans) activity. NADPH is the key substrate of NOX2 and supplies its reducing power. Glycogen breakdown, via the pentose phosphate pathway, is vital to produce this NADPH.
The NOX2 enzyme is bound in the phagolysosome membrane. After bacterial phagocytosis, it is activated and produces superoxide via its redox centre, which transfers electrons from cytosolic NADPH to O2 in the phagosome.
2O2 + NADPH → 2O2•− + NADP+ + H+
The superoxide can then spontaneously or enzymatically react with other molecules to give rise to other ROS. The phagocytic membrane reseals to limit exposure of the extracellular environment to the generated reactive free radicals.
Pathways for reactive species generation
There are 3 main pathways for the generation of reactive oxygen species or reactive nitrogen species (RNS) in effector cells:
Superoxide dismutase (or alternatively, myeloperoxidase) generates hydrogen peroxide from superoxide. Hydroxyl radicals are then generated via the Haber–Weiss reaction or the Fenton reaction, both of which are catalyzed by Fe2+: O2•− + H2O2 → •OH + OH− + O2
In the presence of halide ions, promine |
https://en.wikipedia.org/wiki/Luria%E2%80%93Delbr%C3%BCck%20experiment | The Luria–Delbrück experiment (1943) (also called the Fluctuation Test) demonstrated that in bacteria, genetic mutations arise in the absence of selective pressure rather than as a response to it. It thus concluded that Darwin's theory of natural selection acting on random mutations applies to bacteria as well as to more complex organisms. Max Delbrück and Salvador Luria won the 1969 Nobel Prize in Physiology or Medicine in part for this work.
History
By the 1940s the ideas of inheritance and mutation were generally accepted, though the role of DNA as the hereditary material had not yet been established. It was thought that bacteria were somehow different and could develop heritable genetic mutations depending on the circumstances in which they found themselves: in short, was mutation in bacteria pre-adaptive (pre-existent) or post-adaptive (directed adaptation)?
In their experiment, Luria and Delbrück inoculated a small number of bacteria (Escherichia coli) into separate culture tubes. After a period of growth, they plated equal volumes of these separate cultures onto agar containing the T1 phage (virus). If resistance to the virus were caused by an induced activation in the bacteria, i.e. if resistance were not due to heritable genetic components, then each plate should contain roughly the same number of resistant colonies.
Assuming a constant rate of mutation, Luria hypothesized that if mutations occurred after and in response to exposure to the selective agent, the number of survivors would be distributed according to a Poisson distribution with the mean equal to the variance. This was not what Delbrück and Luria found: Instead the number of resistant colonies on each plate varied drastically: the variance was considerably greater than the mean.
Luria and Delbrück proposed that these results could be explained by the occurrence of a constant rate of random mutations in each generation of bacteria growing in the initial culture tubes. Based on these assumptio |
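The contrast between the two hypotheses can be sketched with a toy simulation (illustrative only; the growth model, mutation rate `mu`, and number of cultures are invented, not Luria and Delbrück's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mutation_culture(generations=20, mu=5e-7):
    """One culture under the pre-adaptive hypothesis: a mutation arising in
    generation t leaves 2**(generations - t) resistant descendants at plating."""
    resistant = 0
    for t in range(generations):
        dividing_cells = 2 ** t
        new_mutants = rng.poisson(mu * dividing_cells)
        resistant += new_mutants * 2 ** (generations - t)
    return resistant

n_cultures = 1000
random_counts = np.array([random_mutation_culture() for _ in range(n_cultures)])

# Post-adaptive (induced) hypothesis: each plated cell independently acquires
# resistance on exposure, so survivor counts are Poisson (variance = mean).
induced_counts = rng.poisson(random_counts.mean(), size=n_cultures)

print(f"random mutation: mean={random_counts.mean():.1f} var={random_counts.var():.1f}")
print(f"induced:         mean={induced_counts.mean():.1f} var={induced_counts.var():.1f}")
```

Under the random-mutation model, rare early "jackpot" mutations inflate the variance far beyond the mean, which is exactly the signature Luria and Delbrück observed.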
https://en.wikipedia.org/wiki/Vair | Vair (; from Latin varius "variegated"), originating as a processed form of squirrel fur, gave its name to a set of different patterns used in heraldry. Heraldic vair represents a kind of fur common in the Middle Ages, made from pieces of the greyish-blue backs of squirrels sewn together with pieces of the animals' white underbellies. Vair is the second-most common fur in heraldry, after ermine.
Origins
The word vair, with its variant forms veir and vairé, was brought into Middle English from Old French, from Latin "variegated", and has been alternatively termed (Latin, meaning "variegated work").
The squirrel in question is a variety of the Eurasian red squirrel, Sciurus vulgaris. In the coldest parts of Northern and Central Europe, especially the Baltic region, the winter coat of this squirrel is blue-grey on the back and white on the belly, and was much used for the lining of cloaks called mantles. It was sewn together in alternating cup-shaped pieces of back and belly fur, resulting in a pattern of grey-blue and grey-white which, when simplified in heraldic drawing and painting, became blue and white in alternating pieces.
Variations
In early heraldry, vair was represented by means of straight horizontal lines alternating with wavy lines. Later it mutated into a pattern of bell or pot-like shapes, conventionally known as panes or "vair bells", of argent and azure, arranged in horizontal rows, so that the panes of one tincture form the upper part of the row, while those of the opposite tincture are on the bottom. The early form of the fur is still sometimes found, under the name vair ondé (wavy vair) or vair ancien (ancient vair)(Ger. Wolkenfeh, "cloud vair"). The only mandatory rule concerning the choice of tincture is the respect of the heraldic rule of tincture, that orders the use of a metal and a colour.
When the pattern of vair is used with other colours, the field is termed vairé or vairy of the tinctures used. Normally vairé consists of one meta |
https://en.wikipedia.org/wiki/Khaja | Khaja is an Indian deep-fried pastry, commonly filled with fruit or soaked with sugar syrup.
History
Khaja, whether plain or sweet, is a wheat-flour preparation fried in ghee. It is believed to have originated in the eastern parts of the former state of Magadh and the former United Provinces, particularly Silao in the Nalanda district of Bihar, and is also made in regions such as Kutch, Andhra Pradesh, and Karnataka. Refined wheat flour with sugar is made into layered dough, with or without dry fruit or other stuffing, and lightly fried in oil to make khaja. It is one of the famous sweets of Silao and carries deep emotional significance for the people of Magadh, where it is also used as a religious offering.
Khajas from Silao and Rajgir in Bihar are almost entirely similar to baklava, whereas the ones from Odisha and Andhra Pradesh are made with thicker pastry sheets and are generally hard. The batter is prepared from wheat flour, mawa and oil. It is then deep-fried until crisp, before being soaked in a sugar syrup known as paga, which the pastry absorbs. Khaja served in Kakinada, a coastal town of Andhra Pradesh, is dry on the outside and soaked with sugar syrup on the inside.
The sweet is popular among the Magahi- and Bihari-speaking people of Magadha. It forms part of Chhath Puja offerings and is given as a gift at a daughter's wedding in the Magadh region of Bihar.
See also
Kakinada Kaaja
Indian sweets |
https://en.wikipedia.org/wiki/Cluttering | Cluttering is a speech and communication disorder characterized by a rapid rate of speech, erratic rhythm, and poor syntax or grammar, making speech difficult to understand.
Classification
Cluttering is a speech and communication disorder that has also been described as a fluency disorder.
It is defined as:
Signs and symptoms
Stuttering is often misapplied as a blanket term for any dysfluency, including the normal dysfluency of unimpaired speakers rather than only dysfluency caused by a disorder. Normal speakers also exhibit cluttered speech, which is frequently mislabelled as stuttering. This is especially true when the speaker is nervous, since nervous speech more closely resembles cluttering than stuttering.
Cluttering is sometimes confused with stuttering. Both communication disorders break the normal flow of speech, but they are distinct. A stutterer has a coherent pattern of thoughts, but may have a difficult time vocally expressing those thoughts; in contrast, a clutterer has no problem putting thoughts into words, but those thoughts become disorganized during speaking. Cluttering affects not only speech, but also thought patterns, writing, typing, and conversation.
Stutterers are usually dysfluent on initial sounds, when beginning to speak, and become more fluent towards the ends of utterances. In contrast, clutterers are most clear at the start of utterances, but their speaking rate increases and intelligibility decreases towards the end of utterances.
Stuttering is characterized by struggle behavior, such as overtense speech production muscles. Cluttering, in contrast, is effortless. Cluttering is also characterized by slurred speech, especially dropped or distorted /r/ and /l/ sounds; and monotone speech that starts loud and trails off into a murmur.
A clutterer described the feeling associated with a clutter as:
Differential diagnosis
Cluttering can often be confused with various language disorders, learning disabilities, and attention deficit hyp |
https://en.wikipedia.org/wiki/No-arbitrage%20bounds | In financial mathematics, no-arbitrage bounds are mathematical relationships specifying limits on financial portfolio prices. These price bounds are a specific example of good–deal bounds, and are in fact the greatest extremes for good–deal bounds.
The most frequent nontrivial example of no-arbitrage bounds is put–call parity for option prices. In incomplete markets, the bounds are given by the subhedging and superhedging prices.
The essence of no-arbitrage in mathematical finance is excluding the possibility of "making money out of nothing" in the financial market. This is necessary because the existence of arbitrage is not only unrealistic, but also contradicts the possibility of an economic equilibrium. All mathematical models of financial markets have to satisfy a no-arbitrage condition to be realistic models.
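Put–call parity, the example mentioned above, can be sketched numerically; the option price, stock price, strike, and rate below are invented purely for illustration:

```python
import math

def implied_put_price(call, spot, strike, rate, maturity):
    """European put price implied by put-call parity:
    P = C - S + K * exp(-r * T).
    A traded put priced away from this value would admit arbitrage."""
    return call - spot + strike * math.exp(-rate * maturity)

# Hypothetical market: 1-year call worth 10 on a stock at 100,
# strike 105, continuously compounded rate 5%.
p = implied_put_price(call=10.0, spot=100.0, strike=105.0, rate=0.05, maturity=1.0)
print(round(p, 4))  # ≈ 9.8791
```

If the market put trades above or below this value, buying the cheap side and selling the rich side locks in a riskless profit, which is why parity acts as a two-sided no-arbitrage bound on option prices.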
See also
Box spread
Indifference price |
https://en.wikipedia.org/wiki/Heterogeneous%20network | In computer networking, a heterogeneous network is a network connecting computers and other devices where the operating systems and protocols have significant differences. For example, local area networks (LANs) that connect Microsoft Windows and Linux based personal computers with Apple Macintosh computers are heterogeneous.
Heterogeneous network also describes wireless networks using different access technologies. For example, a wireless network that provides a service through a wireless LAN and is able to maintain the service when switching to a cellular network is called a wireless heterogeneous network.
HetNet
Reference to a HetNet often indicates the use of multiple types of access nodes in a wireless network. A Wide Area Network can use some combination of macrocells, picocells, and femtocells in order to offer wireless coverage in an environment with a wide variety of wireless coverage zones, ranging from an open outdoor environment to office buildings, homes, and underground areas. Mobile experts define a HetNet as a network with complex interoperation between macrocell, small cell, and in some cases WiFi network elements used together to provide a mosaic of coverage, with handoff capability between network elements. A study from ARCchart estimates that HetNets will help drive the mobile infrastructure market to account for nearly US$57 billion in spending globally by 2017. Small Cell Forum defines the HetNet as ‘multi-x environment – multi-technology, multi-domain, multi-spectrum, multi-operator and multi-vendor. It must be able to automate the reconfiguration of its operation to deliver assured service quality across the entire network, and flexible enough to accommodate changing user needs, business goals and subscriber behaviours.’
HetNet architecture
From an architectural perspective, the HetNet can be viewed as encompassing conventional macro radio access network (RAN) functions, RAN transport capability, small cells, and Wi-Fi functionality, |
https://en.wikipedia.org/wiki/Galactomannan | Galactomannans are polysaccharides consisting of a mannose backbone with galactose side groups, more specifically, a (1-4)-linked beta-D-mannopyranose backbone with branchpoints from their 6-positions linked to alpha-D-galactose, (i.e. 1-6-linked alpha-D-galactopyranose).
In order of increasing mannose-to-galactose ratio:
fenugreek gum, mannose:galactose ~1:1
guar gum, mannose:galactose ~2:1
tara gum, mannose:galactose ~3:1
locust bean gum or carob gum, mannose:galactose ~4:1
cassia gum, mannose:galactose ~5:1
Galactomannans are often used in food products to increase the viscosity of the water phase.
Guar gum has been used to add viscosity to artificial tears, but is not as stable as carboxymethylcellulose.
Food use
Galactomannans are used in foods as stabilisers. Guar and locust bean gum (LBG) are commonly used in ice cream to improve texture and reduce ice cream meltdown. LBG is also used extensively in cream cheese, fruit preparations and salad dressings. Tara gum is seeing growing acceptability as a food ingredient but is still used to a much lesser extent than guar or LBG. Guar has the highest usage in foods, largely due to its low and stable price.
Clinical use
Galactomannan is a component of the cell wall of the mold Aspergillus and is released during growth. Detection of galactomannan in blood is used to diagnose invasive aspergillosis infections in humans. This is performed with monoclonal antibodies in a double-sandwich ELISA; this assay from Bio-Rad Laboratories was approved by the FDA in 2003 and is of moderate accuracy. The assay is most useful in patients who have had hemopoietic cell transplants (stem cell transplants). False-positive Aspergillus galactomannan tests have been found in patients on intravenous treatment with some antibiotics, or with fluids containing gluconate or citric acid, such as some platelet transfusions, parenteral nutrition or PlasmaLyte.
https://en.wikipedia.org/wiki/Torre%20Entel | Torre Entel (Entel Tower) is a tall TV and telecommunications tower in Santiago, Chile. Torre Entel has an observation deck open for visitors. Construction began in 1970 during Eduardo Frei Montalva's term as president, and it was inaugurated in 1974. In 1976 it carried its first television transmissions. For many years it was the tallest building in Chile, and today it remains a symbol of Santiago. The tower is constructed of concrete, steel, and aluminum.
At 128 m tall with 18 floors, it was, upon completion in 1974, the tallest architectural structure in the country, a title it held until the inauguration of the 143 m Telefónica Tower in 1996. Although since surpassed in height by other buildings, it remains the most prominent structure in the commune of Santiago, standing on Avenida Libertador General Bernardo O'Higgins a block from the La Moneda Palace, which has kept it an icon of the city. Its design represents a torch, an ancient form of telecommunication.
History
Its construction began during the government of Eduardo Frei Montalva, on July 1, 1970, as part of the National Telecommunications Center. After four years of construction, the Tower reached its current height on August 30, 1974. Its structure was influenced by the Post Office Tower in London, which had been built a few years earlier.
Later, on September 8, 1975, two satellite dishes were installed, which were the first telecommunications elements visible from the outside, and finally, on April 12, 1976, the telephone channels came into service. From that moment on, the Entel Tower became the vital nucleus of the country's communications system by allowing the interconnection of Entel's telephone, television, radio and microwave network services with those of the Chilean Telephone Company (currently Movistar) and with the north, center and south of the country and the province of Mendoza, Argentina. In addition, it is connecte |
https://en.wikipedia.org/wiki/Nuclear%20localization%20sequence | A nuclear localization signal or sequence (NLS) is an amino acid sequence that 'tags' a protein for import into the cell nucleus by nuclear transport. Typically, this signal consists of one or more short sequences of positively charged lysines or arginines exposed on the protein surface. Different nuclear localized proteins may share the same NLS. An NLS has the opposite function of a nuclear export signal (NES), which targets proteins out of the nucleus.
Types
Classical
These types of NLSs can be further classified as either monopartite or bipartite. The major structural differences between the two are that the two basic amino acid clusters in bipartite NLSs are separated by a relatively short spacer sequence (hence bipartite - 2 parts), while monopartite NLSs are not. The first NLS to be discovered was the sequence PKKKRKV in the SV40 Large T-antigen (a monopartite NLS). The NLS of nucleoplasmin, KR[PAATKKAGQA]KKKK, is the prototype of the ubiquitous bipartite signal: two clusters of basic amino acids, separated by a spacer of about 10 amino acids. Both signals are recognized by importin α. Importin α contains a bipartite NLS itself, which is specifically recognized by importin β. The latter can be considered the actual import mediator.
Chelsky et al. proposed the consensus sequence K-K/R-X-K/R for monopartite NLSs. A Chelsky sequence may, therefore, be part of the downstream basic cluster of a bipartite NLS. Makkah et al. carried out comparative mutagenesis on the nuclear localization signals of SV40 T-Antigen (monopartite), C-myc (monopartite), and nucleoplasmin (bipartite), and showed amino acid features common to all three. The role of neutral and acidic amino acids was shown for the first time in contributing to the efficiency of the NLS.
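The Chelsky consensus K-K/R-X-K/R translates directly into a pattern search. The sketch below is illustrative only: it scans a primary sequence for the motif and ignores the biological requirement that the signal be exposed on the protein surface.

```python
import re

# Chelsky consensus K-K/R-X-K/R as a regular expression; "." stands for X,
# i.e. any residue.
CHELSKY = re.compile(r"K[KR].[KR]")

def find_monopartite_nls(sequence):
    """Return (position, motif) pairs matching the Chelsky consensus."""
    return [(m.start(), m.group()) for m in CHELSKY.finditer(sequence)]

# The SV40 Large T-antigen NLS discussed above contains PKKKRKV.
print(find_monopartite_nls("PKKKRKV"))  # → [(1, 'KKKR')]
```

A Chelsky match inside the downstream basic cluster of a sequence like nucleoplasmin's KR[PAATKKAGQA]KKKK illustrates why a monopartite consensus hit can also be part of a bipartite signal.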
Rotello et al. compared the nuclear localization efficiencies of eGFP fused NLSs of SV40 Large T-Antigen, nucleoplasmin (AVKRPAATKKAGQAKKKKLD), EGL-13 (MSRRRKANPTKLSENAKKLAKEVEN), c-Myc (PAAKRVKLD) and TUS-protein (KLKIK |
https://en.wikipedia.org/wiki/Unimodular%20lattice | In geometry and mathematical group theory, a unimodular lattice is an integral lattice of determinant 1 or −1. For a lattice in n-dimensional Euclidean space, this is equivalent to requiring that the volume of any fundamental domain for the lattice be 1.
The E8 lattice and the Leech lattice are two famous examples.
Definitions
A lattice is a free abelian group of finite rank with a symmetric bilinear form (·, ·).
The lattice is integral if (·,·) takes integer values.
The dimension of a lattice is the same as its rank (as a Z-module).
The norm of a lattice element a is (a, a).
A lattice is positive definite if the norm of all nonzero elements is positive.
The determinant of a lattice is the determinant of the Gram matrix, a matrix with entries (ai, aj), where the elements ai form a basis for the lattice.
An integral lattice is unimodular if its determinant is 1 or −1.
A unimodular lattice is even or type II if all norms are even, otherwise odd or type I.
The minimum of a positive definite lattice is the lowest nonzero norm.
Lattices are often embedded in a real vector space with a symmetric bilinear form. The lattice is positive definite, Lorentzian, and so on if its vector space is.
The signature of a lattice is the signature of the form on the vector space.
Examples
The three most important examples of unimodular lattices are:
The lattice Z, in one dimension.
The E8 lattice, an even 8-dimensional lattice,
The Leech lattice, the 24-dimensional even unimodular lattice with no roots.
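As a sketch of the definitions above, the unimodularity and evenness of the E8 lattice can be checked from its Gram matrix in a basis of simple roots, which is the E8 Cartan matrix (2 on the diagonal, −1 for nodes joined in the Dynkin diagram):

```python
import numpy as np

# E8 Dynkin diagram in Bourbaki numbering (0-indexed): chain 1-3-4-5-6-7-8
# with node 2 attached to node 4.
edges = [(0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]
gram = 2 * np.eye(8, dtype=int)
for i, j in edges:
    gram[i, j] = gram[j, i] = -1

det = round(np.linalg.det(gram))
print("determinant:", det)  # 1, so the lattice is unimodular

# The diagonal entries (norms of the basis vectors) are all even; since the
# Gram matrix is integral, this forces every norm to be even (type II).
print("basis norms even:", all(gram[i, i] % 2 == 0 for i in range(8)))
```

The same check on the Gram matrix of Z (the 1×1 matrix [1]) gives determinant 1 with an odd norm, making Z the basic odd (type I) example.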
Properties
An integral lattice is unimodular if and only if its dual lattice is integral. Unimodular lattices are equal to their dual lattices, and for this reason, unimodular lattices are also known as self-dual.
Given a pair (m,n) of nonnegative integers, an even unimodular lattice of signature (m,n) exists if and only if m−n is divisible by 8, but an odd unimodular lattice of signature (m,n) always exists. In particular, even unimodular definite lattices on |
https://en.wikipedia.org/wiki/Global%20Infectious%20Disease%20Epidemiology%20Network | Global Infectious Diseases and Epidemiology Online Network (GIDEON) is a web-based program for decision support and informatics in the fields of Infectious Diseases and Geographic Medicine. Due to the advancement of both disease research and digital media, print media can no longer follow the dynamics of outbreaks and epidemics as they emerge in "real time." As of 2005, more than 300 generic infectious diseases occur haphazardly in time and space and are challenged by over 250 drugs and vaccines. 1,500 species of pathogenic bacteria, viruses, parasites and fungi have been described. GIDEON works to combat this by creating a diagnosis through geographical indicators, a map of the status of the disease in history, a detailed list of potential vaccines and treatments, and finally listing all the potential species of the disease or outbreak such as bacterial classifications.
Organization
GIDEON consists of four modules. The first Diagnosis module generates a Bayesian ranked differential diagnosis based on signs, symptoms, laboratory tests, country of origin and incubation period – and can be used for diagnosis support and simulation of all infectious diseases in all countries. Since the program is web-based, this module can also be adapted to disease and bioterror surveillance.
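The idea of a Bayesian ranked differential diagnosis can be sketched with a naive Bayes toy model. Everything below is invented for illustration: the diseases, priors, and likelihoods bear no relation to GIDEON's actual data, findings list, or weighting scheme.

```python
# P(disease | findings) ∝ P(disease) * Π P(finding | disease),
# assuming conditional independence of findings (naive Bayes).
priors = {"malaria": 0.02, "dengue": 0.01, "influenza": 0.10}  # invented
likelihoods = {  # P(finding | disease), invented
    "malaria":   {"fever": 0.95, "rash": 0.05},
    "dengue":    {"fever": 0.90, "rash": 0.50},
    "influenza": {"fever": 0.85, "rash": 0.02},
}

def rank(findings):
    """Return diseases sorted by normalised posterior probability."""
    scores = {}
    for disease, prior in priors.items():
        score = prior
        for finding in findings:
            score *= likelihoods[disease].get(finding, 0.001)
        scores[disease] = score
    total = sum(scores.values())
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for disease, posterior in rank(["fever", "rash"]):
    print(f"{disease}: {posterior:.3f}")
```

With these made-up numbers, the rash finding outweighs influenza's larger prior and pushes dengue to the top of the ranking, which is the kind of re-ordering a Bayesian differential-diagnosis module performs as findings accumulate.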
The second module follows the epidemiology of individual diseases, including their global background and status in each of 205 countries and regions. All past and current outbreaks of all diseases, in all countries, are described in detail. The user may also access a list of diseases compatible with any combination of agent, vector, vehicle, reservoir and country (for example, one could list all the mosquito-borne flaviviruses of Brazil which have an avian reservoir). Over 30,000 graphs display all the data, and are updated in "real time". These graphs can be used for preparation of PowerPoint displays, pamphlets, lecture notes, etc. Several thousand high-quality images are also available, i |
https://en.wikipedia.org/wiki/Riken | is a large scientific research institute in Japan. Founded in 1917, it now has about 3,000 scientists on seven campuses across Japan, including the main site at Wakō, Saitama Prefecture, just outside Tokyo. Riken is a Designated National Research and Development Institute, and was formerly an Independent Administrative Institution.
Riken conducts research in many areas of science, including physics, chemistry, biology, genomics, medical science, engineering, and high-performance computing and computational science, ranging from basic research to practical applications, with 485 partners worldwide. It is almost entirely funded by the Japanese government, and its annual budget is about ¥88 billion (US$790 million).
Name
"Riken" is an acronym of the formal name , and its full name in Japanese is and in English is the Institute of Physical and Chemical Research.
History
In 1913, the well-known scientist Jokichi Takamine first proposed the establishment of a national science research institute in Japan. This task was taken on by Viscount Shibusawa Eiichi, a prominent businessman, and following a resolution by the Diet in 1915, Riken came into existence in March 1917. In its first incarnation, Riken was a private foundation (zaidan), funded by a combination of industry, the government, and the Imperial Household. It was located in the Komagome district of Tokyo, and its first director was the mathematician Baron Dairoku Kikuchi.
In 1927, Viscount Masatoshi Ōkōchi, the third director, established the Riken Concern (a zaibatsu). This was a group of spin-off companies that used Riken's scientific achievements for commercial ends and returned the profits to Riken. At its peak in 1939 the zaibatsu comprised about 121 factories and 63 companies, including Riken Kankōshi, which is now Ricoh.
During World War II, the Japanese army's atomic bomb program was conducted at Riken. In April 1945 the US bombed Riken's laboratories in Komagome, and in November, after the end of the |
https://en.wikipedia.org/wiki/Random%20matrix | In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.
Applications
Physics
In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution. In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation.
In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.
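Wigner's spacing statistics can be reproduced in a few lines by sampling one matrix from the Gaussian Orthogonal Ensemble (GOE). This sketch uses a crude local normalisation of the mean spacing rather than proper spectral unfolding, which is enough to show level repulsion: near-zero spacings are rare for GOE, whereas independent (Poisson) levels cluster near zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def goe_spacings(n=400):
    """Normalised eigenvalue spacings of one GOE sample."""
    a = rng.normal(size=(n, n))
    h = (a + a.T) / 2                      # real symmetric Gaussian matrix
    eig = np.sort(np.linalg.eigvalsh(h))
    bulk = eig[n // 4: 3 * n // 4]         # stay away from the spectral edge
    s = np.diff(bulk)
    return s / s.mean()                    # normalise mean spacing to 1

s = goe_spacings()
# The Wigner surmise P(s) = (pi*s/2) * exp(-pi*s**2/4) vanishes at s = 0;
# a Poisson spectrum would instead put its maximum there.
print("fraction of spacings below 0.1:", (s < 0.1).mean())
```

For the Wigner surmise, the probability of a spacing below 0.1 is under 1%, while for Poisson levels it is nearly 10%; the printed fraction from the GOE sample sits firmly in the former regime.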
In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model). Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is beam splitters and phase shifters).
Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics, quantum gravity in two dimensions, mesoscopic physics, spin-transfer torque, the fractional quantum Hall effect, Anderson localization, quantum dots, and superconductors.
Mathematical statistics and numerical analysis
In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples. Chernoff-, B |
https://en.wikipedia.org/wiki/Windows%20on%20Windows | In computing, Windows on Windows (commonly referred to as WOW) was a compatibility layer of the 32-bit versions of the Windows NT family of operating systems, present since the 1993 release of Windows NT 3.1. It extended NTVDM to provide limited support for running legacy 16-bit programs written for Windows 3.x or earlier. A similar subsystem, known as WoW64, runs 32-bit programs on 64-bit Windows versions.
This subsystem is not available in 64-bit editions of Windows (including Windows Server 2008 R2 and later, which only have 64-bit editions), which therefore cannot run 16-bit software without third-party emulation software (e.g. DOSBox). Windows 10 was the final version of Windows to include the subsystem; it has since been discontinued entirely, as Windows 11 dropped support for 32-bit processors.
Background
Many 16-bit Windows legacy programs can run without changes on newer 32-bit editions of Windows. The reason designers made this possible was to allow software developers time to remedy their software during the industry transition from Windows 3.1x to Windows 95 and later, without restricting the ability for the operating system to be upgraded to a current version before all programs used by a customer had been taken care of.
The Windows 9x series of operating systems, reflecting their roots in DOS, functioned as hybrid 16- and 32-bit systems in the sense that the underlying operating system was not truly 32-bit, and therefore could run 16-bit software natively without requiring any special emulation; Windows NT operating systems differ significantly from Windows 9x in their architecture, and therefore require a more complex solution. Two separate strategies are used in order to let 16-bit programs run on 32-bit versions of Windows (with some runtime limitations). They are called thunking and shimming.
Thunking
The WOW subsystem of the operating system thunks 16-bit API calls to their 32-bit equivalents in order to provide support for 16-bit pointers, memory models and address space.
Al |
https://en.wikipedia.org/wiki/Digital%20identity | Digital identity refers to the information utilized by computer systems to represent external entities, including a person, organization, application, or device. When used to describe an individual, it encompasses a person's compiled information and plays a crucial role in automating access to computer-based services, verifying identity online, and enabling computers to mediate relationships between entities. Digital identity for individuals is an aspect of a person's social identity and can also be referred to as online identity.
The widespread use of digital identities can include the entire collection of information generated by a person's online activity. This includes usernames, passwords, search history, birthday, social security number, and purchase history. When publicly available, this data can be used by others to discover a person's civil identity. It can also be harvested to create what has been called a "data double", an aggregated profile based on the user's data trail across databases. In turn, these data doubles serve to facilitate personalization methods on the web and across various applications.
If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers, and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data.
An individual's digital identity is often linked to their civil or national identity and many countries have instituted national digital identity systems that provide digital identities to their citizenry.
The legal and social effects of digital identity are complex and challenging. Faking a legal identity in the digital world may present many threats to a digital society and raises the opportunity fo |
https://en.wikipedia.org/wiki/Flash%20animation | Adobe Flash animation (formerly Macromedia Flash animation and FutureSplash animation) is an animation that is created with the Adobe Animate (formerly Flash Professional) platform or similar animation software and often distributed in the SWF file format. The term Adobe Flash animation refers to both the file format and the medium in which the animation is produced. Adobe Flash animation has enjoyed mainstream popularity since the mid-2000s, with many Adobe Flash-animated television series, television commercials, and award-winning online shorts being produced since then.
In the late 1990s, when bandwidth was still at 56 kbit/s for most Internet users, many Adobe Flash animation artists employed limited animation or cutout animation when creating projects intended for web distribution. This allowed artists to release shorts and interactive experiences well under 1 MB, which could stream both audio and high-end animation.
Adobe Flash is able to integrate bitmaps and other raster-based art, as well as video, though most Adobe Flash films are created using only vector-based drawings, which often result in a somewhat clean graphic appearance. Some hallmarks of poorly produced Adobe Flash animation are jerky natural movements (seen in walk-cycles and gestures), auto-tweened character movements, lip-sync without interpolation and abrupt changes from front to profile view.
Adobe Flash animations are typically distributed by way of the World Wide Web, in which case they are often referred to as Internet cartoons, online cartoons, or web cartoons. Web Adobe Flash animations may be interactive and are often created in a series. An Adobe Flash animation is distinguished from a Webcomic, which is a comic strip distributed via the Web, rather than an animated cartoon.
History
The first prominent use of the Adobe Flash animation format was by Ren & Stimpy creator John Kricfalusi. On October 15, 1997, he launched The Goddamn George Liquor Program, the first cartoon series |
https://en.wikipedia.org/wiki/Richard%20Schroeppel | Richard C. Schroeppel (born 1948) is an American mathematician born in Illinois. His research has included magic squares, elliptic curves, and cryptography. In 1964, Schroeppel won first place in the United States among over 225,000 high school students in the Annual High School Mathematics Examination, a contest sponsored by the Mathematical Association of America and the Society of Actuaries. In both 1966 and 1967, Schroeppel scored among the top 5 in the U.S. in the William Lowell Putnam Mathematical Competition. In 1973 he discovered that there are 275,305,224 normal magic squares of order 5. In 1998–1999 he designed the Hasty Pudding Cipher, which was a candidate for the Advanced Encryption Standard, and he is one of the designers of the SANDstorm hash, a submission to the NIST SHA-3 competition.
Among other contributions, Schroeppel was the first to recognize the sub-exponential running time of certain integer factoring algorithms. While not entirely rigorous, his proof that Morrison and Brillhart's continued fraction factoring algorithm ran in roughly $e^{\sqrt{2\ln n \ln\ln n}}$ steps was an important milestone in factoring and laid a foundation for much later work, including the current "champion" factoring algorithm, the number field sieve.
Schroeppel analyzed Morrison and Brillhart's algorithm, and saw how to cut the run time to roughly $e^{\sqrt{\ln n \ln\ln n}}$ by modifications that allowed sieving. This improvement doubled the size of numbers that could be factored in a given amount of time. Coming around the time of the RSA algorithm, which depends on the difficulty of factoring for its security, this was a critically important result.
Due to Schroeppel's apparent prejudice against publishing (though he freely circulated his ideas within the research community), and in spite of Pomerance noting that his quadratic sieve factoring algorithm owed a debt to Schroeppel's earlier work, the latter's contribution is often overlooked. (See the section on "Smooth Numbers" on pages 1476–1477 of Pomerance's "A Ta |
https://en.wikipedia.org/wiki/Thread-local%20storage | In computer programming, thread-local storage (TLS) is a memory management method that uses static or global memory local to a thread.
While the use of global variables is generally discouraged in modern programming, legacy operating systems such as UNIX are designed for uniprocessor hardware and require some additional mechanism to retain the semantics of pre-reentrant APIs. An example of such situations is where functions use a global variable to set an error condition (for example the global variable errno used by many functions of the C library). If errno were a global variable, a call of a system function on one thread may overwrite the value previously set by a call of a system function on a different thread, possibly before following code on that different thread could check for the error condition. The solution is to have errno be a variable that looks like it is global, but in fact exists once per thread—i.e., it lives in thread-local storage. A second use case would be multiple threads accumulating information into a global variable. To avoid a race condition, every access to this global variable would have to be protected by a mutex. Alternatively, each thread might accumulate into a thread-local variable (that, by definition, cannot be read from or written to from other threads, implying that there can be no race conditions). Threads then only have to synchronise a final accumulation from their own thread-local variable into a single, truly global variable.
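The per-thread accumulation pattern described above can be sketched in Python, whose standard library exposes thread-local storage as `threading.local()`. This is an illustrative sketch: the `worker` function, thread count, and iteration count are made up for the example.

```python
import threading

# Each thread sees its own independent `local.count`; no mutex is
# needed while accumulating, only for the final merge.
local = threading.local()
total = 0
total_lock = threading.Lock()

def worker(n):
    global total
    local.count = 0          # thread-local: invisible to other threads
    for _ in range(n):
        local.count += 1     # race-free without any locking
    with total_lock:         # one synchronised step per thread
        total += local.count

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 4000
```

The design point is exactly the one in the text: the expensive, frequent operation (incrementing) touches only thread-local state, and synchronisation happens once per thread at the end.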
Many systems impose restrictions on the size of the thread-local memory block, in fact often rather tight limits. On the other hand, if a system can provide at least a memory address (pointer) sized variable thread-local, then this allows the use of arbitrarily sized memory blocks in a thread-local manner, by allocating such a memory block dynamically and storing the memory address of that block in the thread-local variable. On RISC machines, the calling convention often reserves a thread pointer re |
https://en.wikipedia.org/wiki/Cookie%20Puss | Cookie Puss is an ice cream cake character created by Carvel in the 1970s as an expansion of its line of freshly made exclusive products, along with Hug Me the Bear and Fudgie the Whale. The cake is fashioned with a clown face that uses cookies for eyes and an ice cream cone for the nose. According to Carvel's backstory for the character, Cookie Puss is a space alien who was born on planet Birthday. His original name was "Celestial Person," but the initials "C.P." later came to stand for "Cookie Puss." In his television commercials, Cookie Puss has the ability to fly, though he requires a saucer-shaped spacecraft for interplanetary travel. During the 1980s, Cookie Puss was repurposed to serve as a cake for St Patrick's Day, dubbed "Cookie O'Puss", which continues to be sold annually.
Appearance
Since its introduction in 1972, the Cookie Puss design developed by Carvel corporate chef Andrew Bianchi has evolved into the version that is sold today. The initial design introduced the general pear shape of the cake, but all ornamentation was frosting applied by the stores. The first effort to achieve a consistent look was a face printed on a large cookie wafer using edible inks.
A group of franchisees led by Liam Gray of Schenectady, New York, rallied against the corporate requirement for stores to purchase the pre-printed wafers. Gray created a new variation using items already stocked in Carvel shops—sugar cones and Flying Saucer ice cream sandwiches. The other franchisees in the (now defunct) North East Carvel Franchisee Group followed suit, and by May 1974, the Carvel corporation had adopted this as the official design of the Cookie Puss product.
In media
Cookie Puss was a frequent topic used for comic effect on The Howard Stern Show. Typically, the cast of the show would torment Fred Norris for having purchased Cookie Puss as a gift for his mother on Mother's Day. Howard Stern would use voice enhancements to impersonate the voice of Cookie Puss from the Carvel com |
https://en.wikipedia.org/wiki/Viral%20pathogenesis | Viral pathogenesis is the study of the process and mechanisms by which viruses cause diseases in their target hosts, often at the cellular or molecular level. It is a specialized field of study in virology.
Pathogenesis is a qualitative description of the process by which an initial infection causes disease. Viral disease is the sum of the effects of viral replication on the host and the host's subsequent immune response against the virus. Viruses are able to initiate infection, disperse throughout the body, and replicate due to specific virulence factors.
There are several factors that affect pathogenesis, including the virulence characteristics of the infecting virus. In order to cause disease, the virus must also overcome several inhibitory effects present in the host, such as distance, physical barriers, and host defenses. These inhibitory effects may differ among individuals because they are genetically controlled.
Viral pathogenesis is affected by various factors: (1) transmission, entry and spread within the host, (2) tropism, (3) virus virulence and disease mechanisms, (4) host factors and host defense.
Mechanisms of infection
Viruses need to establish infections in host cells in order to multiply. For infections to occur, the virus has to hijack host factors and evade the host immune response for efficient replication. Viral replication frequently requires complex interactions between the virus and host factors that may result in deleterious effects in the host, which confers the virus its pathogenicity.
Important steps of a virus life cycle that shape pathogenesis
Transmission from a host with an infection to a second host
Entry of the virus into the body
Local replication in susceptible cells
Dissemination and spread to secondary tissues and target organs
Secondary replication in susceptible cells
Shedding of the virus into the environment
Onward transmission to third |
https://en.wikipedia.org/wiki/Ekman%20spiral | The oceanic, wind-driven Ekman spiral is the result of a force balance created by a shear stress force, the Coriolis force, and water drag. This force balance produces a water current whose direction differs from that of the wind. In the ocean, there are two places where the Ekman spiral can be observed. At the surface of the ocean, the shear stress force corresponds with the wind stress force. At the bottom of the ocean, the shear stress force is created by friction with the ocean floor. This phenomenon was first observed at the surface by the Norwegian oceanographer Fridtjof Nansen during his Fram expedition. He noticed that icebergs did not drift in the same direction as the wind. His student, the Swedish oceanographer Vagn Walfrid Ekman, was the first person to physically explain this process.
Bottom Ekman Spiral
In order to derive the properties of an Ekman spiral, a look is taken at a uniform, horizontal geostrophic interior flow in a homogeneous fluid. This flow will be denoted by $\vec{u} = (\bar{u}, \bar{v})$, where the two components are constant because of uniformity. Another result of this property is that the horizontal gradients will equal zero. As a result, the continuity equation yields $\frac{\partial w}{\partial z} = 0$. Note that the concerning interior flow is horizontal, so $w = 0$ at all depths, even in the boundary layers. In this case, the Navier-Stokes momentum equations, governing geophysical motion, can now be reduced to:
$-fv = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\frac{\partial^2 u}{\partial z^2}$
$fu = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\frac{\partial^2 v}{\partial z^2}$
$0 = -\frac{1}{\rho}\frac{\partial p}{\partial z}$
where $f$ is the Coriolis parameter, $\rho$ the fluid density, and $\nu$ the eddy viscosity, which are all taken as constant here for simplicity. These parameters vary only slightly on the scale of an Ekman spiral, so this approximation holds. A uniform flow requires a uniformly varying pressure gradient. When substituting the flow components of the interior flow, $\bar{u}$ and $\bar{v}$, in the equations above (where the viscous terms vanish because the interior flow does not vary with depth), the following is obtained:
$-f\bar{v} = -\frac{1}{\rho}\frac{\partial p}{\partial x}$
$f\bar{u} = -\frac{1}{\rho}\frac{\partial p}{\partial y}$
Using the last of the three equations at the top of this section, yields that the pressure is independent of depth.
and will suffice as a solution to the differential equatio |
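For reference, the classical bottom Ekman spiral solution has the standard form below; this is a reconstruction under the usual assumptions (interior flow taken along $x$, so $\bar{v} = 0$, with Ekman depth $d$), not necessarily the exact notation of the original derivation.

```latex
u(z) = \bar{u}\left(1 - e^{-z/d}\cos\frac{z}{d}\right), \qquad
v(z) = \bar{u}\, e^{-z/d}\sin\frac{z}{d}, \qquad
d = \sqrt{\frac{2\nu}{f}}.
```

This form satisfies no-slip at the ocean floor ($u = v = 0$ at $z = 0$) and tends to the interior geostrophic flow $(\bar{u}, 0)$ far above the boundary layer, with the velocity vector rotating with height — the spiral.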
https://en.wikipedia.org/wiki/Viral%20replication | Viral replication is the formation of biological viruses during the infection process in the target host cells. Viruses must first get into the cell before viral replication can occur. Through the generation of abundant copies of its genome and the packaging of these copies, the virus continues infecting new hosts. Replication varies greatly between viruses and depends on the type of genome involved. Most DNA viruses assemble in the nucleus, while most RNA viruses replicate solely in the cytoplasm.
Viral production / replication
Viruses multiply only in living cells. The host cell must provide the energy, synthetic machinery, and low-molecular-weight precursors for the synthesis of viral proteins and nucleic acids.
Viral replication occurs in seven stages:
Attachment
Entry
Uncoating
Transcription / mRNA production
Synthesis of virus components
Virion assembly
Release (liberation stage)
Attachment
It is the first step of viral replication. The virus attaches to the cell membrane of the host cell. It then injects its DNA or RNA into the host to initiate infection.
In animal cells, the virus typically enters through endocytosis or through fusion of the viral envelope with the cell membrane; in plant cells, it enters through pinocytosis, in which the virus is pinched into the cell.
Entry
The cell membrane of the host cell invaginates the virus particle, enclosing it in a pinocytotic vacuole. This protects the virus from antibodies, as in the case of HIV.
Uncoating
Cell enzymes (from lysosomes) strip off the virus protein coat.
This releases or renders accessible the virus nucleic acid or genome.
Transcription / mRNA production
For some RNA viruses, the infecting RNA produces messenger RNA (mRNA), which can translate the genome into protein products.
For viruses with negative stranded RNA, or DNA, viruses are produced by transcription then t |
https://en.wikipedia.org/wiki/Deadband | A deadband or dead-band (also known as a dead zone or a neutral zone) is a band of input values in the domain of a transfer function in a control system or signal processing system where the output is zero (the output is 'dead' - no action occurs). Deadband regions can be used in control systems such as servoamplifiers to prevent oscillation or repeated activation-deactivation cycles (called 'hunting' in proportional control systems). A form of deadband that occurs in mechanical systems, compound machines such as gear trains is backlash.
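As a minimal illustration of the idea, a deadband can be modeled as a transfer function whose output is zero inside the band. The sketch below is a hedged example: the edge-shifting convention (so the output is continuous at the band edges, as in servo applications) is one common choice, not the only one.

```python
def deadband(x, width, center=0.0):
    """Return 0 inside the band [center - width/2, center + width/2];
    outside it, pass the input through shifted so the output is
    continuous at the band edges (a common servo convention)."""
    half = width / 2.0
    if abs(x - center) <= half:
        return 0.0
    return x - center - half if x > center else x - center + half

print(deadband(0.3, 1.0))   # 0.0  (inside the band: no action occurs)
print(deadband(2.0, 1.0))   # 1.5  (shifted past the band edge)
print(deadband(-2.0, 1.0))  # -1.5
```

Small inputs near the setpoint produce no output at all, which is exactly how such a region suppresses oscillation or "hunting" in a proportional controller.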
Voltage regulators
In some power substations there are regulators that keep the voltage within certain predetermined limits, but there is a range of voltage in between during which no changes are made, such as between 112 and 118 volts (the deadband is 6 volts), or between 215 and 225 volts (the deadband is 10 volts).
Backlash
Gear teeth with slop (backlash) exhibit deadband. There is no drive from the input to the output shaft in either direction while the teeth are not meshed. Leadscrews generally also have backlash and hence a deadband, which must be taken into account when making position adjustments, especially with CNC systems. If mechanical backlash eliminators are not available, the control can compensate for backlash by adding the deadband value to the position vector whenever direction is reversed.
Hysteresis versus Deadband
Deadband is different from hysteresis. With hysteresis, there is no deadband and so the output is always in one direction or another. Devices with hysteresis have memory, in that previous system states dictate future states. Examples of devices with hysteresis are single-mode thermostats and smoke alarms. Deadband is the range in a process where no changes to output are made. Hysteresis is the difference in a variable depending on the direction of travel.
Thermostats
Simple (single mode) thermostats exhibit hysteresis. For example, the furnace in the basement of a house is adjusted automatically by |
https://en.wikipedia.org/wiki/Appell%20sequence | In mathematics, an Appell sequence, named after Paul Émile Appell, is any polynomial sequence satisfying the identity
and in which is a non-zero constant.
Among the most notable Appell sequences besides the trivial example are the Hermite polynomials, the Bernoulli polynomials, and the Euler polynomials. Every Appell sequence is a Sheffer sequence, but most Sheffer sequences are not Appell sequences. Appell sequences have a probabilistic interpretation as systems of moments.
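The defining property of an Appell sequence — the derivative of the $n$th polynomial is $n$ times the $(n-1)$st — can be checked concretely for the probabilists' Hermite polynomials mentioned above. The sketch below builds them from their standard three-term recurrence $He_{n+1}(x) = x\,He_n(x) - n\,He_{n-1}(x)$; the coefficient-list representation is just an illustrative choice.

```python
# Check the Appell identity p_n'(x) = n * p_{n-1}(x) for the
# probabilists' Hermite polynomials He_n.
# Polynomials are coefficient lists, lowest degree first.

def poly_mul_x(p):               # p(x) -> x * p(x)
    return [0.0] + p

def poly_sub(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def poly_scale(c, p):
    return [c * a for a in p]

def poly_deriv(p):
    return [k * a for k, a in enumerate(p)][1:]

def hermite(n):
    he = [[1.0], [0.0, 1.0]]     # He_0 = 1, He_1 = x
    for k in range(1, n):
        he.append(poly_sub(poly_mul_x(he[k]), poly_scale(k, he[k - 1])))
    return he[n]

# He_4'(x) should equal 4 * He_3(x)
lhs = poly_deriv(hermite(4))
rhs = poly_scale(4, hermite(3))
print(lhs == rhs)  # True
```

Note that the physicists' Hermite polynomials $H_n$ satisfy $H_n' = 2n\,H_{n-1}$ instead, so it is specifically the probabilists' normalization that forms an Appell sequence.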
Equivalent characterizations of Appell sequences
The following conditions on polynomial sequences can easily be seen to be equivalent:
For $n = 1, 2, 3, \ldots$,
$\frac{d}{dx}\,p_n(x) = n\,p_{n-1}(x)$ and $p_0(x)$ is a non-zero constant;
For some sequence of scalars $c_0, c_1, c_2, \ldots$ with $c_0 \neq 0$,
$p_n(x) = \sum_{k=0}^{n} \binom{n}{k} c_k\, x^{n-k};$
For the same sequence of scalars,
$p_n(x) = \left( \sum_{k=0}^{\infty} \frac{c_k}{k!} D^k \right) x^n,$
where
$D = \frac{d}{dx};$
For $n = 0, 1, 2, \ldots$,
$p_n(x+y) = \sum_{k=0}^{n} \binom{n}{k} p_k(x)\, y^{n-k}.$
Recursion formula
Suppose
where the last equality is taken to define the linear operator on the space of polynomials in . Let
be the inverse operator, the coefficients being those of the usual reciprocal of a formal power series, so that
In the conventions of the umbral calculus, one often treats this formal power series as representing the Appell sequence . One can define
by using the usual power series expansion of the and the usual definition of composition of formal power series. Then we have
(This formal differentiation of a power series in the differential operator is an instance of Pincherle differentiation.)
In the case of Hermite polynomials, this reduces to the conventional recursion formula for that sequence.
Subgroup of the Sheffer polynomials
The set of all Appell sequences is closed under the operation of umbral composition of polynomial sequences, defined as follows. Suppose and are polynomial sequences, given by
Then the umbral composition is the polynomial sequence whose th term is
(the subscript appears in , since this is the th term of that sequence, but not in , since this refers to the sequence as a whole rather than one of its terms).
Under this operation, the set of all Sheffer sequ |
https://en.wikipedia.org/wiki/Pr%C3%BCfer%20sequence | In combinatorial mathematics, the Prüfer sequence (also Prüfer code or Prüfer numbers) of a labeled tree is a unique sequence associated with the tree. The sequence for a tree on n vertices has length n − 2, and can be generated by a simple iterative algorithm. Prüfer sequences were first used by Heinz Prüfer to prove Cayley's formula in 1918.
Algorithm to convert a tree into a Prüfer sequence
One can generate a labeled tree's Prüfer sequence by iteratively removing vertices from the tree until only two vertices remain. Specifically, consider a labeled tree T with vertices {1, 2, ..., n}. At step i, remove the leaf with the smallest label and set the ith element of the Prüfer sequence to be the label of this leaf's neighbour.
The Prüfer sequence of a labeled tree is unique and has length n − 2.
Both coding and decoding can be reduced to integer radix sorting and parallelized.
Example
Consider the above algorithm run on the tree shown to the right. Initially, vertex 1 is the leaf with the smallest label, so it is removed first and 4 is put in the Prüfer sequence. Vertices 2 and 3 are removed next, so 4 is added twice more. Vertex 4 is now a leaf and has the smallest label, so it is removed and we append 5 to the sequence. We are left with only two vertices, so we stop. The tree's sequence is {4,4,4,5}.
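The removal loop just described can be sketched directly. The edge list below is an assumption reconstructed from the worked example (the figure itself is not shown): vertices 1, 2, 3 attached to 4, with 4–5 and 5–6 completing the tree. A simple O(n²) version suffices for illustration.

```python
# Encode a labeled tree as its Pruefer sequence: repeatedly delete the
# leaf with the smallest label and record that leaf's neighbour.

def prufer_sequence(n, edges):
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seq = []
    for _ in range(n - 2):                 # stop when two vertices remain
        leaf = min(v for v in adj if len(adj[v]) == 1)
        (neighbour,) = adj.pop(leaf)       # a leaf has exactly one neighbour
        adj[neighbour].discard(leaf)
        seq.append(neighbour)
    return seq

print(prufer_sequence(6, [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)]))
# [4, 4, 4, 5]
```

This reproduces the sequence {4,4,4,5} from the worked example: leaves 1, 2, 3 each record neighbour 4, then vertex 4 becomes a leaf and records 5.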
Algorithm to convert a Prüfer sequence into a tree
Let {a[1], a[2], ..., a[n]} be a Prüfer sequence:
The tree will have n+2 nodes, numbered from 1 to n+2.
For each node set its degree to the number of times it appears in the sequence plus 1.
For instance, in pseudo-code:
Convert-Prüfer-to-Tree(a)
1 n ← length[a]
2 T ← a graph with n + 2 isolated nodes, numbered 1 to n + 2
3 degree ← an array of integers
4 for each node i in T do
5 degree[i] ← 1
6 for each value i in a do
7 degree[i] ← degree[i] + 1
Next, for each number in the sequence a[i], find the first (lowest-numbered) node, j, with degree equal to 1, add the edge (j, a[ |
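A runnable completion of the decoding pseudocode above, under the assumption that the remaining steps follow the standard algorithm: for each sequence element, attach the lowest-numbered node of degree 1, then join the last two degree-1 nodes.

```python
# Decode a Pruefer sequence back into a tree's edge list.
# Node labels run from 1 to n + 2, as in the pseudocode above.

def prufer_to_tree(a):
    n = len(a)
    degree = {v: 1 for v in range(1, n + 3)}   # n + 2 nodes, degree 1 each
    for v in a:
        degree[v] += 1                          # plus one per appearance
    edges = []
    for v in a:
        j = min(u for u in degree if degree[u] == 1)
        edges.append((j, v))
        degree[j] -= 1                          # j is now used up
        degree[v] -= 1
    # exactly two nodes of degree 1 remain; join them
    u, w = [v for v in degree if degree[v] == 1]
    edges.append((u, w))
    return edges

print(prufer_to_tree([4, 4, 4, 5]))
# [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)]
```

Run on the sequence {4,4,4,5} from the example, this recovers the original six-vertex tree, illustrating that the encoding is a bijection.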
https://en.wikipedia.org/wiki/Palm%20kernel%20oil | Palm kernel oil is an edible plant oil derived from the kernel of the oil palm tree Elaeis guineensis. It is related to two other edible oils: palm oil, extracted from the fruit pulp of the oil palm, and coconut oil, extracted from the kernel of the coconut.
Palm kernel oil, palm oil, and coconut oil are three of the few highly saturated vegetable fats; these oils give their name to the 16-carbon saturated fatty acid palmitic acid that they contain.
Palm kernel oil, which is semi-solid at room temperature, is more saturated than palm oil and comparable to coconut oil.
History
Oil from the African oil palm Elaeis guineensis has long been recognized in West African and Central African countries. European merchants trading with West Africa occasionally purchased palm oil for use in Europe, but palm kernel oil remained rare outside West Africa.
The USDA has published historical production figures for palm kernel oil for years beginning October 1 and ending September 30:
Research institutions
In the 1960s, research and development (R&D) in oil palm breeding began to expand after Malaysia's Department of Agriculture established an exchange program with West African economies and four private plantations formed the Oil Palm Genetics Laboratory. The Malaysian government also established Kolej Serdang, which became the Universiti Pertanian Malaysia (UPM) in the 1970s to train agricultural and agroindustrial engineers and agribusiness graduates to conduct research in the field.
In 1979 with support from the Malaysian Agricultural Research and Development Institute (MARDI) and UPM, the government set up the Palm Oil Research Institute of Malaysia (Porim), a public-and-private-coordinated institution. B. C. Shekhar was appointed founder and chairman. Porim's scientists work in oil palm tree breeding, palm oil nutrition and potential oleochemical use. Porim was renamed Malaysian Palm Oil Board in 2000.
Nutrition
Palm kernel oil, similar to coconut oil, is high in saturate |
https://en.wikipedia.org/wiki/Galvanoluminescence | Galvanoluminescence is the emission of light produced by the passage of an electric current through an appropriate electrolyte in which an electrode, made of certain metals such as aluminium or tantalum, has been immersed. An example is the electrolysis of sodium bromide (NaBr).
https://en.wikipedia.org/wiki/Henk%20Barendregt | Hendrik Pieter (Henk) Barendregt (born 18 December 1947, Amsterdam) is a Dutch logician, known for his work in lambda calculus and type theory.
Life and work
Barendregt studied mathematical logic at Utrecht University, obtaining his master's degree in 1968 and his PhD in 1971, both cum laude, under Dirk van Dalen and Georg Kreisel. After a postdoctoral position at Stanford University, he taught at Utrecht University.
Since 1986, Barendregt has taught at Radboud University Nijmegen, where he now holds the Chair of Foundations of Mathematics and Computer Science. His research group works on Constructive Interactive Mathematics. He is also Adjunct Professor at Carnegie Mellon University, Pittsburgh, USA. He has been a visiting scholar at Darmstadt, ETH Zürich, Siena, and Kyoto.
Barendregt was elected a member of Academia Europaea in 1992. In 1997 Barendregt was elected member of the Royal Netherlands Academy of Arts and Sciences. On 6 February 2003 Barendregt was awarded the Spinozapremie for 2002, the highest scientific award in the Netherlands. In 2002 he was knighted in the Orde van de Nederlandse Leeuw.
Barendregt received an honorary doctorate from Heriot-Watt University in 2015.
https://en.wikipedia.org/wiki/Botanical%20name | A botanical name is a formal scientific name conforming to the International Code of Nomenclature for algae, fungi, and plants (ICN) and, if it concerns a plant cultigen, the additional cultivar or Group epithets must conform to the International Code of Nomenclature for Cultivated Plants (ICNCP). The code of nomenclature covers "all organisms traditionally treated as algae, fungi, or plants, whether fossil or non-fossil, including blue-green algae (Cyanobacteria), chytrids, oomycetes, slime moulds and photosynthetic protists with their taxonomically related non-photosynthetic groups (but excluding Microsporidia)."
The purpose of a formal name is to have a single name that is accepted and used worldwide for a particular plant or plant group. For example, the botanical name Bellis perennis denotes a plant species which is native to most of the countries of Europe and the Middle East, where it has accumulated various names in many languages. Later, the plant was introduced worldwide, bringing it into contact with more languages. English names for this plant species include: daisy, English daisy, and lawn daisy. The cultivar Bellis perennis 'Aucubifolia' is a golden-variegated horticultural selection of this species.
Type specimens and circumscription
The botanical name itself is fixed by a type, which is a particular specimen (or in some cases a group of specimens) of an organism to which the scientific name is formally attached. In other words, a type is an example that serves to anchor or centralize the defining features of that particular taxon.
The usefulness of botanical names is limited by the fact that taxonomic groups are not fixed in size; a taxon may have a varying circumscription, depending on the taxonomic system, thus, the group that a particular botanical name refers to can be quite small according to some people and quite big according to others. For example, the traditional view of the family Malvaceae has been expanded in some modern approaches to |
https://en.wikipedia.org/wiki/Tibor%20Gallai | Tibor Gallai (born Tibor Grünwald, 15 July 1912 – 2 January 1992) was a Hungarian mathematician. He worked in combinatorics, especially in graph theory, and was a lifelong friend and collaborator of Paul Erdős. He was a student of Dénes Kőnig and an advisor of László Lovász. He was a corresponding member of the Hungarian Academy of Sciences (1991).
His main results
The Edmonds–Gallai decomposition theorem, which was proved independently by Gallai and Jack Edmonds, describes finite graphs from the point of view of matchings. Gallai also proved, with Milgram, Dilworth's theorem in 1947, but as they hesitated to publish the result, Dilworth independently discovered and published it.
Gallai was the first to prove the higher-dimensional version of van der Waerden's theorem.
With Paul Erdős he gave a necessary and sufficient condition for a sequence to be the degree sequence of a graph, known as the Erdős–Gallai theorem.
See also
Gallai–Hasse–Roy–Vitaver theorem
Sylvester–Gallai theorem
Gallai–Edmonds decomposition |
https://en.wikipedia.org/wiki/Ambient%20device | Ambient devices are a type of consumer electronics, characterized by their ability to be perceived at-a-glance, also known as "glanceable". Ambient devices use pre-attentive processing to display information and are aimed at minimizing mental effort. Associated fields include ubiquitous computing and calm technology. The concept is closely related to the Internet of Things.
The New York Times Magazine announced ambient devices as one of its Ideas of the Year in 2002. The award recognized a start-up company, Ambient Devices, whose first product, Ambient Orb, was a frosted-glass ball lamp, which maps information to a linear color spectrum and displays the trend in the data. Other products in the genre include the 2008 Chumby and the 2012 52-LED device MooresCloud (a reference to Moore's Law) from Australia.
Research on ambient devices began at Xerox PARC, with a paper co-written by Mark Weiser and John Seely Brown, entitled Calm Computing.
Purpose
The purpose of ambient devices is to enable immediate and effortless access to information. The original developers of the idea state that an ambient device is designed to provide support to people carrying out everyday activities. Ambient devices decrease the effort needed to process incoming data, thus rendering individuals more productive.
The key issue lies with taking Internet-based content (e.g. traffic congestion, weather condition, stock market quotes) and mapping it into a single, usually one-dimensional spectrum (e.g. angle, colour). According to Rose, this presents data to an end user seamlessly, with an insignificant amount of cognitive load.
History
The concept of ambient devices can be traced back to the early 2000s, when preliminary research was carried out at Xerox PARC, according to the company's official website. The MIT Media Lab website lists the venture as founded by David L. Rose, Ben Resner, Nabeel Hyatt and Pritesh Gandhi as a lab spin-off.
Examples
Ambient Orb was introduced by Ambient Devices i |
https://en.wikipedia.org/wiki/Correlation%20function%20%28statistical%20mechanics%29 | In statistical mechanics, the correlation function is a measure of the order in a system, as characterized by a mathematical correlation function. Correlation functions describe how microscopic variables, such as spin and density, at different positions are related. More specifically, correlation functions quantify how microscopic variables co-vary with one another on average across space and time. A classic example of such spatial correlations is in ferro- and antiferromagnetic materials, where the spins prefer to align parallel and antiparallel with their nearest neighbors, respectively. The spatial correlation between spins in such materials is shown in the figure to the right.
Definitions
The most common definition of a correlation function is the canonical ensemble (thermal) average of the scalar product of two random variables, $s_1$ and $s_2$, at positions $R$ and $R+r$ and times $t$ and $t+\tau$:
$C(r,\tau) = \langle \mathbf{s}_1(R,t) \cdot \mathbf{s}_2(R+r,t+\tau) \rangle$
Here the brackets $\langle \cdots \rangle$ indicate the above-mentioned thermal average. It is a matter of convention whether one subtracts the uncorrelated average product of $s_1$ and $s_2$, $\langle \mathbf{s}_1(R,t) \rangle \langle \mathbf{s}_2(R+r,t+\tau) \rangle$, from the correlated product, $\langle \mathbf{s}_1(R,t) \cdot \mathbf{s}_2(R+r,t+\tau) \rangle$, with the convention differing among fields. The most common uses of correlation functions are when $s_1$ and $s_2$ describe the same variable, such as a spin-spin correlation function, or a particle position-position correlation function in an elemental liquid or a solid (often called a radial distribution function or a pair correlation function). Correlation functions between the same random variable are autocorrelation functions. However, in statistical mechanics, not all correlation functions are autocorrelation functions. For example, in multicomponent condensed phases, the pair correlation function between different elements is often of interest. Such mixed-element pair correlation functions are an example of cross-correlation functions, as the random variables $s_1$ and $s_2$ represent the average variations in density as a function of position for two distinct elements.
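As a toy illustration, an equal-time spin-spin correlation for a one-dimensional configuration of ±1 spins can be estimated by averaging over positions. The configuration, the periodic boundary condition, and the choice to subtract the uncorrelated product $\langle s \rangle^2$ are all assumptions made for this example.

```python
# Estimate C(r) = <s(i) s(i+r)> - <s>^2 for a small 1D configuration
# of +/-1 spins, averaging over sites with periodic boundaries.

def spin_correlation(spins, r):
    n = len(spins)
    mean = sum(spins) / n
    pair = sum(spins[i] * spins[(i + r) % n] for i in range(n)) / n
    return pair - mean * mean

config = [1, 1, 1, -1, -1, -1, 1, 1]   # a mostly-aligned toy configuration
print(spin_correlation(config, 0))      # 0.9375 (variance: 1 - <s>^2)
print(spin_correlation(config, 1))      # 0.4375
```

As expected for a locally aligned configuration, the correlation is largest at zero separation and decays as the separation grows.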
Equilibrium equal-time (spatial) correlation f |
https://en.wikipedia.org/wiki/Berezinskii%E2%80%93Kosterlitz%E2%80%93Thouless%20transition | The Berezinskii–Kosterlitz–Thouless (BKT) transition is a phase transition of the two-dimensional (2-D) XY model in statistical physics. It is a transition from bound vortex-antivortex pairs at low temperatures to unpaired vortices and anti-vortices at some critical temperature. The transition is named for condensed matter physicists Vadim Berezinskii, John M. Kosterlitz and David J. Thouless. BKT transitions can be found in several 2-D systems in condensed matter physics that are approximated by the XY model, including Josephson junction arrays and thin disordered superconducting granular films. More recently, the term has been applied by the 2-D superconductor insulator transition community to the pinning of Cooper pairs in the insulating regime, due to similarities with the original vortex BKT transition.
Work on the transition led to the 2016 Nobel Prize in Physics being awarded to Thouless and Kosterlitz; Berezinskii died in 1980.
XY model
The XY model is a two-dimensional vector spin model that possesses U(1) or circular symmetry. This system is not expected to possess a normal second-order phase transition. This is because the expected ordered phase of the system is destroyed by transverse fluctuations, i.e. the Nambu-Goldstone modes associated with this broken continuous symmetry, which logarithmically diverge with system size.
This is a specific case of what is called the Mermin–Wagner theorem in spin systems.
Rigorously, the transition is not completely understood, but the existence of the two phases has been proved.
Disordered phases with different correlations
In the XY model in two dimensions, a second-order phase transition is not seen. However, one finds a low-temperature quasi-ordered phase with a correlation function (see statistical mechanics) that decreases with the distance like a power, which depends on the temperature. The transition from the high-temperature disordered phase with the exponential correlation to this low-temperature quasi-or |
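The qualitative difference between the two phases, power-law versus exponential decay of correlations, can be illustrated numerically. The exponent and correlation length below are assumed values chosen for illustration, not derived from the XY model.

```python
import math

# Assumed illustrative parameters: eta is the power-law exponent of the
# low-temperature quasi-ordered phase, xi the correlation length of the
# high-temperature disordered phase (neither value is derived here).
eta, xi = 0.25, 5.0

for r in (10, 100, 1000):
    power_law = r ** (-eta)          # quasi-long-range order: slow algebraic decay
    exponential = math.exp(-r / xi)  # disordered phase: fast exponential decay
    print(r, power_law > exponential)  # True at every distance shown
```

However slowly the exponential's length scale grows, at large enough distance any power law dominates it, which is why the two behaviours define genuinely distinct phases.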
https://en.wikipedia.org/wiki/Niemeier%20lattice | In mathematics, a Niemeier lattice is one of the 24
positive definite even unimodular lattices of rank 24,
which were classified by Hans-Volker Niemeier in 1973. Boris Venkov later gave a simplified proof of the classification. In the 1970s, Witt mentioned in a sentence that he had found more than 10 such lattices in the 1940s, but gave no further details. One example of a Niemeier lattice is the Leech lattice, found in 1967.
Classification
Niemeier lattices are usually labelled by the Dynkin diagram of their
root systems. These Dynkin diagrams have rank either 0 or 24, and all of their components have the same Coxeter number. (The Coxeter number, at least in these cases, is
the number of roots divided by the dimension.) There are exactly 24 Dynkin diagrams with these properties, and there turns out to be a unique Niemeier
lattice for each of these Dynkin diagrams.
The complete list of Niemeier lattices is given in the following table.
In the table,
G0 is the order of the group generated by reflections
G1 is the order of the group of automorphisms fixing all components of the Dynkin diagram
G2 is the order of the group of automorphisms of permutations of components of the Dynkin diagram
G∞ is the index of the root lattice in the Niemeier lattice, in other words, the order of the "glue code". It is the square root of the discriminant of the root lattice.
G0×G1×G2 is the order of the automorphism group of the lattice
G∞×G1×G2 is the order of the automorphism group of the corresponding deep hole.
The neighborhood graph of the Niemeier lattices
If L is an odd unimodular lattice of dimension 8n and M its sublattice of even vectors, then M is contained in exactly 3 unimodular lattices, one of which is L and the other two of which are even. (If L has a norm 1 vector then the two even lattices are isomorphic.) The Kneser neighborhood graph in 8n dimensions has a point for each even lattice, and a line joining two points for each odd 8n dimensional lattice with no norm 1 vectors, where the vertices of each line are t |
https://en.wikipedia.org/wiki/Baghdad%20Tower | Baghdad Tower (Al-Ma'mun) (), previously called International Saddam Tower (), is a TV tower in Baghdad, Iraq. The tower opened in 1994 and replaced a communications tower destroyed in the Gulf War. A revolving restaurant and observation deck are located on the top floor. After the 2003 invasion of Iraq, the tower was occupied by American soldiers and was renamed as Baghdad Tower.
History
The tower's construction began in 1991, adjoining the al-Ma'mun Telecom Exchange. It was built to a height of 204 meters, with a revolving restaurant at the top and the words "Allah is the Greatest" above the restaurant. The center of the tower consists of seven floors in a modern architectural style and is located on the land of the old pedal. The tower became the main international gateway for receiving and sending all types of communications in Baghdad. Located in the Yarmouk neighborhood of western Baghdad, it was a major tourist site.
On March 27, 2003, a week into the US-led invasion of the country, 4,700-pound bombs destroyed the al-Ma'mun Telecom Exchange, damaging the tower, which was then abandoned. In 2007, the tower was to be the centerpiece of a short-lived urban renewal project by the Ministry of Communications, which ended three years later. During the project, a U.S. State Department advisor noted that it was the "only discernible sign of major building construction in Baghdad." For a while, the tower was once again abandoned as funds for ministry investments became limited due to the rise of ISIS in northern Iraq. The tower was refurbished in 2016, and since then many projects and announcements about reopening it have come up, but they have generally been delayed or halted for no clear reason. Interest in the tower was renewed in 2018, when even the Iraqi President, Barham Salih, became involved. As of 2020, the tower remained closed despite public outcry.
https://en.wikipedia.org/wiki/Papain | Papain, also known as papaya proteinase I, is a cysteine protease () enzyme present in papaya (Carica papaya) and mountain papaya (Vasconcellea cundinamarcensis). It is the namesake member of the papain-like protease family.
It has wide ranging commercial applications in the leather, cosmetic, textiles, detergents, food and pharmaceutical industries. In the food industry, papain is used as an active ingredient in many commercial meat tenderizers.
Papain family
Papain belongs to a family of related proteins, known as the papain-like protease family, with a wide variety of activities, including endopeptidases, aminopeptidases, dipeptidyl peptidases and enzymes with both exo- and endopeptidase activity. Members of the papain family are widespread, found in baculoviruses, eubacteria, yeast, and practically all protozoa, plants and mammals. The proteins are typically lysosomal or secreted, and proteolytic cleavage of the propeptide is required for enzyme activation, although bleomycin hydrolase is cytosolic in fungi and mammals. Papain-like cysteine proteinases are essentially synthesised as inactive proenzymes (zymogens) with N-terminal propeptide regions. The activation process of these enzymes includes the removal of propeptide regions, which serve a variety of functions in vivo and in vitro. The pro-region is required for the proper folding of the newly synthesised enzyme, the inactivation of the peptidase domain and stabilisation of the enzyme against denaturing at neutral to alkaline pH conditions. Amino acid residues within the pro-region mediate their membrane association, and play a role in the transport of the proenzyme to lysosomes. Among the most notable features of propeptides is their ability to inhibit the activity of their cognate enzymes and that certain propeptides exhibit high selectivity for inhibition of the peptidases from which they originate.
Structure
The papain precursor protein contains 345 amino acid residues, and consists of a signal seq |
https://en.wikipedia.org/wiki/Isotopic%20labeling | Isotopic labeling (or isotopic labelling) is a technique used to track the passage of an isotope (an atom with a detectable variation in neutron count) through a reaction, metabolic pathway, or cell. The reactant is 'labeled' by replacing specific atoms by their isotope. The reactant is then allowed to undergo the reaction. The position of the isotopes in the products is measured to determine the sequence the isotopic atom followed in the reaction or the cell's metabolic pathway. The nuclides used in isotopic labeling may be stable nuclides or radionuclides. In the latter case, the labeling is called radiolabeling.
In isotopic labeling, there are multiple ways to detect the presence of labeling isotopes; through their mass, vibrational mode, or radioactive decay. Mass spectrometry detects the difference in an isotope's mass, while infrared spectroscopy detects the difference in the isotope's vibrational modes. Nuclear magnetic resonance detects atoms with different gyromagnetic ratios. The radioactive decay can be detected through an ionization chamber or autoradiographs of gels.
An example of the use of isotopic labeling is the study of phenol (C6H5OH) in water by replacing common hydrogen (protium) with deuterium (deuterium labeling). Upon adding phenol to deuterated water (water containing D2O in addition to the usual H2O), the substitution of deuterium for the hydrogen is observed in phenol's hydroxyl group (resulting in C6H5OD), indicating that phenol readily undergoes hydrogen-exchange reactions with water. Only the hydroxyl group is affected, indicating that the other 5 hydrogen atoms do not participate in the exchange reactions.
Isotopic tracer
An isotopic tracer (also "isotopic marker" or "isotopic label") is used in chemistry and biochemistry to help understand chemical reactions and interactions. In this technique, one or more atoms of the molecule of interest are replaced by atoms of the same chemical element but of a different isotope
https://en.wikipedia.org/wiki/Photon%20noise | Photon noise is the randomness in signal associated with photons arriving at a detector. For a simple black body emitting on an absorber, the noise-equivalent power is given by
where h is the Planck constant, ν is the central frequency, Δν is the bandwidth, n is the occupation number, and η is the optical efficiency.
The first term is essentially shot noise whereas the second term is related to the bosonic character of photons, variously known as "Bose noise" or "wave noise". At low occupation number, such as in the visible spectrum, the shot noise term dominates. At high occupation number, however, typical of the radio spectrum, the Bose term dominates.
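Which term dominates can be checked from the Bose–Einstein occupation number of a thermal mode, n = 1/(exp(hν/kT) − 1). The sketch below uses an assumed 300 K thermal source and shows n ≪ 1 at optical frequencies (shot-noise regime) and n ≫ 1 at radio frequencies (wave-noise regime).

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

def occupation(nu_hz, temp_k):
    """Bose-Einstein occupation number of a thermal photon mode."""
    return 1.0 / math.expm1(h * nu_hz / (k * temp_k))

T = 300.0                          # an assumed 300 K thermal source
n_visible = occupation(5.0e14, T)  # green light, ~5e14 Hz
n_radio = occupation(1.0e9, T)     # 1 GHz radio

print(n_visible < 1e-30)  # True: shot-noise (first) term dominates
print(n_radio > 1000)     # True: Bose/wave-noise (second) term dominates
```

Using `math.expm1` rather than `exp(x) - 1` keeps the radio-frequency case accurate, since there hν/kT is tiny and the subtraction would otherwise lose precision.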
See also
Hanbury Brown and Twiss effect
Phonon noise |
https://en.wikipedia.org/wiki/Genetic%20anthropomorphism | In evolutionary biology, genetic anthropomorphism refers to "thinking like a gene". The central question is "if I were a gene, what would I do in order to reproduce myself". The question is an obvious fallacy since genes are incapable of thought. However, natural selection does act in such a way that those that are most successful at reproducing themselves (by following the optimum strategy) prosper. Thinking like a gene enables the results to be visualised. This is related to a philosophical tool known as the intentional stance.
The most notable genetic anthropomorphist was the British biologist, W. D. Hamilton. Hamilton's friend, Richard Dawkins, popularised the idea.
Anthropomorphism has been criticised on a number of grounds, including that it is reductionist.
https://en.wikipedia.org/wiki/Symbolic%20integration | In calculus, symbolic integration is the problem of finding a formula for the antiderivative, or indefinite integral, of a given function f(x), i.e. to find a differentiable function F(x) such that
This is also denoted
Discussion
The term symbolic is used to distinguish this problem from that of numerical integration, where the value of F is sought at a particular input or set of inputs, rather than a general formula for F.
Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain of computer science, as computers are most often used currently to tackle individual instances.
Finding the derivative of an expression is a straightforward process for which it is easy to construct an algorithm. The reverse question of finding the integral is much more difficult. Many expressions which are relatively simple do not have integrals that can be expressed in closed form. See antiderivative and nonelementary integral for more details.
A procedure called the Risch algorithm exists which is capable of determining whether the integral of an elementary function (a function built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combination using the four elementary operations) is elementary, and of returning it if it is. In its original form, the Risch algorithm was not suitable for direct implementation, and its complete implementation took a long time. It was first implemented in Reduce in the case of purely transcendental functions; the case of purely algebraic functions was solved and implemented in Reduce by James H. Davenport; the general case was solved by Manuel Bronstein, who implemented almost all of it in Axiom, though to date there is no implementation of the Risch algorithm that can deal with all of its special cases and branches.
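Modern computer algebra systems implement variants of this machinery. As an illustrative sketch using SymPy (which implements parts of the Risch approach alongside heuristics), the example below integrates an elementary function and verifies the answer by differentiation, then shows a classic nonelementary case:

```python
import sympy as sp

x = sp.symbols('x')

# An elementary integrand: the returned closed form can be verified by differentiating.
F = sp.integrate(x * sp.exp(x), x)
print(sp.simplify(sp.diff(F, x) - x * sp.exp(x)) == 0)  # True

# exp(-x**2) has no elementary antiderivative; SymPy returns an answer
# in terms of the special function erf instead.
G = sp.integrate(sp.exp(-x**2), x)
print(G.has(sp.erf))  # True
```

The second result illustrates the point made above: "relatively simple" expressions need not have antiderivatives expressible in closed elementary form.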
However, the Risch algorithm applies only to indefinite integrals, while most of th |
https://en.wikipedia.org/wiki/Spectral%20efficiency | Spectral efficiency, spectrum efficiency or bandwidth efficiency refers to the information rate that can be transmitted over a given bandwidth in a specific communication system. It is a measure of how efficiently a limited frequency spectrum is utilized by the physical layer protocol, and sometimes by the medium access control (the channel access protocol).
Link spectral efficiency
The link spectral efficiency of a digital communication system is measured in bit/s/Hz, or, less frequently but unambiguously, in (bit/s)/Hz. It is the net bit rate (useful information rate excluding error-correcting codes) or maximum throughput divided by the bandwidth in hertz of a communication channel or a data link. Alternatively, the spectral efficiency may be measured in bit/symbol, which is equivalent to bits per channel use (bpcu), implying that the net bit rate is divided by the symbol rate (modulation rate) or line code pulse rate.
Link spectral efficiency is typically used to analyze the efficiency of a digital modulation method or line code, sometimes in combination with a forward error correction (FEC) code and other physical layer overhead. In the latter case, a "bit" refers to a user data bit; FEC overhead is always excluded.
The modulation efficiency in (bit/s)/Hz is the gross bit rate (including any error-correcting code) divided by the bandwidth.
Example 1: A transmission technique using one kilohertz of bandwidth to transmit 1,000 bits per second has a modulation efficiency of 1 (bit/s)/Hz.
Example 2: A V.92 modem for the telephone network can transfer 56,000 bit/s downstream and 48,000 bit/s upstream over an analog telephone network. Due to filtering in the telephone exchange, the frequency range is limited to between 300 hertz and 3,400 hertz, corresponding to a bandwidth of 3,400 − 300 = 3,100 hertz. The spectral efficiency or modulation efficiency is 56,000/3,100 = 18.1 (bit/s)/Hz downstream, and 48,000/3,100 = 15.5 (bit/s)/Hz upstream.
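The arithmetic of the V.92 example can be reproduced directly; a minimal sketch:

```python
# Spectral (modulation) efficiency = net bit rate / bandwidth, in (bit/s)/Hz.
def spectral_efficiency(bit_rate_bps, bandwidth_hz):
    return bit_rate_bps / bandwidth_hz

bandwidth_hz = 3400 - 300   # telephone channel: 300 Hz to 3,400 Hz

print(round(spectral_efficiency(56_000, bandwidth_hz), 1))  # 18.1 downstream
print(round(spectral_efficiency(48_000, bandwidth_hz), 1))  # 15.5 upstream
```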
An upper bound for the att |
https://en.wikipedia.org/wiki/Millionth | One millionth is equal to 0.000 001, or 1 x 10−6 in scientific notation. It is the reciprocal of a million, and can be also written as . Units using this fraction can be indicated using the prefix "micro-" from Greek, meaning "small". Numbers of this quantity are expressed in terms of μ (the Greek letter mu).
"Millionth" can also mean the ordinal number that comes after the nine hundred, ninety-nine thousand, nine hundred, ninety-ninth and before the million and first.
See also
International System of Units
Micro-
International Map of the World
Order of magnitude (numbers)
Order of magnitude
Parts-per notation
Per mille |
https://en.wikipedia.org/wiki/Viterbi%20decoder | A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been
encoded using a convolutional code or trellis code.
There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm). The Viterbi algorithm is the most resource-consuming, but it performs maximum-likelihood decoding. It is most often used for decoding convolutional codes with constraint lengths k≤3, but values up to k=15 are used in practice.
Viterbi decoding was developed by Andrew J. Viterbi and published in his 1967 paper "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm".
There are both hardware (in modems) and software implementations of a Viterbi decoder.
Viterbi decoding is used in the iterative Viterbi decoding algorithm.
Hardware implementation
A hardware Viterbi decoder for basic (not punctured) code usually consists of the following major blocks:
Branch metric unit (BMU)
Path metric unit (PMU)
Traceback unit (TBU)
Branch metric unit (BMU)
A branch metric unit's function is to calculate branch metrics, which are normed distances between every possible symbol in the code alphabet and the received symbol.
There are hard decision and soft decision Viterbi decoders. A hard decision Viterbi decoder receives a simple bitstream on its input, and a Hamming distance is used as a metric. A soft decision Viterbi decoder receives a bitstream containing information about the reliability of each received symbol. For instance, in a 3-bit encoding, this reliability information can be encoded as follows:
Of course, it is not the only way to encode reliability data.
The squared Euclidean distance is used as a metric for soft decision decoders.
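A minimal sketch of the two metrics; the antipodal ±1 symbol mapping and the received samples below are assumed for illustration:

```python
def hamming_distance(a, b):
    """Hard-decision branch metric: number of differing bits."""
    return sum(x != y for x, y in zip(a, b))

def squared_euclidean(a, b):
    """Soft-decision branch metric on real-valued received samples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hard decision: compare received bits against a candidate branch label.
print(hamming_distance((1, 0), (1, 1)))  # 1

# Soft decision with an assumed antipodal mapping: bit 0 -> -1.0, bit 1 -> +1.0.
branch_labels = [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)]
received = (0.8, -0.9)  # hypothetical noisy samples for one two-bit branch

metrics = [squared_euclidean(received, label) for label in branch_labels]
best = min(range(len(metrics)), key=metrics.__getitem__)
print(best)  # 2, i.e. the branch labelled (+1, -1) = bits "10"
```

The soft metric rewards the decoder with the reliability information the hard metric throws away, which is why soft-decision Viterbi decoders achieve lower error rates at the same signal-to-noise ratio.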
Path metric unit (PMU)
A path metric unit summarizes branch metrics to get metrics for 2^(K−1) paths, where K is the constraint length of the code, one of which can eventually be chosen as optimal. Every clock cycle it makes decisions, discarding the deliberately non-optimal paths. The results of these decisions are written to the memory of a traceback unit.
The co |
https://en.wikipedia.org/wiki/Evolutionary%20arms%20race | In evolutionary biology, an evolutionary arms race is an ongoing struggle between competing sets of co-evolving genes, phenotypic and behavioral traits that develop escalating adaptations and counter-adaptations against each other, resembling the geopolitical concept of an arms race. These are often described as examples of positive feedback. The co-evolving gene sets may be in different species, as in an evolutionary arms race between a predator species and its prey (Vermeij, 1987), or a parasite and its host. Alternatively, the arms race may be between members of the same species, as in the manipulation/sales resistance model of communication (Dawkins & Krebs, 1979) or as in runaway evolution or Red Queen effects. One example of an evolutionary arms race is in sexual conflict between the sexes, often described with the term Fisherian runaway. Thierry Lodé emphasized the role of such antagonistic interactions in evolution leading to character displacements and antagonistic coevolution.
Symmetrical versus asymmetrical arms races
Arms races may be classified as either symmetrical or asymmetrical. In a symmetrical arms race, selection pressure acts on participants in the same direction. An example of this is trees growing taller as a result of competition for light, where the selective advantage for either species is increased height. An asymmetrical arms race involves contrasting selection pressures, such as the case of cheetahs and gazelles, where cheetahs evolve to be better at hunting and killing while gazelles evolve not to hunt and kill, but rather to evade capture.
Host–parasite dynamic
Selective pressure between two species can include host-parasite coevolution. This antagonistic relationship leads to the necessity for the pathogen to have the best virulent alleles to infect the organism and for the host to have the best resistant alleles to survive parasitism. As a consequence, allele frequencies vary through time depending on the size of virulent an |
https://en.wikipedia.org/wiki/Parasite%20load | Parasite load is a measure of the number and virulence of the parasites that a host organism harbours. Quantitative parasitology deals with measures to quantify parasite loads in samples of hosts and to make statistical comparisons of parasitism across host samples.
In evolutionary biology, parasite load has important implications for sexual selection and the evolution of sex, as well as openness to experience.
Infection and distribution
A single parasite species usually has an aggregated distribution across host individuals, which means that most hosts harbor few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology: the use of parametric statistics should be avoided. Log-transformation of the data before applying a parametric test, or the use of non-parametric statistics, is often recommended. However, this can give rise to further problems, so modern quantitative parasitology is based on more advanced biostatistical methods.
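The aggregated pattern ("most hosts have few parasites, a few have many") is often modeled with a negative binomial distribution with a small dispersion parameter. The parameters in the sketch below are hypothetical, chosen only to illustrate the overdispersion and skew that make parametric statistics problematic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parasite counts drawn from a negative binomial with a small
# dispersion parameter, a common model for aggregated loads (the parameters
# are illustrative only, not fitted to any data set).
counts = rng.negative_binomial(n=0.5, p=0.05, size=10_000)

mean = counts.mean()
print(counts.var() > mean)       # True: overdispersion, variance far exceeds the mean
print(np.median(counts) < mean)  # True: most hosts sit below the mean load
```

A Poisson (random, non-aggregated) distribution would instead have variance equal to its mean; the gap between the two is a standard diagnostic for aggregation.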
In vertebrates, males frequently carry higher parasite loads than females. Differences in movement patterns, habitat choice, diet, body size, and ornamentation are all thought to contribute to this sex bias observed in parasite loads. Often males have larger habitat ranges and thus are likely to encounter more parasite-dense areas than female conspecifics. Whenever sexual dimorphism is exhibited in species, the larger sex is thought to tolerate higher parasite loads.
In insects, susceptibility to parasite load has been linked to genetic variation in the insect colony. In colonies of Hymenoptera (ants, bees and wasps), colonies with high genetic variation that were exposed to parasites experienced lesser parasite loads than colonies that are more genetically similar.
Methods of quantifying
Depending on the parasitic species in question, various methods of quantification allow scientists to measure the numbers of parasites present an |
https://en.wikipedia.org/wiki/Intimate%20ion%20pair | In chemistry, the intimate ion pair concept, introduced by Saul Winstein, describes the interactions between a cation, anion and surrounding solvent molecules. In ordinary aqueous solutions of inorganic salts, an ion is completely solvated and shielded from the counterion. In less polar solvents, two ions can still be connected to some extent. In a tight, intimate, or contact ion pair, there are no solvent molecules between the two ions. When solvation increases, ionic bonding decreases and a loose or solvent-shared ion pair results. The ion pair concept explains stereochemistry in solvolysis.
The concept of intimate ion pairs is used to explain the slight tendency for inversion of stereochemistry during an SN1 reaction. It is proposed that solvent or other ions in solution may assist in the removal of a leaving group to form a carbocation which reacts in an SN1 fashion; similarly, the leaving group may associate loosely with the cationic intermediate. The association of solvent or an ion with the leaving group effectively blocks one side of the incipient carbocation, while allowing the backside to be attacked by a nucleophile. This leads to a slight excess of the product with inverted stereochemistry, whereas a purely SN1 reaction should lead to a racemic product. Intimate ion pairs are also invoked in the SNi mechanism. Here, part of the leaving group detaches and attacks from the same face, leading to retention.
See also
Ion association
Asymmetric ion-pairing catalysis |
https://en.wikipedia.org/wiki/Fermion%20doubling | In lattice field theory, fermion doubling occurs when naively putting fermionic fields on a lattice, resulting in more fermionic states than expected. For the naively discretized Dirac fermions in Euclidean dimensions, each fermionic field results in identical fermion species, referred to as different tastes of the fermion. The fermion doubling problem is intractably linked to chiral invariance by the Nielsen–Ninomiya theorem. Most strategies used to solve the problem require using modified fermions which reduce to the Dirac fermion only in the continuum limit.
Naive fermion discretization
For simplicity we will consider a four-dimensional theory of a free fermion, although the fermion doubling problem remains in arbitrary dimensions and even if interactions are included. Lattice field theory is usually carried out in Euclidean spacetime arrived at from Minkowski spacetime after a Wick rotation, where the continuum Dirac action takes the form
This is discretized by introducing a lattice with lattice spacing a and points indexed by a vector of integers. The integral becomes a sum over all lattice points, while the fermionic fields are replaced by four-component Grassmann variables at each lattice site. The derivative discretization used is the symmetric difference, involving unit vectors along each lattice direction. These steps give the naive free fermion action
This action reduces to the continuum Dirac action in the continuum limit, so it is expected to describe a single fermion. However, it instead describes sixteen identical fermions, with each fermion said to have a different taste, analogously to how particles have different flavours in particle physics. The fifteen additional fermions are often referred to as doublers. This extended particle content can be seen by analyzing the symmetries or the correlation functions of the lattice theory.
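The sixteen tastes can be traced to the symmetric-difference derivative, whose momentum-space factor sin(p·a)/a vanishes both at p = 0 and at the Brillouin-zone corner p = π/a. A small numerical check (setting the lattice spacing a = 1 is an assumed convention):

```python
import math

a = 1.0  # lattice spacing (units with a = 1 are an assumed convention)

def lattice_momentum(p):
    """Momentum factor from the symmetric-difference derivative: sin(p*a)/a."""
    return math.sin(p * a) / a

# The factor vanishes at p = 0 (the physical fermion) and again at the
# Brillouin-zone corner p = pi/a (a doubler). With each of the four momentum
# components free to sit at either zero, the naive action describes
# 2**4 = 16 tastes.
print(abs(lattice_momentum(0.0)) < 1e-12)          # True
print(abs(lattice_momentum(math.pi / a)) < 1e-12)  # True
print(2 ** 4)  # 16
```

Each momentum component independently admits the extra zero, which is why the doubler count is 2 per dimension and 2^4 = 16 in four dimensions.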
Doubling symmetry
The naive fermion action possesses a new taste-exchan |