source | text |
|---|---|
https://en.wikipedia.org/wiki/Bubble%20memory | Bubble memory is a type of non-volatile computer memory that uses a thin film of a magnetic material to hold small magnetized areas, known as bubbles or domains, each storing one bit of data. The material is arranged to form a series of parallel tracks that the bubbles can move along under the action of an external magnetic field. The bubbles are read by moving them to the edge of the material, where they can be read by a conventional magnetic pickup, and then rewritten on the far edge to keep the memory cycling through the material. In operation, bubble memories are similar to delay-line memory systems.
Bubble memory started out as a promising technology in the 1970s, offering memory density of an order similar to hard drives, but performance more comparable to core memory, while lacking any moving parts. This led many to consider it a contender for a "universal memory" that could be used for all storage needs. The introduction of dramatically faster semiconductor memory chips pushed bubble into the slow end of the scale, and equally dramatic improvements in hard-drive capacity made it uncompetitive in price terms. Bubble memory was used for some time in the 1970s and 1980s where its non-moving nature was desirable for maintenance or shock-proofing reasons. The introduction of flash storage and similar technologies rendered even this niche uncompetitive, and bubble disappeared entirely by the late 1980s.
History
Precursors
Bubble memory is largely the brainchild of a single person, Andrew Bobeck. Bobeck had worked on many kinds of magnetics-related projects through the 1960s, and two of his projects put him in a particularly good position for the development of bubble memory. The first was the development of the first magnetic-core memory system driven by a transistor-based controller, and the second was the development of twistor memory.
Twistor is essentially a version of core memory that replaces the "cores" with a piece of magnetic tape. The main advantag |
https://en.wikipedia.org/wiki/Cohl%20Furey | Cohl Furey, also known as Nichol Furey, is a Canadian mathematical physicist.
Career
Furey has a bachelor's degree in mathematics and physics from Simon Fraser University (2005), a master's degree from the University of Cambridge (2006) and a Ph.D. in theoretical physics from the University of Waterloo (2015). She was a research fellow at the University of Cambridge from 2016 to 2019 and spent a few months at the African Institute for Mathematical Sciences in Cape Town. Since 2020, she has been at the Humboldt University of Berlin on a Freigeist Fellowship from the Volkswagen Foundation.
Her main interests are division algebras, Clifford algebras, and Jordan algebras, and their relation to particle physics. Her work focuses on finding an underlying mathematical structure to the Standard Model of particle physics. She is most noted for her work on octonions.
She has worked on attempting to obtain the Standard Model of particle physics from octonionic constructions. In her 2018 paper "SU(3) × SU(2) × U(1) ( × U(1) ) as a symmetry of division algebraic ladder operators," according to Quanta Magazine, "she consolidated several findings to construct the full Standard Model symmetry group, SU(3) × SU(2) × U(1), for a single generation of particles, with the math producing the correct array of electric charges and other attributes for an electron, neutrino, three up quarks, three down quarks and their anti-particles. The math also suggests a reason why electric charge is quantized in discrete units — essentially, because whole numbers are." In 2022, together with Mia Hughes, she linked the symmetry breaking in physics to division algebras including the octonions.
Media recognition
In 2019, Wired.com listed her in their article "10 Women in Science and Tech Who Should Be Household Names".
Notable publications
C. Furey, "Three generations, two unbroken gauge symmetries, and one eight-dimensional algebra", Phys. Lett. B, 785 (2018) p. 84-89 (See addendum, arXiv version)
C. Fur |
https://en.wikipedia.org/wiki/Hereditary%20carrier | A hereditary carrier (genetic carrier or just carrier) is a person or other organism that has inherited a recessive allele for a genetic trait or mutation but usually does not display that trait or show symptoms of the disease. Carriers are, however, able to pass the allele on to their offspring, who may then express the genetic trait.
Carriers in autosomal inheritances
Autosomal dominant–recessive inheritance is possible because the individuals of most species (including all higher animals and plants) carry two alleles of most hereditary predispositions, as the chromosomes in the cell nucleus are usually present in pairs (diploid). Carriers can be female or male, since autosomes are homologous regardless of sex.
In carriers the expression of a certain characteristic is recessive. The individual has both a genetic predisposition for the dominant trait and a genetic predisposition for the recessive trait, and the dominant expression prevails in the phenotype. In an individual that is heterozygous for a certain allele, it is not externally recognisable that it also carries the recessive allele. If a carrier has a child, however, the recessive trait can appear in the phenotype: this occurs when the descendant receives the recessive allele from both parents and therefore does not possess the dominant allele that would mask the recessive trait. According to the Mendelian law of segregation, an average of 25% of the offspring of two carriers become homozygous and express the recessive trait. Carriers can either pass on normal autosomal recessive hereditary traits or an autosomal recessive hereditary disease.
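The 25% figure can be reproduced by enumerating the four equally likely allele combinations of a Punnett square. The following Java sketch is purely illustrative (the class name and allele labels are assumptions, not from the article):
// PunnettSquare.java -- illustrative sketch: enumerates the four equally
// likely offspring genotypes when both parents are carriers (genotype "Aa").
public class PunnettSquare {
    public static void main(String[] args) {
        char[] mother = {'A', 'a'};   // 'A' = dominant allele, 'a' = recessive allele
        char[] father = {'A', 'a'};
        int affected = 0, carriers = 0, nonCarriers = 0, total = 0;
        for (char m : mother) {
            for (char f : father) {
                total++;
                if (m == 'a' && f == 'a') affected++;          // aa: expresses recessive trait
                else if (m == 'a' || f == 'a') carriers++;     // Aa: carrier, dominant phenotype
                else nonCarriers++;                            // AA: non-carrier
            }
        }
        System.out.printf("affected (aa): %d/%d = %.0f%%%n", affected, total, 100.0 * affected / total);
        System.out.printf("carriers (Aa): %d/%d = %.0f%%%n", carriers, total, 100.0 * carriers / total);
        System.out.printf("non-carriers (AA): %d/%d = %.0f%%%n", nonCarriers, total, 100.0 * nonCarriers / total);
    }
}
Running it prints 25% affected, 50% carriers and 25% non-carriers, matching the Mendelian ratios above.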
Carriers in gonosomal inheritances
Gonosomal recessive genes are also passed on by carriers. The term is used in human genetics in cases of hereditary traits in which the observed trait lies on the female sex chromosome, the X chromosome. The carriers are always women. Men cannot be carriers because they only have one X chromosome. The Y chromosome is not a |
https://en.wikipedia.org/wiki/Vibrational%20bond | A vibrational bond is a chemical bond that forms between two very large atoms, like bromine, and a very small atom, like hydrogen, at very high energy states. Vibrational bonds only exist for a few milliseconds. This bond is detectable through modern analytic chemistry and is significant because it affects the rate at which other reactions can occur.
History
Vibrational bonds were mathematically predicted almost thirty years before they were experimentally observed. The original theoretical calculations were carried out by D. C. Clary and J. N. L. Connor during the early 1980s. Together they hypothesized that with very large atoms and small atoms at high energy states, the elements would stabilize and create temporary bonds for very short periods of time. The vibrational bond would be weaker than any currently known bond, such as the familiar ionic and covalent bonds.
One year after the theoretical discovery of vibrational bonds, J. Manz and his team confirmed the calculations that had previously been made, and elaborated on them by showing that vibrational bonds were most likely to occur during symmetric reactions, while stating that vibrational bonds may also be possible in asymmetric reactions. The team explained that although the vibrational bonding theories proved to be correct, they found some inconsistencies with the 'classic model': symmetric reactions show resonance, but only in certain transition states. The classic model would nevertheless still be viable for predicting vibrational bonds.
In 1989, Donald Fleming noticed that a reaction between bromine and muonium slowed down as temperature increased. This phenomenon, attributed to a "vibrational bond", would capture Fleming's attention again in 2014. In 1989, however, the technology did not exist to collect sufficient data on the reaction, and Fleming and his team moved away from the research.
Discovery
Donald Fleming and his team recently began their investigation of |
https://en.wikipedia.org/wiki/Mobility%20model | Mobility models characterize the movements of mobile users with respect to their location, velocity and direction over a period of time. These models play a vital role in the design of Mobile Ad Hoc Networks (MANETs). Simulators such as NS and QualNet are most often used to test the features of mobile ad hoc networks, and they allow users to choose among mobility models, as these models represent the movements of nodes or users. As the mobile nodes move in different directions, it becomes imperative to characterize their movements against standard models. The mobility models proposed in the literature have varying degrees of realism, ranging from random patterns to realistic patterns. These models thus contribute significantly to testing protocols for mobile ad hoc networks.
Background and terminology
The study of large and complex networks is generally only practical through experiments on a simulator rather than through analytical study. Relatively new forms of networks such as Mobile Ad Hoc Networks (MANETs) and Vehicular Ad Hoc Networks (VANETs) are characterized by nodes that are autonomous and dynamic in nature. It therefore becomes essential to capture their movements so that the corresponding simulation results are closer to reality. Mobility models are broadly classified as stochastic, detailed, hybrid, and trace-based realistic models.
The stochastic models are based on random movements and the nodes are free to move in any direction. Examples include the Random waypoint model, the Random walk model and the Random direction model.
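To make the stochastic class concrete, here is a minimal random waypoint sketch in Java (all names, parameters and the 1000 m × 1000 m area are illustrative assumptions, not taken from any particular simulator): each node repeatedly picks a uniformly random destination, travels toward it at a uniformly random speed, and then pauses before choosing the next waypoint.
import java.util.Random;

// RandomWaypoint.java -- illustrative sketch of the random waypoint model
// for a single node in a rectangular simulation area.
public class RandomWaypoint {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double width = 1000, height = 1000;      // simulation area (m), assumed values
        double vMin = 1, vMax = 20;              // speed range (m/s), assumed values
        double pause = 5;                        // pause time at each waypoint (s)
        double x = 500, y = 500, t = 0;          // initial position and simulation clock
        for (int leg = 0; leg < 5; leg++) {
            double destX = rng.nextDouble() * width;   // uniformly random destination
            double destY = rng.nextDouble() * height;
            double speed = vMin + rng.nextDouble() * (vMax - vMin);
            double travel = Math.hypot(destX - x, destY - y) / speed;
            t += travel + pause;                 // move to the waypoint, then pause
            x = destX; y = destY;
            System.out.printf("t=%.1fs node at (%.1f, %.1f)%n", t, x, y);
        }
    }
}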
The detailed models are tailored for specific scenarios. These could include meeting, library and classroom scenarios. An example is Street random waypoint (STRAW).
The hybrid models try to strike a balance between realism (detailed models) and freedom of movement (stochastic models). Examples include the Reference point group mobility model, the Manhattan mobility model and the Freeway mobility model.
The Real trace models contai |
https://en.wikipedia.org/wiki/Hamartia%20%28medical%20term%29 | A hamartia is a focal malformation consisting of disorganized arrangement of tissue types that are normally present in the anatomical area. A hamartia is not considered to be a tumor, and is distinct from a hamartoma, which describes a benign neoplasm characterized by tissue misarrangement similar to a hamartia (i.e., tissue types that are typical of the area but arranged in an atypical manner). |
https://en.wikipedia.org/wiki/Riccati%20equation | In mathematics, a Riccati equation in the narrowest sense is any first-order ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form
$$y'(x) = q_0(x) + q_1(x)\,y(x) + q_2(x)\,y^2(x),$$
where $q_0(x) \neq 0$ and $q_2(x) \neq 0$. If $q_0(x) = 0$ the equation reduces to a Bernoulli equation, while if $q_2(x) = 0$ the equation becomes a first order linear ordinary differential equation.
The equation is named after Jacopo Riccati (1676–1754).
More generally, the term Riccati equation is used to refer to matrix equations with an analogous quadratic term, which occur in both continuous-time and discrete-time linear-quadratic-Gaussian control. The steady-state (non-dynamic) version of these is referred to as the algebraic Riccati equation.
Conversion to a second order linear equation
The non-linear Riccati equation can always be converted to a second order linear ordinary differential equation (ODE):
If
$$y' = q_0(x) + q_1(x)\,y + q_2(x)\,y^2,$$
then, wherever $q_2$ is non-zero and differentiable, $v = y q_2$ satisfies a Riccati equation of the form
$$v' = v^2 + R(x)\,v + S(x),$$
where $S = q_2 q_0$ and $R = q_1 + \frac{q_2'}{q_2}$, because
$$v' = (y q_2)' = y' q_2 + y q_2' = (q_0 + q_1 y + q_2 y^2)\,q_2 + v\,\frac{q_2'}{q_2} = q_0 q_2 + \left(q_1 + \frac{q_2'}{q_2}\right) v + v^2.$$
Substituting $v = -u'/u$, it follows that $u$ satisfies the linear 2nd order ODE
$$u'' - R(x)\,u' + S(x)\,u = 0,$$
since
$$v' = -\left(\frac{u'}{u}\right)' = -\frac{u''}{u} + \left(\frac{u'}{u}\right)^2 = -\frac{u''}{u} + v^2,$$
so that
$$\frac{u''}{u} = v^2 - v' = -S - Rv = -S + R\,\frac{u'}{u},$$
and hence
$$u'' - R u' + S u = 0.$$
A solution of this equation will lead to a solution $y = -u'/(q_2 u)$ of the original Riccati equation.
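A worked example may help (illustrative, not from the article): for $y' = 1 + y^2$ one has $q_0 = 1$, $q_1 = 0$, $q_2 = 1$, hence $R = 0$ and $S = 1$, so the substitution gives
$$u'' + u = 0 \;\Longrightarrow\; u = \cos x \;\Longrightarrow\; y = -\frac{u'}{q_2 u} = \frac{\sin x}{\cos x} = \tan x,$$
which indeed satisfies $y' = \sec^2 x = 1 + \tan^2 x$.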
Application to the Schwarzian equation
An important application of the Riccati equation is to the 3rd order Schwarzian differential equation
$$S(w) := \left(\frac{w''}{w'}\right)' - \frac{1}{2}\left(\frac{w''}{w'}\right)^2 = f,$$
which occurs in the theory of conformal mapping and univalent functions. In this case the ODEs are in the complex domain and differentiation is with respect to a complex variable. (The Schwarzian derivative $S(w)$ has the remarkable property that it is invariant under Möbius transformations, i.e. $S\left(\frac{aw+b}{cw+d}\right) = S(w)$ whenever $ad - bc$ is non-zero.) The function $y = w''/w'$ satisfies the Riccati equation
$$y' = f + \frac{1}{2} y^2.$$
By the above $y = -2u'/u$ where $u$ is a solution of the linear ODE
$$u'' + \frac{1}{2} f u = 0.$$
Since $w''/w' = -2u'/u$, integration gives $w' = C/u^2$ for some constant $C$. On the other hand any other independent solution $U$ of the linear ODE has constant non-zero Wronskian $U'u - Uu'$, which can be taken to be $C$ after scaling. Thus
$$w' = \frac{C}{u^2} = \frac{U'u - Uu'}{u^2} = \left(\frac{U}{u}\right)',$$
so that the Schwarzian equation has solution $w = U/u$.
Obtaining solutions |
https://en.wikipedia.org/wiki/Pseudoholomorphic%20curve | In mathematics, specifically in topology and geometry, a pseudoholomorphic curve (or J-holomorphic curve) is a smooth map from a Riemann surface into an almost complex manifold that satisfies the Cauchy–Riemann equation. Introduced in 1985 by Mikhail Gromov, pseudoholomorphic curves have since revolutionized the study of symplectic manifolds. In particular, they lead to the Gromov–Witten invariants and Floer homology, and play a prominent role in string theory.
Definition
Let $X$ be an almost complex manifold with almost complex structure $J$. Let $C$ be a smooth Riemann surface (also called a complex curve) with complex structure $j$. A pseudoholomorphic curve in $X$ is a map $f : C \to X$ that satisfies the Cauchy–Riemann equation
$$\bar\partial_{j,J} f := \frac{1}{2}\left(df + J \circ df \circ j\right) = 0.$$
Since $J^2 = -1$, this condition is equivalent to
$$J \circ df = df \circ j,$$
which simply means that the differential $df$ is complex-linear, that is, $J$ maps each tangent space $df_x(T_x C)$ to itself. For technical reasons, it is often preferable to introduce some sort of inhomogeneous term $\nu$ and to study maps satisfying the perturbed Cauchy–Riemann equation
$$\bar\partial_{j,J} f = \nu.$$
A pseudoholomorphic curve satisfying this equation can be called, more specifically, a $(j, J, \nu)$-holomorphic curve. The perturbation $\nu$ is sometimes assumed to be generated by a Hamiltonian (particularly in Floer theory), but in general it need not be.
A pseudoholomorphic curve is, by its definition, always parametrized. In applications one is often truly interested in unparametrized curves, meaning embedded (or immersed) two-dimensional submanifolds of $X$, so one mods out by reparametrizations of the domain that preserve the relevant structure. In the case of Gromov–Witten invariants, for example, we consider only closed domains $C$ of fixed genus $g$ and we introduce $n$ marked points (or punctures) on $C$. As soon as the punctured Euler characteristic $2 - 2g - n$ is negative, there are only finitely many holomorphic reparametrizations of $C$ that preserve the marked points. The domain curve $C$ is an element of the Deligne–Mumford moduli space of curves.
Analogy with the classical Cauchy–Riemann equations
The |
https://en.wikipedia.org/wiki/Valery%20Glivenko | Valery Ivanovich Glivenko (2 January 1897 (Gregorian calendar) / 21 December 1896 (Julian calendar) in Kyiv – 15 February 1940 in Moscow) was a Soviet mathematician. He worked in foundations of mathematics, real analysis, probability theory, and mathematical statistics. He taught at Moscow Industrial Pedagogical Institute until his death at age 43. Most of Glivenko's work was published in French.
See also
Glivenko's double-negation translation
Glivenko's theorem (probability theory)
Glivenko–Cantelli theorem
Glivenko–Stone theorem
Notes
Works
External links
Photograph
Mathematical logicians
1896 births
1940 deaths
Soviet logicians
Soviet mathematicians
Ukrainian mathematicians
20th-century Russian mathematicians
Probability theorists
Mathematical analysts
Moscow State University alumni
Mathematical statisticians |
https://en.wikipedia.org/wiki/Parotid%20fascia | The parotid fascia (or parotid capsule) is a tough fascia enclosing the parotid gland. It has a superficial layer and a deep layer.
Current scientific knowledge regards the superficial layer to be continuous with the fascia of the platysma, and the deep layer to be derived from the deep cervical fascia.
Previously, both layers were thought to derive from the deep cervical fascia, which was believed to form the parotid fascia by extending superiorly and splitting into the superficial and deep layers. The superficial layer was traditionally described as attaching superiorly to the zygomatic process of the temporal bone, the cartilaginous portion of the external acoustic meatus, and the mastoid process of the temporal bone; the deep layer was described as attaching superiorly to the mandible, and the tympanic plate, styloid process and mastoid process of the temporal bone.
Anatomy
The parotid fascia decreases in thickness from anterior to posterior; it is thick and fibrous anteriorly, while being thin, translucent and membranous posteriorly.
The parotid fascia extends anteriorly over the masseteric fascia as a separate layer; the two fasciae are separated by a cellular layer enclosing the branches of the facial nerve (CN VII) and the parotid duct.
The fascia gives off many septa that pass among the lobules of glandular tissue.
Histology
The parotid fascia is histologically atypical in that it contains muscle fibres parallel to those of the platysma, particularly in its inferior portion.
Innervation
The great auricular nerve provides sensory innervation to the parotid fascia.
Relations
The external carotid artery pierces the deep lamina of the parotid fascia to enter the parotid gland and divides into its terminal branches within the substance of the gland.
The risorius muscle arises from the parotid fascia. |
https://en.wikipedia.org/wiki/List%20of%20astrometric%20solvers | Programs capable of astrometric solving:
The solvers Elbrus and Charon are obsolete and no longer developed.
External links
Astrometry.net webpage
All sky solver webpage
ANSVR webpage
Astrometry.net API lite
Astrotortilla webpage
CloudMakers webpage
Stellar Solver
ASTAP webpage
Regim webpage
Siril webpage
SIPS webpage
Tetra3 webpage
XParallax viu webpage
Astrometrica webpage
Observatory webpage
PinPoint webpage
PixInsight webpage
PlaneWave Instruments webpage
TheSky Astronomy Software webpage
Logiciel PRISM
PlateSolve3 info
Astronomical imaging
Astronomy software
astrometric solvers |
https://en.wikipedia.org/wiki/4Pi%20microscope | A 4Pi microscope is a laser scanning fluorescence microscope with an improved axial resolution. With it the typical range of the axial resolution of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy.
Working principle
The improvement in resolution is achieved by using two opposing objective lenses, both of which are focused on the same geometrical location. Also, the difference in optical path length through each of the two objective lenses is carefully aligned to be minimal. By this method, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides and the reflected or emitted light can also be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle that is used for illumination and detection is increased and approaches its maximum. In this case the sample is illuminated and detected from all sides simultaneously.
The operation mode of a 4Pi microscope is shown in the figure. The laser light is divided by a beam splitter and directed by mirrors towards the two opposing objective lenses. At the common focal point superposition of both focused light beams occurs. Excited molecules at this position emit fluorescence light, which is collected by both objective lenses, combined by the same beam splitter and deflected by a dichroic mirror onto a detector. There, superposition of both emitted light paths can take place again.
In the ideal case each objective lens can collect light from a solid angle of $2\pi$. With two objective lenses one can collect from every direction (solid angle $4\pi$). The name of this type of microscopy is derived from the maximal possible solid angle for excitation and detection. Practically, one can achieve only aperture angles of about 140° for an objective lens, which corresponds to a solid angle of about $1.3\pi$.
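The quoted figure can be checked with the standard solid-angle formula for a cone of half-angle $\theta$ (here $\theta = 70°$, half the 140° aperture angle; this computation is not spelled out in the article):
$$\Omega = 2\pi\,(1 - \cos\theta) = 2\pi\,(1 - \cos 70^\circ) \approx 2\pi \times 0.658 \approx 1.3\pi.$$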
The microscope can be operated in three different w |
https://en.wikipedia.org/wiki/Princeton%20Application%20Repository%20for%20Shared-Memory%20Computers | Princeton Application Repository for Shared-Memory Computers (PARSEC) is a benchmark suite composed of multi-threaded emerging workloads that is used to evaluate and develop next-generation chip-multiprocessors. It was collaboratively created by Intel and Princeton University to drive research efforts on future computer systems. Since its inception the benchmark suite has become a community project that continues to be improved by a broad range of research institutions. PARSEC is freely available and is used for both academic and non-academic research.
Background
The introduction of chip-multiprocessors meant that, for the first time, computer manufacturers needed software to be rewritten to take advantage of parallel processing capabilities, including rewriting existing systems for testing and development. At that time parallel software existed only in very specialized areas. However, before chip-multiprocessors became commonly available, software developers were not willing to rewrite any mainstream programs, which meant hardware manufacturers did not have access to any programs for test and development purposes that represented the expected real-world program behavior accurately. This posed a chicken-and-egg problem that motivated a new type of benchmark suite with parallel programs that could take full advantage of chip-multiprocessors.
PARSEC was created to break this circular dependency. It was designed to fulfill the following five objectives:
Focuses on multithreaded applications
Includes emerging workloads
Has a diverse selection of programs
Employs state-of-the-art techniques in its workloads
Supports research
Traditional benchmarks that were publicly available before PARSEC were generally limited in their scope of included application domains or typically only available in an unparallelized, serial version. Parallel programs were only prevalent in the domain of High-Performance Computing and on a much smaller scale in business environments. Chip-multiprocessors ho |
https://en.wikipedia.org/wiki/List%20of%20free%20geology%20software | This is a list of free and open-source software for geological data handling and interpretation. The list is split into broad categories, depending on the intended use of the software and its scope of functionality.
Note that 'free and open-source' requires that the source code is available and that users are given a free software license. Simply being 'free of charge' is not sufficient—see gratis versus libre.
Well logging & Borehole visualisation
Geosciences software platforms
Geostatistics
Forward modeling
Geomodeling
Visualization, interpretation & analysis packages
Geographic information systems (GIS)
This important class of tools is already listed in the article List of GIS software.
Not true free and open-source projects
The following projects have unknown licensing, licenses or other conditions which place some restriction on use or redistribution, or which depend on non-open-source software like MATLAB or XVT (and therefore do not meet the Open Source Definition from the Open Source Initiative). |
https://en.wikipedia.org/wiki/Peptidoglycolipid%20addressing%20protein | The Peptidoglycolipid Addressing Protein (GAP) Family is a member of the Lysine Exporter (LysE) Superfamily. It is listed as item 2.A.116 in the Transporter Classification Database. The mechanism of its action is not known, but since the family has been shown to belong to the LysE superfamily, these proteins are most likely secondary carriers.
The proposed generalized reaction catalyzed by members of the GAP family is:
PGL (in) → PGL (outer membrane).
See also
Transport Protein
Glycolipid |
https://en.wikipedia.org/wiki/Digital%20room%20correction | Digital room correction (or DRC) is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system.
History
The use of analog filters, such as equalizers, to normalize the frequency response of a playback system has a long history; however, analog filters are very limited in their ability to correct the distortion found in many rooms. Although digital implementations of the equalizers have been available for some time, digital room correction is usually used to refer to the construction of filters which attempt to invert the impulse response of the room and playback system, at least in part. Digital correction systems are able to use acausal filters, and are able to operate with optimal time resolution, optimal frequency resolution, or any desired compromise along the Gabor limit. Digital room correction is a fairly new area of study which has only recently been made possible by the computational power of modern CPUs and DSPs.
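The real-time stage of such a system is, at its core, a convolution of the audio stream with the computed correction filter. A minimal Java illustration follows (the three-tap filter and sample values are toy placeholders, not a real correction filter, which would typically have thousands of taps derived from a measured impulse response):
// FirFilter.java -- illustrative sketch of applying a precomputed FIR
// correction filter to an audio buffer by direct (time-domain) convolution.
public class FirFilter {
    static double[] convolve(double[] signal, double[] taps) {
        double[] out = new double[signal.length + taps.length - 1];
        for (int i = 0; i < signal.length; i++)
            for (int j = 0; j < taps.length; j++)
                out[i + j] += signal[i] * taps[j];   // accumulate each tap's contribution
        return out;
    }

    public static void main(String[] args) {
        double[] taps = {0.9, -0.3, 0.1};            // toy "correction" filter
        double[] audio = {0.0, 1.0, 0.5, -0.5, 0.0}; // toy input samples
        for (double s : convolve(audio, taps)) System.out.printf("%.3f%n", s);
    }
}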
Operation
The configuration of a digital room correction system begins with measuring the impulse response of the room at a reference listening position, and sometimes at additional locations, for each of the loudspeakers. Then, computer software is used to compute an FIR filter, which reverses the effects of the room and linear distortion in the loudspeakers. In low-performance conditions, a few IIR peaking filters are used instead of FIR filters, which require convolution, a relatively computation-heavy operation. Finally, the calculated filter is loaded into a computer or other room correction device which applies the filter in real time. Because most room correction filters are acausal, there is some delay. Most DRC systems allow the operator to control the added delay through |
https://en.wikipedia.org/wiki/Lists%20of%20mathematicians | Lists of mathematicians cover notable mathematicians by nationality, ethnicity, religion, profession and other characteristics. Alphabetical lists are also available (see table to the right).
Lists by nationality, ethnicity or religion
List of American mathematicians
List of African-American mathematicians
List of Bengali mathematicians
List of Brazilian mathematicians
List of Chinese mathematicians
List of German mathematicians
List of Greek mathematicians
Timeline of ancient Greek mathematicians
List of Hungarian mathematicians
List of Indian mathematicians
List of Italian mathematicians
List of Iranian mathematicians
List of Jewish American mathematicians
List of Jewish mathematicians
List of Norwegian mathematicians
List of Muslim mathematicians
List of Polish mathematicians
List of Russian mathematicians
List of Slovenian mathematicians
List of Ukrainian mathematicians
List of Turkish mathematicians
List of Welsh mathematicians
Lists by profession
List of actuaries
List of game theorists
List of geometers
List of logicians
List of mathematical probabilists
List of statisticians
List of quantitative analysts
Other lists of mathematicians
List of amateur mathematicians
List of mathematicians born in the 19th century
List of centenarians (scientists and mathematicians)
List of films about mathematicians
List of women in mathematics
See also
The Mathematics Genealogy Project – Database for the academic genealogy of mathematicians
List of mathematical artists
External links
The MacTutor History of Mathematics archive – Extensive list of detailed biographies
The Oberwolfach Photo Collection – Photographs of mathematicians from all over the world
Photos of mathematicians – Collection of photos of mathematicians (and computer scientists) made by Andrej Bauer.
Famous Mathematicians
Calendar of mathematicians' birthdays and death anniversaries
Lists of people in STEM fields |
https://en.wikipedia.org/wiki/Solaris%20Trusted%20Extensions | Solaris Trusted Extensions is a set of security extensions incorporated in the Solaris 10 operating system by Sun Microsystems, featuring a mandatory access control model. It succeeds Trusted Solaris, a family of security-evaluated operating systems based on earlier versions of Solaris.
Solaris 10 5/09 is Common Criteria certified at Evaluation Assurance Level EAL4+ against the CAPP, RBACPP, and LSPP protection profiles.
Overview
Certain Trusted Solaris features, such as fine-grained privileges, are now part of the standard Solaris 10 release. Beginning with Solaris 10 11/06, Solaris now includes a component called Solaris Trusted Extensions which gives it the additional features necessary to position it as the successor to Trusted Solaris. Inclusion of these features in the mainstream Solaris release marks a significant change from Trusted Solaris, as it is no longer necessary to use a different Solaris release with a modified kernel for labeled security environments. Solaris Trusted Extensions is an OpenSolaris project.
Trusted Extensions additions and enhancements include:
Accounting
Role-Based Access Control
Auditing
Device Allocation
Mandatory Access Control Labeling
Solaris Trusted Extensions enforces a mandatory access control policy on all aspects of the operating system, including device access, file, networking, print and window management services. This is achieved by adding sensitivity labels to objects, thereby establishing explicit relationships between these objects. Only appropriate (and explicit) authorization allows applications and users read and/or write access to the objects.
The component also provides labeled security features in a desktop environment. Apart from extending support for the Common Desktop Environment from the Trusted Solaris 8 release, it delivers the first labeled environment based on GNOME. Solaris Trusted Extensions facilitate the access of data at multiple classification levels through a single desktop environment.
Sol |
https://en.wikipedia.org/wiki/Magnetization%20reversal%20by%20circularly%20polarized%20light | Discovered only as recently as 2006 by C.D. Stanciu and F. Hansteen and published in Physical Review Letters, this effect is generally called all-optical magnetization reversal. This magnetization reversal technique refers to a method of reversing magnetization in a magnet simply by circularly polarized light and where the magnetization direction is controlled by the light helicity. In particular, the direction of the angular momentum of the photons would set the magnetization direction without the need of an external magnetic field. In fact, this process could be seen as similar to magnetization reversal by spin injection (see also spintronics). The only difference is that now, the angular momentum is supplied by the circularly polarized photons instead of the polarized electrons.
Although experimentally demonstrated, the mechanism responsible for this all-optical magnetization reversal is not yet clear and remains a subject of debate. Thus, it is not yet clear whether the switching is driven by an inverse Einstein–de Haas effect or by a stimulated Raman-like coherent optical scattering process. However, because it is phenomenologically the inverse of the magneto-optical Faraday effect, magnetization reversal by circularly polarized light is referred to as the inverse Faraday effect.
Early studies in plasmas, paramagnetic solids, dielectric magnetic materials and ferromagnetic semiconductors demonstrated that excitation of a medium with a circularly polarized laser pulse corresponds to the action of an effective magnetic field. Yet, before the experiments of Stanciu and Hansteen, all-optical controllable magnetization reversal in a stable magnetic state was considered impossible.
In quantum field theory and quantum chemistry the effect where the angular momentum associated to the circular motion of the photons induces an angular momentum in the electrons is called photomagneton. This axial magnetic field with the origins in the angular momentum of the p |
https://en.wikipedia.org/wiki/Ratchet%20and%20Clank%20%28characters%29 | Ratchet and Clank are the protagonists of the Ratchet & Clank video game series developed by Insomniac Games, starting with the 2002 Ratchet & Clank. Ratchet is an anthropomorphic alien creature known as a Lombax, while Clank is an escaped robot (real name: XJ-0461 or Defect B5429671) who soon teams up with him.
Appearances
Ratchet
His first appearance was on Planet Veldin, but it is later revealed in the series that Ratchet was originally born on the Lombax home-world of Planet Fastoon in the Polaris Galaxy and later sent to Planet Veldin in the Solana Galaxy by his father Kaden to protect him from Emperor Tachyon. Growing up on Veldin, Ratchet longed to travel to new worlds and even built his own ship.
Shortly after completing his ship, Ratchet met a diminutive robot fugitive whom he dubbed Clank, who helped him to leave Veldin and fulfill his dream of traveling. From this point on, Ratchet and Clank traveled extensively through the Solana, Bogon and Polaris Galaxies, saving them on several occasions.
Clank
After meeting up with Ratchet, they travel to various planets trying to thwart the plans of Chairman Drek, while looking for Captain Qwark to help them. Along the way, Ratchet keeps drifting from the goals that Clank wants to accomplish, upsetting Clank with his selfishness. Since Clank is the only way Ratchet can pilot his ship, Ratchet makes up with him and gets back on track. In Ratchet & Clank: Going Commando, they are living the lives of heroes and get a call from the CEO of Megacorp, wanting them to help retrieve a dangerous prototype which was stolen. In Ratchet & Clank: Up Your Arsenal, Ratchet and Clank help Captain Qwark defeat his past nemesis, Dr. Nefarious. Meanwhile, Clank is shown to be a movie star, acting as Secret Agent Clank (a PlayStation Portable game was released under the same name, and focuses on the adventures of Clank under this role). A great deal of new information regarding Clank's real origins is shown in the Future |
https://en.wikipedia.org/wiki/Polaris%20Partners | Polaris Partners is a venture capital firm active in the field of healthcare and biotechnology companies. The company has offices in Boston, Massachusetts, New York, NY and San Francisco, California.
History
Polaris Partners was founded in 1996 by Jon Flint, Terry McGuire and Steve Arnold.
The firm has over $5 billion in committed capital and is now making investments through its tenth fund. The current managing partners are Brian Chee, Amy Schulman, and Darren Carroll.
Polaris Partners also has two affiliate funds. Polaris Growth Fund targets investments in profitable, founder-owned technology companies and is led by managing partners Bryce Youngren and Dan Lombard. Polaris Innovation Fund focuses on the commercial and therapeutic potential of early-stage academic research and is led by managing partners Amy Schulman and Ellie McGuire.
See also
Polaris Growth Fund
Polaris Innovation Fund |
https://en.wikipedia.org/wiki/Castor%20%28framework%29 | Castor is a data binding framework for Java with features such as Java-to-XML binding, Java-to-SQL persistence, and mapping paths between Java objects, XML documents and relational tables. Castor is one of the oldest data binding projects.
Process flow
Basic process flows include class generation, marshalling and unmarshalling. The marshalling framework includes a set of ClassDescriptors and FieldDescriptors to describe objects.
Class generation
Class generation is similar to that of JAXB and Zeus. Castor supports XML Schema; DTDs are not supported.
Unmarshalling and marshalling
Unmarshalling and marshalling are handled by the unmarshal() and marshal() methods, respectively. During marshalling, a Java-to-XML conversion is carried out, and during unmarshalling, an XML-to-Java conversion is carried out. Mapping files are the equivalent of a binding schema, and allow names to be transformed between XML and Java and vice versa.
Additional features
Castor offers some additional features which are not present in JAXB. Additional features include:
Database and directory server mappings - mapping from databases and directory servers to Java
JDO - Castor supports Java Data Objects.
Code samples
Code for marshalling may look as follows:
package javajaxb;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
// Castor
import org.exolab.castor.xml.MarshalException;
import org.exolab.castor.xml.ValidationException;
// Generated hr.xml classes
import javajaxb.generated.hr.*;
public class EmployeeLister {
// Existing methods
public void modify()
throws IOException, MarshalException, ValidationException {
// Add a new employee
Employee employee = new Employee();
employee.setName("Ben Rochester");
Address address = new Address();
address.setStreet1("708 Teakwood Drive");
address.setCity("Flower Mound");
address.se |
https://en.wikipedia.org/wiki/Boolean%20operations%20on%20polygons | Boolean operations on polygons are a set of Boolean operations (AND, OR, NOT, XOR, ...) operating on one or more sets of polygons in computer graphics. These sets of operations are widely used in computer graphics, CAD, and in EDA (in integrated circuit physical design and verification software).
Algorithms
Greiner–Hormann clipping algorithm
Vatti clipping algorithm
Sutherland–Hodgman algorithm (special case algorithm)
Weiler–Atherton clipping algorithm (special case algorithm)
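To give a concrete flavor of these algorithms, below is a compact Sutherland–Hodgman sketch in Java (an illustrative sketch, not a production implementation: it assumes the clip polygon is convex with counter-clockwise vertices, and ignores degenerate cases). It computes the Boolean AND (intersection) of a subject polygon with the clip polygon:
import java.util.ArrayList;
import java.util.List;

// SutherlandHodgman.java -- clips a subject polygon against a convex clip
// polygon (counter-clockwise vertex order), edge by edge.
public class SutherlandHodgman {
    record Pt(double x, double y) {}

    // > 0 when p lies to the left of the directed edge a->b ("inside" for CCW clip polygon)
    static double cross(Pt a, Pt b, Pt p) {
        return (b.x() - a.x()) * (p.y() - a.y()) - (b.y() - a.y()) * (p.x() - a.x());
    }

    // Intersection point of the infinite lines through a-b and p-q.
    static Pt intersect(Pt a, Pt b, Pt p, Pt q) {
        double a1 = b.y() - a.y(), b1 = a.x() - b.x(), c1 = a1 * a.x() + b1 * a.y();
        double a2 = q.y() - p.y(), b2 = p.x() - q.x(), c2 = a2 * p.x() + b2 * p.y();
        double det = a1 * b2 - a2 * b1; // non-zero when the segment crosses the clip line
        return new Pt((b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det);
    }

    static List<Pt> clip(List<Pt> subject, List<Pt> clip) {
        List<Pt> output = new ArrayList<>(subject);
        for (int i = 0; i < clip.size(); i++) {            // clip against each edge in turn
            Pt a = clip.get(i), b = clip.get((i + 1) % clip.size());
            List<Pt> input = output;
            output = new ArrayList<>();
            for (int j = 0; j < input.size(); j++) {
                Pt cur = input.get(j), prev = input.get((j + input.size() - 1) % input.size());
                boolean curIn = cross(a, b, cur) >= 0, prevIn = cross(a, b, prev) >= 0;
                if (curIn) {
                    if (!prevIn) output.add(intersect(a, b, prev, cur)); // entering edge
                    output.add(cur);
                } else if (prevIn) {
                    output.add(intersect(a, b, prev, cur));              // leaving edge
                }
            }
        }
        return output;
    }

    public static void main(String[] args) {
        List<Pt> square = List.of(new Pt(0, 0), new Pt(4, 0), new Pt(4, 4), new Pt(0, 4));
        List<Pt> triangle = List.of(new Pt(2, -1), new Pt(6, 3), new Pt(2, 3));
        System.out.println(clip(triangle, square)); // triangle AND square
    }
}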
Uses in software
Early algorithms for Boolean operations on polygons were based on the use of bitmaps. Using bitmaps in modeling polygon shapes has many drawbacks. One of the drawbacks is that the memory usage can be very large, since the resolution of polygons is proportional to the number of bits used to represent polygons. The higher the desired resolution, the more bits are required.
Modern implementations for Boolean operations on polygons tend to use plane sweep algorithms (or sweep line algorithms). A list of papers using plane sweep algorithms for Boolean operations on polygons can be found in the Bibliography below.
Boolean operations on convex polygons and monotone polygons of the same direction may be performed in linear time.
See also
Boolean algebra
Computational geometry
Constructive solid geometry, a method of defining three-dimensional shapes using a similar set of operations
Geometry processing
General Polygon Clipper, a C library which computes the results of clipping operations
Notes
Bibliography
Mark de Berg, Marc van Kreveld, Mark Overmars, and Otfried Schwarzkopf, Computational Geometry - Algorithms and Applications, Second Edition, 2000
Jon Louis Bentley and Thomas A. Ottmann, Algorithms for Reporting and Counting Geometric Intersections, IEEE Transactions on Computers, Vol. C-28, No. 9, September 1979, pp. 643–647
Jon Louis Bentley and Derick Wood, An Optimal Worst Case Algorithm for Reporting Intersections of Rectangles, IEEE Transac |
https://en.wikipedia.org/wiki/Slush%20hydrogen | Slush hydrogen is a combination of liquid hydrogen and solid hydrogen at the triple point, with a lower temperature and a higher density than liquid hydrogen. It is commonly formed by repeating a freeze-thaw process. This is most easily done by bringing liquid hydrogen near its boiling point and then reducing the pressure with a vacuum pump. The decrease in pressure causes the liquid hydrogen to vaporize/boil, which removes latent heat and ultimately decreases the temperature of the liquid hydrogen. Solid hydrogen forms on the surface of the boiling liquid (at the gas/liquid interface) as the liquid cools and reaches its triple point. The vacuum pump is then stopped, causing an increase of pressure; the solid hydrogen formed on the surface partially melts and begins to sink. The solid hydrogen is agitated in the liquid and the process is repeated. The resulting hydrogen slush has a density 16–20% higher than that of liquid hydrogen. It is proposed as a rocket fuel in place of liquid hydrogen in order to use smaller fuel tanks and thus reduce the dry weight of the vehicle.
Production
The continuous-freeze technique used for slush hydrogen involves pulling a continuous vacuum over triple-point liquid and using a solid hydrogen mechanical ice-breaker to disrupt the surface of the freezing hydrogen.
Fuel density: 0.085 g/cm3
Melting point: −259 °C
Boiling point: −253 °C
See also
Compressed hydrogen
Hydrogen safety
Metallic hydrogen
Timeline of hydrogen technologies
Liquefaction of gases |
https://en.wikipedia.org/wiki/Medical%20history | The medical history, case history, or anamnesis (from Greek: ἀνά, aná, "again", and μνήσις, mnesis, "memory") of a patient is the set of information that physicians collect over medical interviews. It involves the patient and, potentially, people close to the patient, so as to collect reliable/objective information for managing the medical diagnosis and proposing efficient medical treatments. The medically relevant complaints reported by the patient or others familiar with the patient are referred to as symptoms, in contrast with clinical signs, which are ascertained by direct examination on the part of medical personnel. Most health encounters will result in some form of history being taken. Medical histories vary in their depth and focus. For example, an ambulance paramedic would typically limit their history to important details, such as name, history of presenting complaint, allergies, etc. In contrast, a psychiatric history is frequently lengthy and in depth, as many details about the patient's life are relevant to formulating a management plan for a psychiatric illness.
The information obtained in this way, together with the physical examination, enables the physician and other health professionals to form a diagnosis and treatment plan. If a diagnosis cannot be made, a provisional diagnosis may be formulated, and other possibilities (the differential diagnoses) may be added, listed in order of likelihood by convention. The treatment plan may then include further investigations to clarify the diagnosis.
The method by which doctors gather information about a patient's past and present medical condition in order to make informed clinical decisions is called the history and physical (the H&P). The history requires that a clinician be skilled in asking appropriate and relevant questions that can provide them with some insight as to what the patient may be experiencing. The standardized format for the history starts with the chief concern (why is the patient in the clinic or hos |
https://en.wikipedia.org/wiki/Isaac%20Newton%20Medal | The Isaac Newton Medal and Prize is a gold medal awarded annually by the Institute of Physics (IOP) accompanied by a prize of £1,000. The award is given to a physicist, regardless of subject area, background or nationality, for outstanding contributions to physics. The award winner is invited to give a lecture at the Institute. It is named in honour of Sir Isaac Newton.
The first medal was awarded in 2008 to Anton Zeilinger, having been announced in 2007. It gained national recognition in the UK in 2013 when it was awarded for technology that could lead to an 'invisibility cloak'. By 2018 it was recognised internationally as the highest honour from the IOP. In 2020, a citation study identified it as one of the five most prestigious prizes in physics.
Recipients
See also
University of Glasgow Isaac Newton Medal
Institute of Physics Awards
List of physics awards
List of awards named after people |
https://en.wikipedia.org/wiki/Ladyzhenskaya%27s%20inequality | In mathematics, Ladyzhenskaya's inequality is any of a number of related functional inequalities named after the Soviet Russian mathematician Olga Aleksandrovna Ladyzhenskaya. The original such inequality, for functions of two real variables, was introduced by Ladyzhenskaya in 1958 to prove the existence and uniqueness of long-time solutions to the Navier–Stokes equations in two spatial dimensions (for smooth enough initial data). There is an analogous inequality for functions of three real variables, but the exponents are slightly different; much of the difficulty in establishing existence and uniqueness of solutions to the three-dimensional Navier–Stokes equations stems from these different exponents. Ladyzhenskaya's inequality is one member of a broad class of inequalities known as interpolation inequalities.
Let $\Omega$ be a Lipschitz domain in $\mathbb{R}^n$ for $n = 2, 3$, and let $u : \Omega \to \mathbb{R}$ be a weakly differentiable function that vanishes on the boundary of $\Omega$ in the sense of trace (that is, $u$ is a limit in the Sobolev space $H^1(\Omega)$ of a sequence of smooth functions that are compactly supported in $\Omega$). Then there exists a constant $C$ depending only on $\Omega$ such that, in the case $n = 2$:
$$\| u \|_{L^4(\Omega)} \le C \| u \|_{L^2(\Omega)}^{1/2} \| \nabla u \|_{L^2(\Omega)}^{1/2},$$
and in the case $n = 3$:
$$\| u \|_{L^4(\Omega)} \le C \| u \|_{L^2(\Omega)}^{1/4} \| \nabla u \|_{L^2(\Omega)}^{3/4}.$$
Generalizations
Both the two- and three-dimensional versions of Ladyzhenskaya's inequality are special cases of the Gagliardo–Nirenberg interpolation inequality
$$\| u \|_{L^p} \le C \| u \|_{L^q}^{1 - \theta} \| u \|_{H^1_0}^{\theta},$$
which holds whenever $p > 1$ and $0 < \theta < 1$ are such that
$$\frac{1}{p} = \theta \left( \frac{1}{2} - \frac{1}{n} \right) + \frac{1 - \theta}{q}.$$
Ladyzhenskaya's inequalities are the special cases $p = 4$, $q = 2$, $\theta = \tfrac{1}{2}$ when $n = 2$ and $p = 4$, $q = 2$, $\theta = \tfrac{3}{4}$ when $n = 3$.
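As a consistency check (not part of the article's text), the three-dimensional case $n = 3$, $p = 4$, $q = 2$, $\theta = \tfrac{3}{4}$ indeed balances:
$$\frac{1}{p} = \frac{3}{4}\left(\frac{1}{2} - \frac{1}{3}\right) + \frac{1 - \tfrac{3}{4}}{2} = \frac{3}{4}\cdot\frac{1}{6} + \frac{1}{8} = \frac{1}{8} + \frac{1}{8} = \frac{1}{4}.$$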
A simple modification of the argument used by Ladyzhenskaya in her 1958 paper (see e.g. Constantin & Seregin 2010) yields the following inequality for $u \in H^1(\mathbb{R}^2)$, valid for all such $u$:
$$\| u \|_{L^4(\mathbb{R}^2)} \le C \| u \|_{L^2(\mathbb{R}^2)}^{1/2} \| \nabla u \|_{L^2(\mathbb{R}^2)}^{1/2}.$$
The usual Ladyzhenskaya inequality on $\mathbb{R}^2$ can be generalized (see McCormick et al. 2013) to use the weak $L^2$ "norm" of $u$ in place of the usual $L^2$ norm:
$$\| u \|_{L^4(\mathbb{R}^2)} \le C \| u \|_{L^{2,\infty}(\mathbb{R}^2)}^{1/2} \| \nabla u \|_{L^2(\mathbb{R}^2)}^{1/2}.$$
See also
Agmon's inequality |
https://en.wikipedia.org/wiki/Rational%20monoid | In mathematics, a rational monoid is a monoid, an algebraic structure, for which each element can be represented in a "normal form" that can be computed by a finite transducer: multiplication in such a monoid is "easy", in the sense that it can be described by a rational function.
Definition
Consider a monoid M. Consider a pair (A,L) where A is a finite subset of M that generates M as a monoid, and L is a language on A (that is, a subset of the set of all strings A∗). Let φ be the map from the free monoid A∗ to M given by evaluating a string as a product in M. We say that L is a rational cross-section if φ induces a bijection between L and M. We say that (A,L) is a rational structure for M if in addition the kernel of φ, viewed as a subset of the product monoid A∗×A∗, is a rational set.
A quasi-rational monoid is one for which L is a rational relation; a rational monoid is one for which there is also a rational function cross-section of L. Since L is a subset of a free monoid, Kleene's theorem holds and a rational function is just one that can be instantiated by a finite state transducer.
Examples
A finite monoid is rational.
A group is a rational monoid if and only if it is finite.
A finitely generated free monoid is rational.
The monoid M4 generated by the set {0, e, a, b, x, y}, subject to the relations that e is the identity, 0 is an absorbing element, each of a and b commutes with each of x and y, and ax = bx, ay = by = bby, xx = xy = yx = yy = 0, is rational but not automatic.
The Fibonacci monoid, the quotient of the free monoid on two generators {a,b}∗ by the congruence aab = bba.
Green's relations
The Green's relations for a rational monoid satisfy D = J.
Properties
Kleene's theorem holds for rational monoids: that is, a subset is a recognisable set if and only if it is a rational set.
A rational monoid is not necessarily automatic, and vice versa. However, a rational monoid is asynchronously automatic and hyperbolic.
A rational monoid is a regul |
https://en.wikipedia.org/wiki/Performance%20prediction | In computer science, performance prediction means to estimate the execution time or other performance factors (such as cache misses) of a program on a given computer. It is widely used by computer architects to evaluate new computer designs, by compiler writers to explore new optimizations, and also by advanced developers to tune their programs.
There are many approaches to predicting a program's performance on computers. They can be roughly divided into three major categories:
simulation-based prediction
profile-based prediction
analytical modeling
Simulation-based prediction
Performance data can be directly obtained from computer simulators, within which each instruction of the target program is actually dynamically executed given a particular input data set. Simulators can predict a program's performance very accurately, but take considerable time to handle large programs. Examples include the PACE and Wisconsin Wind Tunnel simulators as well as the more recent WARPP simulation toolkit, which attempts to significantly reduce the time required for parallel system simulation.
Another approach, based on trace-driven simulation, does not run every instruction, but instead runs a trace file which stores important program events only. This approach loses some flexibility and accuracy compared to the cycle-accurate simulation mentioned above, but can be much faster. The generation of traces often consumes considerable amounts of storage space and can severely impact the runtime of applications if large amounts of data are recorded during execution.
Profile-based prediction
The classic approach of performance prediction treats a program as a set of basic blocks connected by execution paths. Thus the execution time of the whole program is the sum of the execution time of each basic block multiplied by its execution frequency, as shown in the following formula:
$$T_{\text{program}} = \sum_{i} t_i \cdot f_i,$$
where $t_i$ is the execution time of basic block $i$ and $f_i$ is the number of times block $i$ is executed.
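A toy numerical illustration of this formula in Java (the block costs and counts are invented, not from any real profiler):
// ProfilePrediction.java -- toy illustration of profile-based prediction:
// total time = sum over basic blocks of (block cost x execution frequency).
public class ProfilePrediction {
    public static void main(String[] args) {
        double[] blockCostSec = {1e-8, 5e-8, 2e-8};         // per-execution cost of each block
        long[] frequency = {1_000_000, 250_000, 4_000_000}; // profiler-reported counts
        double total = 0;
        for (int i = 0; i < blockCostSec.length; i++)
            total += blockCostSec[i] * frequency[i];
        System.out.printf("predicted execution time: %.4f s%n", total); // 0.1025 s here
    }
}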
The execution frequencies of basic blocks are generated from a profiler, which is why this method is called profile-base |
https://en.wikipedia.org/wiki/Abel%27s%20irreducibility%20theorem | In mathematics, Abel's irreducibility theorem, a field theory result described in 1829 by Niels Henrik Abel, asserts that if ƒ(x) is a polynomial over a field F that shares a root with a polynomial g(x) that is irreducible over F, then every root of g(x) is a root of ƒ(x). Equivalently, if ƒ(x) shares at least one root with g(x) then ƒ(x) is evenly divisible by g(x), meaning that ƒ(x) can be factored as g(x)h(x) with h(x) also having coefficients in F.
Corollaries of the theorem include:
If ƒ(x) is irreducible, there is no lower-degree polynomial (other than the zero polynomial) that shares any root with it. For example, x² − 2 is irreducible over the rational numbers and has √2 as a root; hence there is no linear or constant polynomial over the rationals having √2 as a root. Furthermore, there is no same-degree polynomial that shares any roots with ƒ(x), other than constant multiples of ƒ(x).
If ƒ(x) ≠ g(x) are two different irreducible monic polynomials, then they share no roots. |
https://en.wikipedia.org/wiki/Dehn%20function | In the mathematical subject of geometric group theory, a Dehn function, named after Max Dehn, is an optimal function associated to a finite group presentation which bounds the area of a relation in that group (that is, a freely reduced word in the generators representing the identity element of the group) in terms of the length of that relation (see pp. 79–80 in ). The growth type of the Dehn function is a quasi-isometry invariant of a finitely presented group. The Dehn function of a finitely presented group is also closely connected with the non-deterministic algorithmic complexity of the word problem in groups. In particular, a finitely presented group has solvable word problem if and only if the Dehn function for a finite presentation of this group is recursive (see Theorem 2.1 in ). The notion of a Dehn function is motivated by isoperimetric problems in geometry, such as the classic isoperimetric inequality for the Euclidean plane and, more generally, the notion of a filling area function that estimates the area of a minimal surface in a Riemannian manifold in terms of the length of the boundary curve of that surface.
History
The idea of an isoperimetric function for a finitely presented group goes back to the work of Max Dehn in the 1910s. Dehn proved that the word problem for the standard presentation of the fundamental group of a closed oriented surface of genus at least two is solvable by what is now called Dehn's algorithm. A direct consequence of this fact is that for this presentation the Dehn function satisfies Dehn(n) ≤ n. This result was extended in the 1960s by Martin Greendlinger to finitely presented groups satisfying the C'(1/6) small cancellation condition. The formal notion of an isoperimetric function and a Dehn function as it is used today appeared in the late 1980s – early 1990s together with the introduction and development of the theory of word-hyperbolic groups. In his 1987 monograph "Hyperbolic groups" Gromov proved that a finitely presented group is wo |
https://en.wikipedia.org/wiki/Time%20Structured%20Mapping | Time Structured Mapping (TSM) is a score based system created and used by the composer Pete M Wyer. It uses the bar-lines found in conventional musical scores to indicate durational periods during which performers, who may include actors, singers, dancers, poets as well as musicians, are given instructions, which may include conventional musical scoring or improvisational guidelines.
The system allows large and sometimes disparate groups to improvise together coherently, or to combine improvisation with scored music or with other media. It has been used to get orchestras, including the Orchestra of the Swan (see below), to improvise effectively and in educational projects, to combine student musicians with professionals, such as with Welsh National Opera and to combine other media such as dance and poetry with musical improvisation in a structured form, such as with Miro Dance Theatre, Philadelphia.
The flexibility of the system has allowed for the combination of musicians from very different backgrounds, as well as disparate ensembles with players of very different standards. Works generally combine improvisation with conventional scoring and move frequently from one system to the other. The synchronisation using 'clock-time' as a basis has also enabled works made up of players who are spatially separated, such as Four Bridges, which was performed simultaneously in Britain, Germany, America and India. The system also allows for works that alter the conventional relationship of the composer with the musician by involving the performer directly in the creative process.
Beginnings - The Simultaneity Project
In 2004 Wyer began creating Simultaneity works: works that made recordings at the same time in different locations. The first recordings took place around Columbus Circle, New York – Wyer mapped out a circumference that passed through the Time Warner building and around the periphery of Columbus Circle itself and, with the help of a team of volunteer staff from WNYC r |
https://en.wikipedia.org/wiki/Standards%20of%20Fundamental%20Astronomy | The Standards of Fundamental Astronomy (SOFA) software libraries are a collection of subroutines that implement official International Astronomical Union (IAU) algorithms for astronomical computations.
As of February 2009 they are available in both Fortran and C source code format.
Capabilities
The subroutines in the libraries cover the following areas:
Calendars
Time scales
Earth's rotation and sidereal time
Ephemerides (limited precision)
Precession, nutation, polar motion
Proper motion
Star catalog conversions
Astrometric transformations
Galactic Coordinates
Licensing
As of the February 2009 release, SOFA licensing changed to allow use for any purpose, provided certain requirements are met. Previously, commercial usage was specifically excluded and required written agreement of the SOFA board.
See also
Naval Observatory Vector Astrometry Subroutines |
https://en.wikipedia.org/wiki/Su%E2%80%93Schrieffer%E2%80%93Heeger%20model | In condensed matter physics, the Su–Schrieffer–Heeger (SSH) model is a one-dimensional lattice model that presents topological features. It was devised by Wu-Pei Su, John Robert Schrieffer, and Alan J. Heeger in 1979, to describe the increase in electrical conductivity of the polyacetylene polymer chain when doped, based on the existence of solitonic defects. It is a quantum-mechanical tight-binding approach that describes the hopping of spinless electrons in a chain with two alternating types of bonds. Electrons in a given site can only hop to adjacent sites.
Depending on the ratio between the hopping energies of the two possible bonds, the system can be either in a metallic (conductive) phase or in an insulating phase. The finite SSH chain can behave as a topological insulator, depending on the boundary conditions at the edges of the chain. For the finite chain, there exists an insulating phase that is topologically non-trivial and allows for the existence of edge states that are localized at the boundaries.
Description
The model describes a half-filled one-dimensional lattice, with two sites per unit cell, A and B, corresponding to a single electron per unit cell. In this configuration each electron can either hop inside the unit cell or hop to an adjacent cell through nearest neighbor sites. As with any 1D model with two sites per cell, there will be two bands in the dispersion relation (usually called optical and acoustic bands). If the bands do not touch, there is a band gap. If the gap lies at the Fermi level, then the system is considered to be an insulator.
The tight binding Hamiltonian in a chain with N sites can be written as
$H = v \sum_{n} c_{B,n}^{\dagger} c_{A,n} + w \sum_{n} c_{A,n+1}^{\dagger} c_{B,n} + \text{h.c.},$
where h.c. denotes the Hermitian conjugate, v is the energy required to hop from a site A to B inside the unit cell, and w is the energy required to hop between unit cells. Here the Fermi energy is fixed to zero.
Bulk solution
The dispersion relation for the bulk can be obtained through a Fourier transform. Taking periodic bou |
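The standard outcome of that calculation is the two-band dispersion $E_\pm(k) = \pm\lvert v + w e^{-ik}\rvert$. A minimal numerical sketch, assuming the conventional 2×2 Bloch Hamiltonian whose off-diagonal element is $v + w e^{-ik}$:

```python
# Bulk bands of the SSH chain for hoppings v (intracell) and w (intercell).
import numpy as np

def ssh_bands(v, w, num_k=256):
    k = np.linspace(-np.pi, np.pi, num_k)
    energy = np.abs(v + w * np.exp(-1j * k))  # E±(k) = ±|v + w e^{-ik}|
    return k, -energy, +energy

k, lower, upper = ssh_bands(v=1.0, w=0.5)
print(f"minimum band gap = {2 * upper.min():.3f}")  # 2|v - w|; closes when v = w
```

With $v \neq w$ the bulk is gapped (insulating); the gap closes at $k = \pm\pi$ when $v = w$, the metallic point.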
https://en.wikipedia.org/wiki/The%20Unscrambler | The Unscrambler X is a commercial software product for multivariate data analysis, used for calibration of multivariate data which is often in the application of analytical data such as near infrared spectroscopy and Raman spectroscopy, and development of predictive models for use in real-time spectroscopic analysis of materials. The software was originally developed in 1986 by Harald Martens and later by CAMO Software.
Functionality
The Unscrambler X was an early adaptation of the use of partial least squares (PLS). Other techniques supported include principal component analysis (PCA), 3-way PLS, multivariate curve resolution, design of experiments, supervised classification, unsupervised classification, and cluster analysis.
The software is used in spectroscopy (IR, NIR, Raman, etc.), chromatography, and process applications in research and non-destructive quality control systems in pharmaceutical manufacturing, sensory analysis and the chemical industry. |
https://en.wikipedia.org/wiki/End-to-end%20principle | The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes, such as gateways and routers, that exist to establish the network, may implement these to improve efficiency but cannot guarantee end-to-end correctness.
The essence of what would later be called the end-to-end principle was contained in the work of Paul Baran and Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s. The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark. The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation. Also, noteworthy formulations of the end-to-end principle can be found before the seminal 1981 Saltzer, Reed, and Clark paper.
A basic premise of the principle is that the payoffs from adding certain features required by the end application to the communication subsystem quickly diminish. The end hosts have to implement these functions for correctness. Implementing a specific function incurs some resource penalties regardless of whether the function is used or not, and implementing a specific function in the network adds these penalties to all clients, whether they need the function or not.
Concept
The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high-reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of relia |
https://en.wikipedia.org/wiki/Cheryl%27s%20Birthday | "Cheryl's Birthday" is a logic puzzle, specifically a knowledge puzzle. The objective is to determine the birthday of a girl named Cheryl using a handful of clues given to her friends Albert and Bernard. Written by Dr Joseph Yeo Boon Wooi of Singapore's National Institute of Education, the question was posed as part of the Singapore and Asian Schools Math Olympiad (SASMO) in 2015, and was first posted online by Singapore television presenter Kenneth Kong. It went viral within days and was covered by national television around the world. Henry Ong, the founder of SASMO, was interviewed by Chua En Lai and Yasmine Yonkers, hosts of Singapore's Mediacorp program FIVE.
Origin
An early version of Cheryl's Birthday, with different names and dates, appeared in an online forum in 2006.
The SASMO version of the question was posted on Facebook by Singapore television presenter Kenneth Kong on April 10, 2015, and quickly went viral. Kong posted the puzzle following a debate with his wife, and he incorrectly thought it to be part of a mathematics question for a primary school examination, aimed at 10- to 11-year-old students, although it was actually part of the 2015 Singapore and Asian Schools Math Olympiad meant for 14-year-old students, a fact later acknowledged by Kong. The competition was held on 8 April 2015, with 28,000 participants from Singapore, Thailand, Vietnam, China and the UK. According to SASMO's organisers, the quiz was aimed at the top 40 per cent of the contestants and aimed to "sift out the better students". SASMO's executive director told the BBC that "there was a place for some kind of logical and analytical thinking in the workplace and in our daily lives".
The question
The question is number 24 in a list of 25 questions, and reads as follows:
Albert and Bernard just became friends with Cheryl, and they want to know when her birthday is. Cheryl gives them a list of 10 possible dates:
May 15, May 16, May 19
June 17, June 18
July 14, July 16 |
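The excerpt above cuts the list short; the well-known version of the puzzle continues with August 14, August 15 and August 17, Albert is then told the month and Bernard the day, and three statements of knowledge follow. A minimal solver, encoding each statement as a filter over the candidate dates:

```python
# Cheryl's Birthday solver, assuming the full ten-date list of the 2015 puzzle.
dates = [("May", 15), ("May", 16), ("May", 19), ("June", 17), ("June", 18),
         ("July", 14), ("July", 16), ("August", 14), ("August", 15), ("August", 17)]

def day_count(pool, day):
    return sum(1 for _, d in pool if d == day)

def month_count(pool, month):
    return sum(1 for m, _ in pool if m == month)

# Albert (knows the month): "I don't know, and I know Bernard doesn't know."
# => Cheryl's month contains no day that is unique in the whole list.
unique_days = {d for _, d in dates if day_count(dates, d) == 1}
pool = [(m, d) for m, d in dates
        if not any(dd in unique_days for mm, dd in dates if mm == m)]

# Bernard (knows the day): "At first I didn't know, but now I do."
pool = [(m, d) for m, d in pool if day_count(pool, d) == 1]

# Albert: "Then I also know."
pool = [(m, d) for m, d in pool if month_count(pool, m) == 1]

print(pool)  # [('July', 16)]
```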
https://en.wikipedia.org/wiki/Extrinsic%20Geometric%20Flows | Extrinsic Geometric Flows is an advanced mathematics textbook that overviews geometric flows, mathematical problems in which a curve or surface moves continuously according to some rule. It focuses on extrinsic flows, in which the rule depends on the embedding of a surface into space, rather than intrinsic flows such as the Ricci flow that depend on the internal geometry of the surface and can be defined without respect to an embedding.
Extrinsic Geometric Flows was written by Ben Andrews, Bennett Chow, Christine Guenther, and Mat Langford, and published in 2020 as volume 206 of Graduate Studies in Mathematics, a book series of the American Mathematical Society.
Topics
The book consists of four chapters, roughly divided into four sections:
Chapters 1 through 4 concern the heat equation and the curve-shortening flow defined from it, in which a curve moves in the Euclidean plane, perpendicularly to itself, at a speed proportional to its curvature. It includes material on curves that remain self-similar as they flow, such as circles and the grim reaper curve , the Gage–Hamilton–Grayson theorem according to which every simple closed curve converges to a circle until eventually collapsing to a point, without ever self-intersecting, and the classification of ancient solutions of the flow.
Chapters 5 through 14 concern the mean curvature flow, a higher dimensional generalization of the curve-shortening flow that uses the mean curvature of a surface to control the speed of its perpendicular motion. After an introductory chapter on the geometry of hypersurfaces, It includes results of Ecker and Huisken concerning "locally Lipschitz entire graphs", and Huisken's theorem that uniformly convex surfaces remain smooth and convex, converging to a sphere, before they collapse to a point. Huisken's monotonicity formula is covered, as are the regularity theorems of Brakke and White according to which the flow is almost-everywhere smooth. Several chapters in this section concern th |
https://en.wikipedia.org/wiki/Optical%20path%20length | In optics, optical path length (OPL, denoted Λ in equations), also known as optical length or optical distance, is the length that light needs to travel through a vacuum to create the same phase difference as it would have when traveling through a given medium. It is calculated by taking the product of the geometric length of the optical path followed by light and the refractive index of the homogeneous medium through which the light ray propagates; for inhomogeneous optical media, the product above is generalized as a path integral as part of the ray tracing procedure. A difference in OPL between two paths is often called the optical path difference (OPD). OPL and OPD are important because they determine the phase of the light and govern interference and diffraction of light as it propagates.
Formulation
In a medium of constant refractive index, n, the OPL for a path of geometrical length s is just $\Lambda = n s.$
If the refractive index varies along the path, the OPL is given by a line integral $\Lambda = \int_C n \,\mathrm{d}s,$
where n is the local refractive index as a function of distance along the path C.
An electromagnetic wave propagating along a path C accumulates the same phase shift over C as if it were propagating along a path in a vacuum whose length is equal to the optical path length of C. Thus, if a wave is traveling through several different media, the optical path length of each medium can be added to find the total optical path length. The optical path difference between the paths taken by two identical waves can then be used to find the phase change. Finally, using the phase change, the interference between the two waves can be calculated.
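For instance, a small numeric sketch of that additivity, with illustrative layer values and a HeNe vacuum wavelength:

```python
# Total OPL through stacked media at normal incidence, and the resulting phase.
import math

layers = [(1.000, 0.010), (1.333, 0.002), (1.500, 0.001)]  # (n, thickness in m)
opl = sum(n * s for n, s in layers)                        # Λ = Σ n_i s_i
wavelength = 633e-9                                        # vacuum wavelength (m)
phase = 2 * math.pi * opl / wavelength                     # accumulated phase (rad)
print(f"OPL = {opl:.6f} m, phase = {phase:.3e} rad")
```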
Fermat's principle states that the path light takes between two points is the path that has the minimum optical path length.
Optical path difference
The OPD corresponds to the phase shift undergone by the light emitted from two previously coherent sources when passed through mediums of different refractive indices. For example, a wave passing through air appe |
https://en.wikipedia.org/wiki/VSI%20mill | A VSI mill (vertical shaft impactor mill) is a mill that comminutes particles of material into smaller (finer) particles by throwing them against a hard surface inside the mill (called the wear plate). Any hard or friable materials can be ground with low value of metal waste. This type of mill is combined with a classifier for fine tuning of a product size.
Operating characteristic
Strength of material - up to 200 MPa
Mohs hardness - up to 7
Absolute humidity - up to 1% (strong condition)
Feed size - up to 40 mm
Product size - less than 0.5 mm
Capacity - up to 20 t/h
The mill is used for hard and friable materials; wood, most metals and plastics cannot be processed. Raw material must be dry. Capacity depends strongly on the characteristics of the material and on the product size.
Grinding principle
A schematic drawing of a VSI mill is shown in Fig. 1. Raw material particles are transported via the hopper (1) into the accelerator (2). The accelerator (2) rotates at high speed and the particles gain speed through centrifugal force. After leaving the channels of the accelerator, particles strike the wear plate (3) in the grinding chamber. The high-speed impact breaks the particles into pieces of different sizes. Large particles (greater than 1 mm) fall down to the outlet (5) and are later transported back to the hopper by an elevator. The other particles (less than 1 mm) are lifted by the air stream into a classifier, where blades (4) create a rotating dust flow. Medium-size particles are shifted to the wall by centrifugal force in the large chamber of the classifier, fall down to the cone (6), and then move back into the accelerator (2). Small particles move with the air stream to the outlet (8). Fine tuning of the product size is achieved by changing the blade angle.
Accelerator
A schematic drawing of a VSI mill accelerator is shown in Fig. 2. Very high particle speeds are needed to obtain good grinding, but high speeds would be expected to cause high metal wear. In practice this does not happen, because in an accelerator particles move along the same material in special |
https://en.wikipedia.org/wiki/Digital%20image%20correlation%20for%20electronics | Digital image correlation analyses have applications in material property characterization, displacement measurement, and strain mapping. As such, DIC is becoming an increasingly popular tool when evaluating the thermo-mechanical behavior of electronic components and systems.
CTE measurements and glass transition temperature identification
The most common application of DIC in the electronics industry is the measurement of coefficient of thermal expansion (CTE). Because it is a non-contact, full-field surface technique, DIC is ideal for measuring the effective CTE of printed circuit boards (PCB) and individual surfaces of electronic components. It is especially useful for characterizing the properties of complex integrated circuits, as the combined thermal expansion effects of the substrate, molding compound, and die make effective CTE difficult to estimate at the substrate surface with other experimental methods. DIC techniques can be used to calculate average in-plane strain as a function of temperature over an area of interest during a thermal profile. Linear curve-fitting and slope calculation can then be used to estimate an effective CTE for the observed area. Because the driving factor in solder fatigue is most often the CTE mismatch between a component and the PCB it is soldered to, accurate CTE measurements are vital for calculating printed circuit board assembly (PCBA) reliability metrics.
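A toy version of that curve fit, with synthetic strain-versus-temperature numbers chosen to mimic a copper-rich PCB surface:

```python
# Effective CTE = slope of average in-plane strain (ppm) vs temperature (°C).
import numpy as np

temperature = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
strain_ppm = np.array([0.0, 430.0, 862.0, 1291.0, 1722.0])  # synthetic DIC output

slope, intercept = np.polyfit(temperature, strain_ppm, 1)   # linear curve fit
print(f"effective CTE ≈ {slope:.1f} ppm/°C")                # ≈ 17.2 ppm/°C
```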
DIC is also useful for characterizing the thermal properties of polymers. Polymers are often used in electronic assemblies as potting compounds, conformal coatings, adhesives, molding compounds, dielectrics, and underfills. Because the stiffness of such materials can vary widely, accurately determining their thermal characteristics with contact techniques that transfer load to the specimen, such as dynamic mechanical analysis (DMA) and thermomechanical analysis (TMA), is difficult to do with consistency. Accurate CTE measurements are important for these materials becau |
https://en.wikipedia.org/wiki/Thermanaerovibrio%20velox | Thermanaerovibrio velox is a Gram-negative, moderately thermophilic, organotrophic and anaerobic bacterium from the genus of Thermanaerovibrio which has been isolated from cyanobacterial mat from Uzon caldera in Russia. |
https://en.wikipedia.org/wiki/Eirpac | EIRPAC is Ireland's packet-switched X.25 data network. It replaced Euronet in 1984. Eirpac uses the DNIC 2724. HEAnet first operated via 4.8 kbit/s X.25 Eirpac connections in 1985. By 1991 most universities in Ireland used 64 kbit/s Eirpac VPN connections. Today Eirpac is owned and operated by Eircom, which no longer accepts new applications for Eirpac: no reference to it is made in the product offering on its website. Eircom began the process of migrating existing customers to more capable forms of telecommunications in late April 2004.
In 2001 Eirpac had approximately 5,000 customers dialing in daily via switched virtual circuits although those numbers have been declining rapidly. Eirpac is still an important element for data transfer in Ireland with numerous banks (automatic teller machines), telecoms switches, pager systems and other networks that utilise permanent virtual circuits.
Connecting to Eirpac can be done using a simple AT-compatible modem. The dial-in number is 1511 + baud rate; for example, to connect at 28,800 bit/s one would dial ATDT 15112880. The user then authenticates with their Eirpac NUI (Network User Identification), a name and password provided by Eir.
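A minimal sketch of that dial-in sequence, assuming a Hayes AT-compatible modem on a serial port (the port name and the follow-up NUI exchange are placeholders):

```python
# Dial Eirpac at 28,800 bit/s (1511 + baud-rate suffix) over a local modem.
import serial  # pyserial

with serial.Serial("/dev/ttyS0", 9600, timeout=5) as modem:
    modem.write(b"ATDT 15112880\r")   # dial-in number from the text above
    banner = modem.read(256)          # wait for the Eirpac prompt...
    # ...then authenticate with the Eirpac NUI (name and password) from Eir.
    print(banner.decode(errors="replace"))
```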
Sources
External links
Official website
Computer networking
Internet in Ireland |
https://en.wikipedia.org/wiki/Funga | In life sciences, funga is a recent term for the kingdom fungi similar to the longstanding fauna for animals and flora for plants. The term was considered to be needed in order to simplify projects oriented toward implementation of educational and conservation goals. An informal meeting held during the IX Congreso Latinoamericano de Micología resulted in a proposal for the term in 2018; alternative terms were also proposed and discussed. Funga was recommended by the IUCN in 2021. The term highlights parallel terminology referring to treatments of these macroorganisms of particular geographical areas.
Funga refers to the fungi of a particular region, habitat, or geological period.
The Species Survival Commission (SSC) of the International Union for Conservation of Nature (IUCN) in August 2021 called for the recognition of fungi as one of three kingdoms of life, and critical to protecting and restoring Earth. They ask that the phrase animals and plants be replaced by animals, fungi, and plants, and fauna and flora by fauna, flora, and funga.
The term funga had been used in the scientific literature before the later recommendation. |
https://en.wikipedia.org/wiki/String%20group | In topology, a branch of mathematics, a string group is an infinite-dimensional group introduced as a 3-connected cover of a spin group. A string manifold is a manifold with a lifting of its frame bundle to a string group bundle. This means that in addition to being able to define holonomy along paths, one can also define holonomies for surfaces going between strings. There is a short exact sequence of topological groups $0 \to K(\mathbb{Z},2) \to \operatorname{String}(n) \to \operatorname{Spin}(n) \to 0$, where $K(\mathbb{Z},2)$ is an Eilenberg–MacLane space and $\operatorname{Spin}(n)$ is a spin group. The string group is an entry in the Whitehead tower (dual to the notion of Postnikov tower) for the orthogonal group: $\cdots \to \operatorname{Fivebrane}(n) \to \operatorname{String}(n) \to \operatorname{Spin}(n) \to \operatorname{SO}(n) \to \operatorname{O}(n).$ It is obtained by killing the homotopy group $\pi_3$ of $\operatorname{Spin}(n)$, in the same way that $\operatorname{Spin}(n)$ is obtained from $\operatorname{SO}(n)$ by killing $\pi_1$. The resulting manifold cannot be any finite-dimensional Lie group, since all finite-dimensional compact Lie groups have a non-vanishing $\pi_3$. The fivebrane group follows, by killing $\pi_7$.
More generally, the construction of the Postnikov tower via short exact sequences starting with Eilenberg–MacLane spaces can be applied to any Lie group G, giving the string group String(G).
Intuition for the string group
The relevance of the Eilenberg–MacLane space $K(\mathbb{Z},2)$ lies in the fact that there are homotopy equivalences $K(\mathbb{Z},2) \simeq \mathbb{CP}^\infty \simeq BU(1)$ for the classifying space $BU(1)$, and the fact $K(\mathbb{Z},1) \simeq U(1)$. Notice that because the complex spin group is a group extension of $\operatorname{SO}(n)$ by $K(\mathbb{Z},1) \simeq U(1)$, the string group can be thought of as a "higher" complex spin group extension, in the sense of higher group theory, since the space $K(\mathbb{Z},2)$ is an example of a higher group. It can be thought of as the topological realization of the groupoid $\mathbf{B}U(1)$ whose object is a single point and whose morphisms are the group $U(1)$. Note that the homotopical degree of $K(\mathbb{Z},2)$ is 2, meaning its homotopy is concentrated in degree 2, because it comes from the homotopy fiber of the map $\operatorname{Spin}(n) \to K(\mathbb{Z},3)$ from the Whitehead tower, whose homotopy cokernel is $K(\mathbb{Z},3)$. This is because the homotopy fiber lowers the degree by 1.
Understanding the geometry
The geometry of String bundles requires the understanding of multiple constructions in homotopy |
https://en.wikipedia.org/wiki/Samson%20Shatashvili | Samson Lulievich Shatashvili ( Russian: Самсон Лулиевич Шаташвили, born February 1960) is a theoretical and mathematical physicist who has been working at Trinity College Dublin, Ireland, since 2002. He holds the Trinity College Dublin Chair of Natural Philosophy and is the director of the Hamilton Mathematics Institute. He is also affiliated with the Institut des Hautes Études Scientifiques (IHÉS), where he held the Louis Michel Chair from 2003 to 2013 and the Israel Gelfand Chair from 2014 to 2019. Prior to moving to Trinity College, he was a professor of physics at Yale University from 1994.
Background
Shatashvili received his PhD in 1984 at the Steklov Institute of Mathematics in Saint Petersburg under the supervision of Ludwig Faddeev (and Vladimir Korepin). His thesis, on gauge theories, was titled "Modern Problems in Gauge Theories". In 1989 he received a D.Sc. degree (Doctor of Science, the second doctoral degree in Russia), also at the Steklov Institute of Mathematics in Saint Petersburg.
Contributions and awards
Shatashvili has made several discoveries in the fields of theoretical and mathematical physics. He is mostly known for his work with Ludwig Faddeev on quantum anomalies, with Anton Alekseev on geometric methods in two-dimensional conformal field theories, for his work on background independent open string field theory, with Cumrun Vafa on superstrings and manifolds of exceptional holonomy, with Anton Gerasimov on tachyon condensation, with Andrei Losev, Nikita Nekrasov and Greg Moore on instantons and supersymmetric gauge theories, as well as for his work with Nikita Nekrasov on quantum integrable systems. In particular, Shatashvili and Nikita Nekrasov discovered the gauge/Bethe correspondence. In 1995 he received an Outstanding Junior Investigator Award of the Department of Energy (DOE) and an NSF Career Award, and from 1996 to 2000 he was a Sloan Fellow. Shatashvili is a member of the Royal Irish Academy and the recipient of the 2010 Royal I |
https://en.wikipedia.org/wiki/TheTVDB | TheTVDB.com is a community driven database of television shows. All content and images on the site have been contributed by the site's users; the site uses moderated editing to maintain its own standards.
Purpose
The site's stated aim is to be the most complete and accurate source of information on TV series from many languages and countries. It provides a repository of series, season and episode images that can be used in various types of Home theater PC software to make the visual interface experience more appealing.
Applications
The site has a full JSON API that allows other software and websites to use this information. The API is currently being used by the myTV add-in for Windows Media Center, Jellyfin, Zappiti, Kodi (formerly XBMC); Plex; the meeTVshows and TVNight plugins for Meedio (a digital recorder acquired by Yahoo.com); the MP-TVSeries plugin for MediaPortal, Numote (iPhone/Android app and set-top device), and more.
History
In 2019, TheTVDB was acquired by TV Time, which used TheTVDB's database as a source for all the TV shows and episode descriptions in it.
As of 2020, the API now requires a license subscription from either application developers or end users. |
https://en.wikipedia.org/wiki/Distributed%20tree%20search | Distributed tree search (DTS) algorithm is a class of algorithms for searching values in an efficient and distributed manner. Their purpose is to iterate through a tree by working along multiple branches in parallel and merging the results of each branch into one common solution, in order to minimize time spent searching for a value in a tree-like data structure.
The original paper was written in 1988 by Chris Ferguson and Richard E. Korf, of the University of California's Computer Science Department. They drew on multiple existing chess AIs to develop this more general algorithm.
Overview
The Distributed Tree Search Algorithm (also known as Korf–Ferguson algorithm) was created to solve the following problem: "Given a tree with non-uniform branching factor and depth, search it in parallel with an arbitrary number of processors as fast as possible."
The top-level part of this algorithm is general and does not use a particular existing type of tree-search, but it can be easily specialized to fit any type of non-distributed tree-search.
DTS consists of using multiple processes, each with a node and a set of processors attached, with the goal of searching the sub-tree below that node. Each process then divides itself into multiple coordinated sub-processes which recursively divide themselves again, until an optimal way to search the tree has been found based on the number of processors available to each process. Once a process finishes, DTS dynamically reassigns the processors to other processes so as to keep efficiency at a maximum through good load-balancing, especially in irregular trees.
Once a process finishes searching, it recursively sends and merges a resulting signal to its parent-process, until all the different sub-answers have been merged and the entire problem has been solved.
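A minimal sketch of this fan-out-and-merge scheme, assuming an in-process thread pool in place of real processor groups (the dynamic reassignment step is omitted for brevity):

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Node:
    value: int
    children: list = field(default_factory=list)

def search(node, goal):
    """Sequential depth-first search of the subtree below `node`."""
    if node.value == goal:
        return node
    for child in node.children:
        found = search(child, goal)
        if found is not None:
            return found
    return None

def distributed_search(root, goal, workers=4):
    """Fan the top-level branches out to workers, then merge the sub-answers."""
    if root.value == goal:
        return root
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for found in pool.map(lambda child: search(child, goal), root.children):
            if found is not None:
                return found
    return None

# An irregular tree; the hit sits two levels down the second branch.
tree = Node(0, [Node(1, [Node(3)]), Node(2, [Node(4, [Node(7)]), Node(5)])])
print(distributed_search(tree, 7).value)  # 7
```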
Applications
DTS is only applicable under two major conditions: the data structure to search through is a tree, and the algorithm can make use of at least one computation unit |
https://en.wikipedia.org/wiki/Wireless%20device%20radiation%20and%20health | The antennas contained in mobile phones, including smartphones, emit radiofrequency (RF) radiation (non-ionizing "radio waves" such as microwaves); the parts of the head or body nearest to the antenna can absorb this energy and convert it to heat. Since at least the 1990s, scientists have researched whether the now-ubiquitous radiation associated with mobile phone antennas or cell phone towers is affecting human health. Mobile phone networks use various bands of RF radiation, some of which overlap with the microwave range. Other digital wireless systems, such as data communication networks, produce similar radiation.
In response to public concern, the World Health Organization (WHO) established the International EMF (Electric and Magnetic Fields) Project in 1996 to assess the scientific evidence of possible health effects of EMF in the frequency range from 0 to 300 GHz. They have stated that although extensive research has been conducted into possible health effects of exposure to many parts of the frequency spectrum, all reviews conducted so far have indicated that, as long as exposures are below the limits recommended in the ICNIRP (1998) EMF guidelines, which cover the full frequency range from 0–300 GHz, such exposures do not produce any known adverse health effect. In 2011, International Agency for Research on Cancer (IARC), an agency of the WHO, classified wireless radiation as Group 2B – possibly carcinogenic. That means that there "could be some risk" of carcinogenicity, so additional research into the long-term, heavy use of wireless devices needs to be conducted. The WHO states that "A large number of studies have been performed over the last two decades to assess whether mobile phones pose a potential health risk. To date, no adverse health effects have been established as being caused by mobile phone use."
International guidelines on exposure levels to microwave frequency EMFs such as ICNIRP limit the power levels of wireless devices and it is uncommon |
https://en.wikipedia.org/wiki/Canadian%20Trusted%20Computer%20Product%20Evaluation%20Criteria | The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) is a computer security standard published in 1993 by the Communications Security Establishment to provide evaluation criteria for IT products. It is a combination of the TCSEC (also called the Orange Book) and the European ITSEC approaches.
CTCPEC led to the creation of the Common Criteria standard.
The Canadian System Security Centre, part of the Communications Security Establishment, was founded in 1988 to establish a Canadian computer security standard.
The Centre published a draft of the standard in April 1992. The final version was published in January 1993. |
https://en.wikipedia.org/wiki/Saporin | Saporin is a protein that is useful in biological research applications, especially studies of behavior. Saporins are so-called ribosome-inactivating proteins (RIPs), owing to their N-glycosidase activity, from the seeds of Saponaria officinalis (common name: soapwort). It was first described by Fiorenzo Stirpe and his colleagues in 1983 in an article that illustrated the unusual stability of the protein.
Among the RIPs are some of the most toxic molecules known, such as ricin and abrin. Each of these toxins contains a second protein subunit, which inserts the RIP into a cell, enabling it to enzymatically inactivate the ribosomes, shutting down protein synthesis, stopping basic cell functions, resulting in cell death, and eventually causing death of the victim. Saporin has no chain capable of inserting it into the cell. Thus it and the soapwort plant are safe to handle, which has aided its use in research.
If given a method of entry into the cell, saporin becomes a very potent toxin, since its enzymatic activity is among the highest of all RIPs. The enzymatic activity of RIPs is unusually specific: a single adenine base is removed from the ribosomal RNA of the large subunit of the ribosome. This is the Achilles’ heel of the ribosome; the removal of this base completely inhibits the ability of that ribosome to participate in protein synthesis. The fungal toxin alpha-sarcin cuts the ribosomal RNA at the adjacent base, also causing protein synthesis inhibition.
The conversion of saporin into a toxin has been used to create a series of research molecules. Attachment of saporin to something that enters the cell will convert it into a toxin for that cell. If the agent is specific for a single cell type, by being an antibody specific for some molecule that is only presented on the surface of the target cell type, then a set group of cells can be removed. This has many applications, some more successful than others. Saporin is not the only molecule that is used in this way; t |
https://en.wikipedia.org/wiki/IBM%20RAD6000 | The RAD6000 radiation-hardened single-board computer, based on the IBM RISC Single Chip CPU, was manufactured by IBM Federal Systems. IBM Federal Systems was sold to Loral, and by way of acquisition, ended up with Lockheed Martin and is currently a part of BAE Systems Electronic Systems. RAD6000 is mainly known as the onboard computer of numerous NASA spacecraft.
History
The radiation-hardening of the original RSC 1.1 million-transistor processor to make the RAD6000's CPU was done by IBM Federal Systems Division working with the Air Force Research Laboratory.
There are some 200 RAD6000 processors in space on a variety of NASA, United States Department of Defense and commercial spacecraft, including:
Mars Exploration Rovers (Spirit and Opportunity)
Deep Space 1 probe
Mars Polar Lander and Mars Climate Orbiter
Mars Odyssey orbiter
Spitzer Infrared Telescope Facility
MESSENGER probe to Mercury
STEREO Spacecraft
IMAGE/Explorer 78 MIDEX spacecraft
Genesis and Stardust sample return missions
Phoenix Mars Polar Lander
Dawn Mission to the asteroid belt using ion propulsion
Solar Dynamics Observatory, Launched Feb 11, 2010 (flying both RAD6000 and RAD750)
Burst Alert Telescope Image Processor on board the Swift Gamma-Ray Burst Mission
DSCOVR Deep Space Climate Observatory spacecraft
The computer has a maximum clock rate of 33 MHz and a processing speed of about 35 MIPS. In addition to the CPU itself, the RAD6000 has 128 MB of ECC RAM. A typical real-time operating system running on NASA's RAD6000 installations is VxWorks. The flight boards in the above systems have switchable clock rates of 2.5, 5, 10, or 20 MHz.
Reported to have a unit cost somewhere between US$200,000 and US$300,000, RAD6000 computers were released for sale in the general commercial market in 1996.
The RAD6000's successor is the RAD750 processor, based on IBM's PowerPC 750.
See also
IBM RS/6000
PowerPC 601, a consumer chip with similar computing capabilities to the RAD6000 |
https://en.wikipedia.org/wiki/Occupational%20safety%20and%20health | Occupational safety and health (OSH) or occupational health and safety (OHS), also known simply as occupational health or occupational safety, is a multidisciplinary field concerned with the safety, health, and welfare of people at work (i.e. in an occupation). These terms also refer to the goals of this field, so their use in the sense of this article was originally an abbreviation of occupational safety and health program/department etc.
OSH is related to the fields of occupational medicine and occupational hygiene.
The goal of an occupational safety and health program is to foster a safe and healthy occupational environment. OSH also protects all the general public who may be affected by the occupational environment.
According to the official estimates of the United Nations, the WHO/ILO Joint Estimate of the Work-related Burden of Disease and Injury, almost 2 million deaths each year are attributable to exposure to occupational risk factors. Globally, more than 2.78 million people die annually as a result of workplace-related accidents or diseases, corresponding to one death every fifteen seconds. There are an additional 374 million non-fatal work-related injuries annually. It is estimated that the economic burden of occupational-related injury and death is nearly four per cent of the global gross domestic product each year. The human cost of this adversity is enormous.
In common-law jurisdictions, employers have the common law duty (also called duty of care) to take reasonable care of the safety of their employees. Statute law may, in addition, impose other general duties, introduce specific duties, and create government bodies with powers to regulate occupational safety issues: details of this vary from jurisdiction to jurisdiction.
Definition
As defined by the World Health Organization (WHO) "occupational health deals with all aspects of health and safety in the workplace and has a strong focus on primary prevention of hazards." Health has been defined as |
https://en.wikipedia.org/wiki/Idea%20networking | Idea networking is a qualitative method of doing a cluster analysis of any collection of statements, developed by Mike Metcalfe at the University of South Australia. Networking lists of statements acts to reduce them into a handful of clusters or categories. The statements might be sourced from interviews, text, websites, focus groups, SWOT analysis or community consultation. Idea networking is inductive, as it does not assume any prior classification system to cluster the statements. Rather, keywords or issues in the statements are individually linked (paired). These links can then be entered into network software to be displayed as a network with clusters. When named, these clusters provide emergent categories, meta themes, frames or concepts which represent, structure or make sense of the collection of statements.
Method
An idea network can be constructed in the following way:
60 to 200 statements are listed and assigned reference numbers.
A table is constructed showing which statements (by reference number) are linked (paired) and why. For example, statement 1 may be linked to statements 4, 23, 45, 67, 89 and 107 because they are all about the weather (see table).
The number of links per statement should be from 1 to 7; many more will result in a congested network diagram. This means the choice of why statements are linked may need grading as strong or weak, or by subsets. For example, statements linked as being about weather conditions may be further subdivided into those about good weather, wet weather or bad weather, etc. This linking is sometimes called 'coding' in thematic analysis, which highlights that the statements can be linked for several different reasons (source, context, time, etc.). There may be many tens of reasons why statements are linked, and the same statements may be linked for different reasons. The number of reasons should not be restricted to a low number, as that would anticipate the resultant clustering.
The reference numbers are put into a networ |
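A minimal sketch of the networking step, assuming the networkx library, five toy statements and hand-coded links (a real study would code the links manually and name the clusters afterwards):

```python
import networkx as nx

statements = {
    1: "rain disrupted the outdoor market",
    2: "vendors want covered stalls",
    3: "storms flooded the car park",
    4: "shoppers avoid the market in rain",
    5: "stall fees are too high",
}
links = [(1, 3, "bad weather"), (1, 4, "bad weather"), (2, 5, "vendor concerns")]

G = nx.Graph()
G.add_nodes_from(statements)
for a, b, reason in links:
    G.add_edge(a, b, reason=reason)

# Each connected cluster of statement numbers suggests an emergent theme.
for cluster in nx.connected_components(G):
    print(sorted(cluster))  # here: [1, 3, 4] and [2, 5]
```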
https://en.wikipedia.org/wiki/Pfeilstorch | A Pfeilstorch (German for "arrow stork"; plural Pfeilstörche) is a stork that gets injured by an arrow while wintering in Africa and returns to Europe with the arrow stuck in its body. As of 2003, about 25 Pfeilstörche had been documented in Germany.
The first and most famous was a white stork found in 1822 near the German village of Klütz, in the state of Mecklenburg-Vorpommern. It was carrying a spear from central Africa in its neck. The specimen was stuffed and can be seen today in the zoological collection of the University of Rostock. It is therefore referred to as the Rostocker Pfeilstorch.
This Pfeilstorch was crucial in understanding the migration of European birds. Before migration was understood, people struggled to explain the sudden annual disappearance of birds like the white stork and barn swallow. Besides migration, some theories of the time held that they turned into other kinds of birds or mice, or hibernated underwater during the winter, and such theories were even propagated by zoologists of the time. The Rostocker Pfeilstorch in particular proved that birds migrate long distances to wintering grounds.
https://en.wikipedia.org/wiki/Maize%20flour | Maize flour or corn flour is a flour ground from dried maize (corn). It is a common staple food, and is ground to coarse, medium, and fine consistencies. Coarsely ground corn flour (meal) is known as cornmeal. When maize flour is made from maize that has been soaked in an alkaline solution, e.g., limewater (a process known as nixtamalization), it is called masa harina (or masa flour), which is used for making arepas, tamales and tortillas.
See also
Semolina
List of maize dishes |
https://en.wikipedia.org/wiki/Astrobiophysics | Astrobiophysics is a field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general. It differs from astrobiology which is concerned with the search of extraterrestrial life. Examples of the topics covered by this branch of science include the effect of supernovae on life on Earth and the effects of cosmic rays on irradiation at sea level. |
https://en.wikipedia.org/wiki/Jon%20T.%20Pitts | Jon T. Pitts (born 1948) is an American mathematician working on geometric analysis and variational calculus. He is a professor at Texas A&M University.
Pitts obtained his Ph.D. from Princeton University in 1974 under the supervision of Frederick Almgren, Jr., with the thesis Every Compact Three-Dimensional Manifold Contains Two-Dimensional Minimal Submanifolds.
He received a Sloan Fellowship in 1981.
The Almgren–Pitts min-max theory is named after his teacher and him.
Selected publications
"Existence and regularity of minimal surfaces on Riemannian manifolds"
"Applications of minimax to minimal surfaces and the topology of 3-manifolds"
"Existence of minimal surfaces of bounded topological type in three-manifolds" |
https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison%20formula | In mathematics, in particular linear algebra, the Sherman–Morrison formula, named after Jack Sherman and Winifred J. Morrison, computes the inverse of the sum of an invertible matrix $A$ and the outer product, $uv^\mathsf{T}$, of vectors $u$ and $v$. The Sherman–Morrison formula is a special case of the Woodbury formula. Though named after Sherman and Morrison, it appeared already in earlier publications.
Statement
Suppose $A \in \mathbb{R}^{n \times n}$ is an invertible square matrix and $u, v \in \mathbb{R}^{n}$ are column vectors. Then $A + uv^\mathsf{T}$ is invertible iff $1 + v^\mathsf{T} A^{-1} u \neq 0$. In this case, $\left(A + uv^\mathsf{T}\right)^{-1} = A^{-1} - \dfrac{A^{-1} u v^\mathsf{T} A^{-1}}{1 + v^\mathsf{T} A^{-1} u}.$
Here, $uv^\mathsf{T}$ is the outer product of the two vectors $u$ and $v$. The general form shown here is the one published by Bartlett.
Proof
($\Leftarrow$) To prove that the backward direction ($A + uv^\mathsf{T}$ is invertible with the inverse given above) is true, we verify the properties of the inverse. A matrix $Y$ (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix $X$ (in this case $A + uv^\mathsf{T}$) if and only if $XY = YX = I$.
We first verify that the right-hand side ($Y$) satisfies $XY = I$: $XY = \left(A + uv^\mathsf{T}\right)\left(A^{-1} - \dfrac{A^{-1} u v^\mathsf{T} A^{-1}}{1 + v^\mathsf{T} A^{-1} u}\right) = I + uv^\mathsf{T} A^{-1} - \dfrac{u v^\mathsf{T} A^{-1} + u \left(v^\mathsf{T} A^{-1} u\right) v^\mathsf{T} A^{-1}}{1 + v^\mathsf{T} A^{-1} u}$. Since $v^\mathsf{T} A^{-1} u$ is a scalar, the numerator of the last fraction equals $u\left(1 + v^\mathsf{T} A^{-1} u\right)v^\mathsf{T} A^{-1}$, so the fraction reduces to $uv^\mathsf{T} A^{-1}$ and $XY = I$.
To end the proof of this direction, we need to show that $YX = I$ in a similar way as above.
(In fact, the last step can be avoided, since for square matrices $X$ and $Y$, $XY = I$ is equivalent to $YX = I$.)
($\Rightarrow$) Reciprocally, if $1 + v^\mathsf{T} A^{-1} u = 0$, then via the matrix determinant lemma, $\det\left(A + uv^\mathsf{T}\right) = \left(1 + v^\mathsf{T} A^{-1} u\right)\det(A) = 0$, so $A + uv^\mathsf{T}$ is not invertible.
Application
If the inverse of $A$ is already known, the formula provides a numerically cheap way to compute the inverse of $A$ corrected by the matrix $uv^\mathsf{T}$ (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of $A + uv^\mathsf{T}$ does not have to be computed from scratch (which in general is expensive), but can be computed by correcting (or perturbing) $A^{-1}$.
Using unit columns (columns from the identity matrix) for $u$ or $v$, individual columns or rows of $A$ may be manipulated and a correspondingly updated inverse computed relatively cheaply in this way. In the general case, where $A$ is an $n$-by-$n$ matrix and $u$ and $v$ are arbitrary vectors of dimension $n$, the whole matrix is updated and the computation takes |
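A quick numerical check of the formula (a sketch with random, well-conditioned data; the update costs $O(n^2)$ once $A^{-1}$ is known, versus $O(n^3)$ for inverting from scratch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # comfortably invertible
u = rng.normal(size=(n, 1))
v = rng.normal(size=(n, 1))

A_inv = np.linalg.inv(A)
denom = 1.0 + (v.T @ A_inv @ u).item()        # invertibility needs denom != 0
sm_inv = A_inv - (A_inv @ u @ v.T @ A_inv) / denom

direct = np.linalg.inv(A + u @ v.T)           # full re-inversion, for comparison
print(np.allclose(sm_inv, direct))            # True
```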
https://en.wikipedia.org/wiki/Scissor%20%28fish%29 | Scissor macrocephalus is a species of characin endemic to Suriname. It is the only member of its genus. |
https://en.wikipedia.org/wiki/Piophila | Piophila is a genus of small flies which includes the species known as the cheese fly. Both Piophila species feed on carrion, including human corpses.
Description
Piophila are small dark flies with unmarked wings. The setulae (fine hairs) on the thorax are confined to three distinct rows.
Species
There are two species in the genus Piophila:
Piophila casei (Linnaeus, 1758), the cheese fly
Piophila megastigmata J. McAlpine, 1978 |
https://en.wikipedia.org/wiki/Marine%20snow | In the deep ocean, marine snow (also known as "ocean dandruff") is a continuous shower of mostly organic detritus falling from the upper layers of the water column. It is a significant means of exporting energy from the light-rich photic zone to the aphotic zone below, which is referred to as the biological pump. Export production is the amount of organic matter produced in the ocean by primary production that is not recycled (remineralised) before it sinks into the aphotic zone. Because of the role of export production in the ocean's biological pump, it is typically measured in units of carbon (e.g. mg C m−2 d−1). The term was coined by explorer William Beebe as observed from his bathysphere. As the origin of marine snow lies in activities within the productive photic zone, the prevalence of marine snow changes with seasonal fluctuations in photosynthetic activity and ocean currents. Marine snow can be an important food source for organisms living in the aphotic zone, particularly for organisms that live very deep in the water column.
Composition
Marine snow is made up of a variety of mostly organic matter, including dead or dying animals and phytoplankton, protists, fecal matter, sand, and other inorganic dust. Most trapped particles are more vulnerable to grazers than they would be as free-floating individuals. Aggregates can form through abiotic processes (i.e. extrapolymeric substances). These are natural polymers exuded as waste products mostly by phytoplankton and bacteria. Mucus secreted by zooplankton (mostly salps, appendicularians, and pteropods) also contribute to the constituents of marine snow aggregates. These aggregates grow over time and may reach several centimeters in diameter, traveling for weeks before reaching the ocean floor.
Marine snow often forms during algal blooms. As phytoplankton accumulate, they aggregate or get captured in other aggregates, both of which accelerate the sinking rate. Aggregation and sinking is actually thought to be |
https://en.wikipedia.org/wiki/Blame | Blame is the act of censuring, holding responsible, or making negative statements about an individual or group that their actions or inaction are socially or morally irresponsible, the opposite of praise. When someone is morally responsible for doing something wrong, their action is blameworthy. By contrast, when someone is morally responsible for doing something right, it may be said that their action is praiseworthy. There are other senses of praise and blame that are not ethically relevant. One may, for example, praise someone's good dress sense, or blame one's own sense of style for a poor outfit.
Neurology
Blaming appears to involve brain activity in the temporoparietal junction (TPJ). The amygdala has been found to contribute when we blame others, but not when we respond to their positive actions.
Sociology and psychology
Humans—consciously and unconsciously—constantly make judgments about other people. The psychological criteria for judging others may be partly ingrained, negative, and rigid, indicating some degree of grandiosity.
Blaming provides a way of devaluing others, with the end result that the blamer feels superior, seeing others as less worthwhile and/or making the blamer "perfect". Off-loading blame means putting the other person down by emphasizing their flaws.
Victims of manipulation and abuse frequently feel responsible for causing negative feelings in the manipulator/abuser towards them and the resultant anxiety in themselves. This self-blame often becomes a major feature of victim status.
The victim gets trapped into a self-image of victimization. The psychological profile of victimization includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of guilt, shame, remorse, self-blame, and depression. This way of thinking can lead to hopelessness and despair.
Self-blame
Two main types of self-blame exist:
behavioral self-blame – undeserved blame based on actions. Victims w |
https://en.wikipedia.org/wiki/Outline%20of%20television%20broadcasting | The following outline is provided as an overview of and topical guide to television broadcasting:
Television broadcasting: form of broadcasting in which a television signal is transmitted by radio waves from a terrestrial (Earth based) transmitter of a television station to TV receivers having an antenna.
Nature of television broadcasting
Television broadcasting can be described as all of the following:
Technology
Electronics technology
Telecommunication technology
Broadcasting technology
Types of television broadcasting
Terrestrial television
Closed-circuit television
Outside broadcasting
Direct broadcast satellite (DBS)
History of television broadcasting
History of television
Television broadcasting technology
Infrastructure and broadcasting system
Television set
List of television manufacturers
Satellite television
Microwave link
Television receive-only
Television transmitter
Transposer
Transmitter station
System standards
System A the 405 line system
441 line system
Broadcast television systems
System B
System G
System H
System I
System M
Terrestrial television
Television signals
Video signal
Analogue television synchronization
Back porch
Black level
Blanking level
Chrominance
Composite video
Frame (video)
Front porch
Horizontal blanking interval
Horizontal scan rate
Luma (video)
Overscan
Raster scan
Television lines
White clipper
Vertical blanking interval
VF bandwidth
VIT signals
The sound signal
Multichannel television sound
NICAM
Pre-emphasis
Sound in syncs
Zweikanalton
Broadcast signal
Beam tilt
Downlink CNR
Earth bulge
Frequency offset
Field strength in free space
Knife-edge effect
Null fill
Output power of an analog TV transmitter
Path loss
Radio propagation
Radiation pattern
Skew
Television interference
Modulation and frequency conversion
Amplitude modulation
Frequency mixer
Frequency modulation
Quadrature amplitude modulation
Vestigial sideband modulation (VSBF)
IF and RF signal
Differential gain
Differential phase
Distortio |
https://en.wikipedia.org/wiki/84%20%28number%29 | 84 (eighty-four) is the natural number following 83 and preceding 85.
In mathematics
84 is a semiperfect number, being thrice a perfect number ($84 = 3 \times 28$), and the sum of the sixth pair of twin primes ($41 + 43$).
It is the third (or second) dodecahedral number, and the sum of the first seven triangular numbers (1, 3, 6, 10, 15, 21, 28), which makes it the seventh tetrahedral number.
The twenty-second unique prime in decimal, with notably different digits than its preceding (and known following) terms in the same sequence, contains a total of 84 digits.
A hepteract is a seven-dimensional hypercube with 84 penteract 5-faces.
84 is the limit superior of the largest finite subgroup of the mapping class group of a genus $g$ surface, divided by $g$.
Under Hurwitz's automorphisms theorem, a smooth connected Riemann surface of genus $g > 1$ will contain an automorphism group whose order is classically bound by $84(g-1)$.
There are 84 zero divisors in the 16-dimensional sedenions $\mathbb{S}$.
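These arithmetic facts are easy to check directly:

```python
# Quick sanity checks of the properties above (illustrative, not exhaustive).
assert 3 * 28 == 84                                        # thrice the perfect number 28
assert 41 + 43 == 84                                       # sixth twin prime pair
assert sum(n * (n + 1) // 2 for n in range(1, 8)) == 84    # first seven triangular numbers
assert 7 * 8 * 9 // 6 == 84                                # tetrahedral number Te(7)
print("all checks pass")
```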
In astronomy
Messier object M84, a magnitude 11.0 lenticular galaxy in the constellation Virgo
The New General Catalogue object NGC 84, a single star in the constellation Andromeda
In other fields
Eighty-four is also:
The year AD 84, 84 BC, or 1984.
The number of years in the latercus, a cycle used in the past by Celtic peoples, equal to 3 cycles of the Julian Calendar and to 4 Metonic cycles and 1 octaeteris
The atomic number of polonium
The model number of Harpoon missile
WGS 84 - The latest revision of the World Geodetic System, a fixed global reference frame for the Earth.
The house number of 84 Avenue Foch
The number of the French department Vaucluse
The code for international direct dial phone calls to Vietnam
The town of Eighty Four, Pennsylvania
The company 84 Lumber
The ISBN Group Identifier for books published in Spain
A variation of the game 42 played with two sets of dominoes.
The film 84 Charing Cross Road (1987) starring Anne Bancroft and Anthony Hopkins
KKNX Radio 84 in Eugene, Oregon
The B-Side to |
https://en.wikipedia.org/wiki/Lune%20of%20Hippocrates | In geometry, the lune of Hippocrates, named after Hippocrates of Chios, is a lune bounded by arcs of two circles, the smaller of which has as its diameter a chord spanning a right angle on the larger circle. Equivalently, it is a non-convex plane region bounded by one 180-degree circular arc and one 90-degree circular arc. It was the first curved figure to have its exact area calculated mathematically.
History
Hippocrates wanted to solve the classic problem of squaring the circle, i.e. constructing a square by means of straightedge and compass, having the same area as a given circle. He proved that the lune bounded by the arcs labeled E and F in the figure has the same area as triangle ABO. This afforded some hope of solving the circle-squaring problem, since the lune is bounded only by arcs of circles. Heath concludes that, in proving his result, Hippocrates was also the first to prove that the area of a circle is proportional to the square of its diameter.
Hippocrates' book on geometry in which this result appears, Elements, has been lost, but may have formed the model for Euclid's Elements. Hippocrates' proof was preserved through the History of Geometry compiled by Eudemus of Rhodes, which has also not survived, but which was excerpted by Simplicius of Cilicia in his commentary on Aristotle's Physics.
Not until 1882, with Ferdinand von Lindemann's proof of the transcendence of π, was squaring the circle proved to be impossible.
Proof
Hippocrates' result can be proved as follows: The center of the smaller circle, on which the 180-degree arc lies, is the midpoint of the hypotenuse of the isosceles right triangle ABO. Therefore, the diameter of the larger circle is $\sqrt{2}$ times the diameter of the smaller circle. Consequently, the smaller circle has half the area of the larger circle, and therefore the quarter circle is equal in area to the semicircle. Subtracting the crescent-shaped area from the quarter circle gives the triangle, and subtr |
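In modern notation the bookkeeping is short (a sketch, writing $r$ for the radius of the larger circle, so the chord subtending the right angle has length $r\sqrt{2}$):

```latex
\begin{aligned}
A_{\text{small circle}} &= \pi\left(\tfrac{r\sqrt{2}}{2}\right)^{2}
  = \tfrac{\pi r^{2}}{2} = \tfrac{1}{2}\,A_{\text{large circle}},\\
A_{\text{lune}} &= A_{\text{semicircle}} - \left(A_{\text{quarter circle}} - A_{\text{triangle}}\right)
  = \tfrac{\pi r^{2}}{4} - \tfrac{\pi r^{2}}{4} + \tfrac{r^{2}}{2}
  = \tfrac{r^{2}}{2} = A_{\text{triangle}}.
\end{aligned}
```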
https://en.wikipedia.org/wiki/Lajos%20P%C3%B3sa%20%28mathematician%29 | Lajos Pósa (born 9 December 1947 in Budapest) is a Hungarian mathematician working in the topic of combinatorics, and one of the most prominent mathematics educators of Hungary, best known for his mathematics camps for gifted students. He is a winner of the Széchenyi Prize.
Paul Erdős's favorite "child", he discovered theorems at the age of 13. Since 2002, he has worked at the Rényi Institute of the Hungarian Academy of Sciences; earlier he was at the Eötvös Loránd University, at the Departments of Mathematical Analysis, Computer Science.
Biography
He was born in Budapest, Hungary on 9 December 1947. His father was a chemist, his mother a mathematics teacher. He was a child prodigy. While still in elementary school, the educator Rózsa Péter, friend of his mother introduced him to Paul Erdős, who invited him
for lunch in a restaurant, and bombarded him with mathematical questions. Pósa finished the problems sooner than his soup, which impressed Erdős, who himself had been a child prodigy, and who supported young talents with much care and competence. That is how Pósa’s first paper was born, co-authored with Erdős (hence his Erdős number is 1).
He went to the first special mathematics class of the country at Fazekas Mihály Secondary School from 1962 to 1966, where his classmates included Miklós Laczkovich, László Lovász, Zsolt Baranyai, István Berkes, Katalin Vesztergombi and Péter Major. He won first prize at the International Mathematical Olympiad in 1966 (Bulgaria) and second prize in 1965 (Germany).
He started his Mathematics studies at ELTE University in 1966, and graduated in 1971. From 1971 to 1982 he worked at the Department of Mathematical Analysis at ELTE University, and he obtained a doctorate in 1983 with his dissertation about Hamiltonian circuits of random graphs. From 1984 to 2002 he worked at the Department of Computer Science at ELTE University, and since 2002 he has been a member of the Rényi Mathematical Institute.
Despite his significant re |
https://en.wikipedia.org/wiki/Open%20Inventor | Open Inventor, originally IRIS Inventor, is a C++ object-oriented retained mode 3D graphics toolkit designed by SGI to provide a higher layer of programming for OpenGL. Its main goals are better programmer convenience and efficiency. Open Inventor exists as both proprietary software and free and open-source software, subject to the requirements of the GNU Lesser General Public License (LGPL), version 2.1.
The primary objective was to make 3D programming accessible by introducing an object-oriented API, allowing developers to create complex scenes without the intricacies of low-level OpenGL. The toolkit incorporated features like scene graphs, pre-defined shapes, and automatic occlusion culling to streamline scene management. While Open Inventor focused on ease of use, the OpenGL Performer project, spawned from the same context, emphasized performance optimization. The two projects later converged in an attempt to strike a balance between accessibility and performance, culminating in initiatives like Cosmo 3D and OpenGL++. These projects underwent various stages of development and refinement, contributing to the evolution of 3D graphics programming paradigms.
Early history
Around 1988–1989, Wei Yen asked Rikk Carey to lead the IRIS Inventor project. Their goal was to create a toolkit that made developing 3D graphics applications easier to do. The strategy was based on the premise that people were not developing enough 3D applications with IRIS GL because it was too time-consuming to do so with the low-level interface provided by IRIS GL. If 3D programming were made easier, through the use of an object oriented API, then more people would create 3D applications and SGI would benefit. Therefore, the credo was always “ease of use” before “performance”, and soon the tagline “3D programming for humans” was being used widely.
Use
OpenGL (OGL) is a low level application programming interface that takes lists of simple polygons and renders them as quickly as possible. To |
https://en.wikipedia.org/wiki/Sarcoidosis | Sarcoidosis (also known as Besnier–Boeck–Schaumann disease) is a disease involving abnormal collections of inflammatory cells that form lumps known as granulomata. The disease usually begins in the lungs, skin, or lymph nodes. Less commonly affected are the eyes, liver, heart, and brain, though any organ can be affected. The signs and symptoms depend on the organ involved. Often, no, or only mild, symptoms are seen. When it affects the lungs, wheezing, coughing, shortness of breath, or chest pain may occur. Some may have Löfgren syndrome with fever, large lymph nodes, arthritis, and a rash known as erythema nodosum.
The cause of sarcoidosis is unknown. Some believe it may be due to an immune reaction to a trigger such as an infection or chemicals in those who are genetically predisposed. Those with affected family members are at greater risk. Diagnosis is partly based on signs and symptoms, which may be supported by biopsy. Findings that make it likely include large lymph nodes at the root of the lung on both sides, high blood calcium with a normal parathyroid hormone level, or elevated levels of angiotensin-converting enzyme in the blood. The diagnosis should only be made after excluding other possible causes of similar symptoms such as tuberculosis.
Sarcoidosis may resolve without any treatment within a few years. However, some people may have long-term or severe disease. Some symptoms may be improved with the use of anti-inflammatory drugs such as ibuprofen. In cases where the condition causes significant health problems, steroids such as prednisone are indicated. Medications such as methotrexate, chloroquine, or azathioprine may occasionally be used in an effort to decrease the side effects of steroids. The risk of death is 1–7%. The chance of the disease returning in someone who has had it previously is less than 5%.
In 2015, pulmonary sarcoidosis and interstitial lung disease affected 1.9 million people globally and they resulted in 122,000 deaths. It is m |
https://en.wikipedia.org/wiki/Middle%20East%20Treaty%20Organization | The Middle East Treaty Organization (METO) is a non-governmental organization founded in 2017 by a coalition of civil-society activists and disarmament practitioners, with the aim to rid the Middle East of all weapons of mass destruction (WMD). This proposal is in line with the 1970s proposal for a Middle East nuclear weapon free zone, albeit with broader scope following the 1990 Mubarak Initiative to include chemical and biological as well as nuclear weapons.
Working toward the broader vision of regional security and peace, METO defines its purpose as the establishment of a zone free of weapons of mass destruction (WMDFZ) in the Middle East. To achieve that end, the organization embraces a traditional treaty-based approach relying on diplomatic mechanisms and civil society campaigns. This strategy is supported through programming and events centered around policy debates, advocacy and education.
Three strategic pillars underlie METO's treaty-based approach for achieving the Middle East WMDFZ:
A WMDFZ Treaty, based on a text negotiated, agreed and adopted by regional governments and relevant stakeholders through an inclusive, multilateral track I and track II diplomatic process facilitated by METO and partner organizations (including formal United Nations negotiations).
A regional organization, which must be established to oversee and carry out functions necessary to the treaty’s eventual implementation, verification and compliance.
Engagement with civil society, in particular to foster a civil society movement that can formulate demands to regional and international governments to advance the goals of the proposed treaty.
METO is an international partner of International Campaign to Abolish Nuclear Weapons, International Physicians for the Prevention of Nuclear War, Geneva Centre for Security Policy, British American Security Information Council, Abolition 2000, and Geneva Disarmament Platform.
Draft Treaty and Annual UN Conference
METO began facilitating t |
https://en.wikipedia.org/wiki/Pediatric%20and%20Developmental%20Pathology | Pediatric and Developmental Pathology is a bimonthly peer-reviewed medical journal covering clinical pathology as it relates to pediatrics. It was established in 1998 and is published by SAGE Publications. It is the official journal of the Society for Pediatric Pathology and the Paediatric Pathology Society. The editor-in-chief is Pierre Russo (Children's Hospital of Philadelphia). According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.250, ranking it 61st out of 79 in the category "Pathology" and 85th out of 124 journals in the category "Pediatrics". |
https://en.wikipedia.org/wiki/Penetrance | Penetrance in genetics is the proportion of individuals carrying a particular variant (or allele) of a gene (the genotype) that also expresses an associated trait (the phenotype). In medical genetics, the penetrance of a disease-causing mutation is the proportion of individuals with the mutation that exhibit clinical symptoms among all individuals with such a mutation. For example, if a mutation in the gene responsible for a particular autosomal dominant disorder has 95% penetrance, then 95% of those with the mutation will develop the disease, while 5% will not.
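In symbols (illustrative numbers assumed): penetrance $=$ carriers showing the phenotype $/$ all carriers; for example, 190 symptomatic carriers out of 200 gives $190/200 = 0.95$, i.e. 95% penetrance.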
A condition, most commonly inherited in an autosomal dominant manner, is said to show complete penetrance if clinical symptoms are present in all individuals who have the disease-causing mutation. A condition which shows complete penetrance is neurofibromatosis type 1 – every person who has a mutation in the gene will show symptoms of the condition. The penetrance is 100%.
Common examples used to show degrees of penetrance are often highly penetrant. There are several reasons for this:
Highly penetrant alleles, and highly heritable symptoms, are easier to demonstrate, because if the allele is present, the phenotype is generally expressed. Mendelian genetic concepts such as recessiveness, dominance, and co-dominance are fairly simple additions to this principle.
Alleles which are highly penetrant are more likely to be noticed by clinicians and geneticists, and alleles for symptoms which are highly heritable are more likely to be inferred to exist, and then are more easily tracked down.
Degrees
Complete and incomplete or reduced penetrance: An allele is said to have complete penetrance if all individuals who have the disease-causing mutation have clinical symptoms of the disease. In incomplete or reduced penetrance, some individuals will not express the trait even though they carry the allele. An example of an autosomal dominant condition showing incomplete penetrance is familial breast cancer due to muta |
https://en.wikipedia.org/wiki/Comminution | Comminution is the reduction of solid materials from one average particle size to a smaller average particle size, by crushing, grinding, cutting, vibrating, or other processes. In geology, it occurs naturally during faulting in the upper part of the Earth's crust. In industry, it is an important unit operation in mineral processing, ceramics, electronics, and other fields, accomplished with many types of mill. In dentistry, it is the result of mastication of food. In general medicine, it is one of the most traumatic forms of bone fracture.
Within industrial uses, the purpose of comminution is to reduce the size and to increase the surface area of solids. It is also used to free useful materials from matrix materials in which they are embedded, and to concentrate minerals.
Energy requirements
The comminution of solid materials consumes energy, which is used to break up the solid into smaller pieces. The comminution energy can be estimated by the following laws (representative formulas are sketched after the list):
Rittinger's law, which assumes that the energy consumed is proportional to the newly generated surface area;
Kick's law, which relates the energy to the sizes of the feed particles and the product particles;
Bond's law, which assumes that the total work useful in breakage is inversely proportional to the square root of the diameter of the product particles, [implying] theoretically that the work input varies as the length of the new cracks made in breakage.
Holmes's law, which modifies Bond's law by substituting the square root with an exponent that depends on the material.
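In symbols (one common parameterization, given here as a sketch: $x_1$ is a characteristic feed size, $x_2$ the product size, and $C_R$, $C_K$, $W_i$ are material-dependent constants):

$$E_R = C_R\left(\frac{1}{x_2} - \frac{1}{x_1}\right), \qquad E_K = C_K \ln\frac{x_1}{x_2}, \qquad E_B = W_i\left(\frac{10}{\sqrt{x_2}} - \frac{10}{\sqrt{x_1}}\right),$$

with Holmes's law obtained from Bond's form by replacing the square root with a material-dependent exponent.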
Forces
There are three forces which typically are used to effect the comminution of particles: impact, shear, and compression.
Methods
There are several methods of comminution. Comminution of solid materials requires different types of crushers and mills depending on the feed properties such as hardness at various size ranges and application requirements such as throughput and maintenance. The most common machines for the comminution of coarse |
https://en.wikipedia.org/wiki/Elmer%20Rees | Elmer Gethin Rees, (19 November 1941 – 4 October 2019) was a Welsh mathematician with publications in areas ranging from topology, differential geometry, algebraic geometry, linear algebra and Morse theory to robotics. He held the post of Director of the Heilbronn Institute for Mathematical Research, a partnership between the University of Bristol and the British signals intelligence agency GCHQ, from its creation in 2005 until 2009.
Biography
Rees was born in Llandybie and grew up in Wales. He studied at St Catharine's College, Cambridge gaining a BA before moving on to the University of Warwick, where he completed his PhD in 1967. His thesis on Projective Spaces and Associated Maps, was written under the supervision of David B. A. Epstein.
Rees's career had taken him to University of Hull, the Institute for Advanced Study in Princeton, New Jersey, Swansea University and St Catherine's College, Oxford, before becoming a professor at the University of Edinburgh in 1979, where he remained until retiring from the post in 2005.
He was elected as a fellow of the Royal Society of Edinburgh in 1982. One of his most notable legacies was the establishment of the International Centre for Mathematical Sciences.
Rees was appointed Commander of the Order of the British Empire (CBE) in the 2009 Birthday Honours.
While at the universities of Oxford and Edinburgh, he supervised at least 15 PhD students, including Anthony Bahri, John D. S. Jones, Gregory Lupton, Jacob Mostovoy, Simon Willerton and Richard Hepworth.
https://en.wikipedia.org/wiki/Inductive%20tensor%20product | The finest locally convex topological vector space (TVS) topology on $X \otimes Y$, the tensor product of two locally convex TVSs, making the canonical map $\cdot \otimes \cdot : X \times Y \to X \otimes Y$ (defined by sending $(x, y)$ to $x \otimes y$) continuous is called the inductive topology or the $\iota$-topology. When $X \otimes Y$ is endowed with this topology then it is denoted by $X \otimes_\iota Y$ and called the inductive tensor product of $X$ and $Y$.
Preliminaries
Throughout let $X$, $Y$, and $Z$ be locally convex topological vector spaces and $L : X \to Y$ be a linear map.
$L : X \to Y$ is a topological homomorphism or homomorphism, if it is linear, continuous, and $L : X \to \operatorname{Im} L$ is an open map, where $\operatorname{Im} L$, the image of $L$, has the subspace topology induced by $Y$.
If $S$ is a subspace of $X$ then both the quotient map $X \to X/S$ and the canonical injection $S \to X$ are homomorphisms. In particular, any linear map $L : X \to Y$ can be canonically decomposed as follows: $X \to X/\ker L \overset{L_0}{\longrightarrow} \operatorname{Im} L \to Y$ where $L_0\left(x + \ker L\right) := L(x)$ defines a bijection.
The set of continuous linear maps $X \to Z$ (resp. continuous bilinear maps $X \times Y \to Z$) will be denoted by $L(X; Z)$ (resp. $B(X, Y; Z)$) where if $Z$ is the scalar field then we may instead write $L(X)$ (resp. $B(X, Y)$).
We will denote the continuous dual space of $X$ by $X'$ and the algebraic dual space (which is the vector space of all linear functionals on $X$, whether continuous or not) by $X^{\#}$.
To increase the clarity of the exposition, we use the common convention of writing elements of $X'$ with a prime following the symbol (e.g. $x'$ denotes an element of $X'$ and not, say, a derivative, and the variables $x$ and $x'$ need not be related in any way).
A linear map $L : H \to H$ from a Hilbert space into itself is called positive if $\langle L(x), x \rangle \geq 0$ for every $x \in H$. In this case, there is a unique positive map $r : H \to H$, called the square-root of $L$, such that $L = r \circ r$.
If $L : H_1 \to H_2$ is any continuous linear map between Hilbert spaces, then $L^* \circ L$ is always positive. Now let $R : H_1 \to H_1$ denote its positive square-root, which is called the absolute value of $L$. Define $U$ first on $\operatorname{Im} R$ by setting $U(R(x)) := L(x)$ for $x \in H_1$ and extending $U$ continuously to $\overline{\operatorname{Im} R}$, and then define $U$ on $\ker R$ by setting $U(x) := 0$ for $x \in \ker R$ and extend this map linearly to all of $H_1$. The map $U\big\vert_{\operatorname{Im} R} : \operatorname{Im} R \to \operatorname{Im} L$ is a surjective isometry and $L = U \circ R$.
A linear map is called compact or completely continuous if there is a neighborhood of the origi |
https://en.wikipedia.org/wiki/Maria%20New | Maria Iandolo New is a professor of Pediatrics, Genomics and Genetics at Icahn School of Medicine at Mount Sinai in New York City. She is an expert in congenital adrenal hyperplasia (CAH), a genetic condition affecting the adrenal gland that can affect sexual development.
Medical education
New received her undergraduate degree from Cornell University in Ithaca, New York, in 1950, and her M. D. from the Perelman School of Medicine at the University of Pennsylvania in Philadelphia, in 1954. She completed an internship in medicine at Bellevue Hospital in New York, followed by a residency in pediatrics at the New York Hospital. From 1957 to 1958 she studied renal functioning under a fellowship from the National Institutes of Health (NIH). She was a research pediatrician to the Diabetic Study Group of the Comprehensive Care Teaching Program at the New York Hospital-Cornell Medical Center from 1958 to 1961, and had a second NIH fellowship under Ralph E. Peterson from 1961 to 1964, to study specific steroid hormone production during infancy, childhood and adolescence.
Career
In 1964, New was appointed Chief of Pediatric Endocrinology at Cornell University Medical College, a position she held for 40 years. In 1978, she was named Harold and Percy Uris Professor of Pediatric Endocrinology and Metabolism. In 1980, New was appointed Chairman of the Department of Pediatrics at Cornell University Medical College and Pediatrician-in-Chief of the Department of Pediatrics at New York-Presbyterian Hospital. She was one of the few women in the country to serve as Chair of a major division of a medical college, and her tenure lasted for 22 years. While Chairman, New founded and directed the 8-bed Children's Clinical Research Center, a clinical research center in pediatrics with groundbreaking research in pediatric endocrinology, hematology, and immunology, during the emergence of AIDS. In 2004, New was recruited to the Mount Sinai School of Medicine as Professor of Pediatrics and H |
https://en.wikipedia.org/wiki/Removable%20media | In computing, removable media are data storage media designed to be readily inserted into and removed from a system. Most early removable media, such as floppy disks and optical discs, require a dedicated read/write device (i.e. a drive) to be installed in the computer, while others, such as USB flash drives, are plug-and-play with all the hardware required to read them built into the device, so they only need driver software to be installed in order to communicate with the device. Some removable media readers/drives are integrated into the computer case, while others are standalone devices that need to be additionally installed or connected.
Examples of removable media that require a dedicated reader drive include:
Optical discs, e.g. Blu-rays (both standard and UHD versions), DVDs, CDs
Flash memory-based memory cards, e.g. CompactFlash, Secure Digital, Memory Stick
Magnetic storage media
Floppy and Zip disks (now obsolete)
Disk packs (now obsolete)
Magnetic tapes (now obsolete)
Paper data storage, e.g. punched cards, punched tapes (now obsolete)
Examples of removable media that are standalone plug-and-play devices that carry their own reader hardware include:
USB flash drives
Portable storage devices
Dedicated external solid state drives (SSD)
Enclosed mass storage drives, i.e. modified hard disk drives (HDD)/internal SSDs
Peripheral devices that have integrated data storage capability
Digital cameras
Mobile devices such as smartphones, tablets and handheld game consoles
Portable media players
Other external or dockable peripherals that have expandable removable media capabilities, usually via a USB port or memory card reader
USB hubs
Wired or wireless printers
Network routers, access points and switches
Using removable media can pose some computer security risks, including viruses, data theft and the introduction of malware.
History
The earliest form of removable media, punched cards and tapes, predates the electronic computer by cen |
https://en.wikipedia.org/wiki/Lyapunov%20exponent | In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector $\delta\mathbf{Z}_0$ diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by
$$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$$
where $\lambda$ is the Lyapunov exponent.
The rate of separation can be different for different orientations of initial separation vector. Thus, there is a spectrum of Lyapunov exponents—equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.
The exponent is named after Aleksandr Lyapunov.
Definition of the maximal Lyapunov exponent
The maximal Lyapunov exponent can be defined as follows:
$$\lambda = \lim_{t \to \infty} \lim_{|\delta\mathbf{Z}_0| \to 0} \frac{1}{t} \ln \frac{|\delta\mathbf{Z}(t)|}{|\delta\mathbf{Z}_0|}.$$
The limit $|\delta\mathbf{Z}_0| \to 0$ ensures the validity of the linear approximation
at any time.
For a discrete-time system (maps or fixed point iterations) $x_{n+1} = f(x_n)$,
for an orbit starting with $x_0$ this translates into:
$$\lambda(x_0) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln\left|f'(x_i)\right|.$$
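As a numerical sketch of the discrete-time formula (the logistic map and all parameter values are assumptions chosen for illustration):

```python
import math

# Estimate the Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
# via lambda ~ (1/n) * sum of ln|f'(x_i)|, with f'(x) = r*(1 - 2*x).
def lyapunov_logistic(r, x0=0.3, n_transient=1000, n=100_000):
    x = x0
    for _ in range(n_transient):          # discard the transient part of the orbit
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n

print(lyapunov_logistic(4.0))             # ~ ln 2 = 0.693..., the known chaotic value
```

A positive result here is consistent with the chaotic behaviour discussed above.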
Definition of the Lyapunov spectrum
For a dynamical system with evolution equation $\dot{x} = f(x)$ in an n–dimensional phase space, the spectrum of Lyapunov exponents
$$\{\lambda_1, \lambda_2, \ldots, \lambda_n\},$$
in general, depends on the starting point $x_0$. However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. (For Hamiltonian s |
https://en.wikipedia.org/wiki/Theca | In biology, a theca (plural thecae) is a sheath or a covering.
Botany
In botany, the theca is related to plant's flower anatomy. The theca of an angiosperm consists of a pair of microsporangia that are adjacent to each other and share a common area of dehiscence called the stomium. Any part of a microsporophyll that bears microsporangia is called an anther. Most anthers are formed on the apex of a filament. An anther and its filament together form a typical (or filantherous) stamen, part of the male floral organ.
The typical anther is bilocular, i.e. it consists of two thecae. Each theca contains two microsporangia, also known as pollen sacs. The microsporangia produce the microspores, which for seed plants are known as pollen grains.
If the pollen sacs are not adjacent, or if they open separately, then no thecae are formed. In Lauraceae, for example, the pollen sacs are spaced apart and open independently.
The tissue between the locules and the cells is called the connective and the parenchyma. Both pollen sacs are separated by the stomium. When the anther is dehiscing, it opens at the stomium.
The outer cells of the theca form the epidermis. Below the epidermis, the somatic cells form the tapetum. These support the development of microspores into mature pollen grains. However, little is known about the underlying genetic mechanisms, which play a role in male sporo- and gametogenesis.
The thecal arrangement of a typical stamen can be as follows:
Divergent: both thecae in line, and forming an acute angle with the filament
Transverse (or explanate): both thecae exactly in line, at right angles with the filament
Oblique: the thecae fixed to each other in an oblique way
Parallel: the thecae fixed to each other in a parallel way
Zoology
In biology, the theca of follicle can also refer to the site of androgen production in females. The theca of the spinal cord is called the thecal sac, and intrathecal injections are made there or in the subarachnoid space o |
https://en.wikipedia.org/wiki/142%20%28number%29 | 142 (one hundred [and] forty-two) is the natural number following 141 and preceding 143.
In mathematics
There are 142 connected functional graphs on four labeled vertices, 142 planar graphs with 6 unlabeled vertices, and 142 partial involutions on five elements.
See also
The year AD 142 or 142 BC
List of highways numbered 142 |
https://en.wikipedia.org/wiki/Reaction%20%28physics%29 | As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction.
Examples
Interaction with ground
When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'.
When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.
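As an illustrative calculation (all numbers assumed): if a 70 kg person pushes down with a total force of 900 N, the ground reaction is 900 N upward, giving an upward acceleration of
$$a = \frac{900\,\mathrm{N} - 70\,\mathrm{kg} \times 9.8\,\mathrm{m/s^2}}{70\,\mathrm{kg}} \approx 3.1\,\mathrm{m/s^2}.$$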
Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle.
Gravitational forces
The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it, which would otherwise go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravi |
https://en.wikipedia.org/wiki/Arabic%20diacritics | Arabic script has numerous diacritics, which include consonant pointing known as iʻjām, and supplementary diacritics known as tashkīl. The latter include the vowel marks termed ḥarakāt (singular: ḥarakah).
The Arabic script is a modified abjad, where short consonants and long vowels are represented by letters but short vowels and consonant length are not generally indicated in writing. Tashkīl is optional to represent missing vowels and consonant length. Modern Arabic is always written with the i‘jām—consonant pointing, but only religious texts, children's books and works for learners are written with the full tashkīl—vowel guides and consonant length. It is however not uncommon for authors to add diacritics to a word or letter when the grammatical case or the meaning is deemed otherwise ambiguous. In addition, classical works and historic documents rendered to the general public are often rendered with the full tashkīl, to compensate for the gap in understanding resulting from stylistic changes over the centuries.
Tashkil (marks used as phonetic guides)
The literal meaning of tashkīl is 'forming'. As the normal Arabic text does not provide enough information about the correct pronunciation, the main purpose of tashkīl (and ḥarakāt) is to provide a phonetic guide or a phonetic aid; i.e. show the correct pronunciation for children who are learning to read or foreign learners.
The bulk of Arabic script is written without ḥarakāt (or short vowels). However, they are commonly used in texts that demand strict adherence to exact pronunciation. This is true, primarily, of the Qur'an (al-Qurʼān) and poetry. It is also quite common to add ḥarakāt to hadiths (ḥadīth; plural: ʼaḥādīth) and the Bible. Another use is in children's literature. Moreover, ḥarakāt are used in ordinary texts in individual words when an ambiguity of pronunciation cannot easily be resolved from context alone. Arabic dictionaries with vowel marks provide information about the correct pronunciation to both native and foreign Arabic speakers. In art and calligraphy, ḥarakāt might be used sim |
https://en.wikipedia.org/wiki/R%C3%A9nyi%20entropy | In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.
The Rényi entropy is important in ecology and statistics as index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.
Definition
The Rényi entropy of order $\alpha$, where $\alpha \geq 0$ and $\alpha \neq 1$, is defined as
$$H_\alpha(X) = \frac{1}{1-\alpha} \log\left(\sum_{i=1}^n p_i^\alpha\right).$$
It is further defined at $\alpha = 1$ and $\alpha = \infty$ as the limits $H_1(X) = \lim_{\alpha \to 1} H_\alpha(X)$ and $H_\infty(X) = \lim_{\alpha \to \infty} H_\alpha(X)$.
Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \ldots, x_n\}$ and corresponding probabilities $p_i = \Pr(X = x_i)$ for $i = 1, \ldots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e.
If the probabilities are $p_i = 1/n$ for all $i$, then all the Rényi entropies of the distribution are equal: $H_\alpha(X) = \log n$.
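A minimal numerical sketch of the definition (function name and test values are illustrative assumptions):

```python
import numpy as np

# Rényi entropy of order alpha for a discrete distribution p, in bits (base 2).
def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                           # zero-probability outcomes contribute nothing
    if np.isclose(alpha, 1.0):             # alpha = 1 is the Shannon-entropy limit
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

uniform = [0.25, 0.25, 0.25, 0.25]
print(renyi_entropy(uniform, 0.5), renyi_entropy(uniform, 2.0))  # both 2.0 bits
```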
In general, for all discrete random variables $X$, $H_\alpha(X)$ is a non-increasing function in $\alpha$.
Applications often exploit the following relation between the Rényi entropy and the p-norm of the vector of probabilities:
$$H_\alpha(X) = \frac{\alpha}{1-\alpha} \log\left(\|P\|_\alpha\right).$$
Here, the discrete probability distribution $P = (p_1, \ldots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^n p_i = 1$.
The Rényi entropy for any $\alpha \geq 0$ is Schur concave.
Special cases
As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The lim |
https://en.wikipedia.org/wiki/Double%20hashing | Double hashing is a computer programming technique used in conjunction with open addressing in hash tables to resolve hash collisions, by using a secondary hash of the key as an offset when a collision occurs. Double hashing with open addressing is a classical data structure on a table $T$.
The double hashing technique uses one hash value as an index into the table and then repeatedly steps forward an interval until the desired value is located, an empty location is reached, or the entire table has been searched; but this interval is set by a second, independent hash function. Unlike the alternative collision-resolution methods of linear probing and quadratic probing, the interval depends on the data, so that values mapping to the same location have different bucket sequences; this minimizes repeated collisions and the effects of clustering.
Given two random, uniform, and independent hash functions $h_1$ and $h_2$, the $i$th location in the bucket sequence for value $k$ in a hash table of $|T|$ buckets is:
$$h(i, k) = \left(h_1(k) + i \cdot h_2(k)\right) \bmod |T|.$$
Generally, $h_1$ and $h_2$ are selected from a set of universal hash functions; $h_1$ is selected to have a range of $\{0, 1, \ldots, |T|-1\}$ and $h_2$ to have a range of $\{1, \ldots, |T|-1\}$. Double hashing approximates a random distribution; more precisely, pair-wise independent hash functions yield a probability of approximately $1/|T|^2$ that any pair of keys will follow the same bucket sequence.
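As a minimal sketch of the probe sequence just described (the helper name and the toy hash functions are illustrative assumptions, not part of any standard library):

```python
# Minimal sketch of a double-hashing probe sequence; h1 and h2 are supplied
# by the caller, and h2 must never return 0 so the probe actually advances.
def probe_sequence(key, table_size, h1, h2):
    """Yield the bucket indices visited for `key`, in order."""
    start, step = h1(key) % table_size, h2(key)
    for i in range(table_size):
        yield (start + i * step) % table_size

# Toy hash functions for integer keys (illustrative only).
h1 = lambda k: k
h2 = lambda k: 1 + (k % 7)          # range {1, ..., 7}: never zero

print(list(probe_sequence(42, 8, h1, h2))[:4])   # [2, 3, 4, 5]
```

If the step returned by h2 is relatively prime to the table size, the sequence visits every bucket, which is the cycling requirement listed below.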
Selection of h2(k)
The secondary hash function should have several characteristics:
it should never yield an index of zero
it should cycle through the whole table
it should be very fast to compute
it should be pair-wise independent of $h_1$
The distribution characteristics of $h_2$ are irrelevant. It is analogous to a random-number generator.
All values returned by $h_2$ should be relatively prime to |T|.
In practice:
If division hashing is used for both functions, the divisors are chosen as primes.
If $|T|$ is a power of 2, the first and last requirements are usually satisfied by making $h_2$ always return an odd number. This has the side effect of doubling the chance of collision due to one wasted bit.
Analysi |
https://en.wikipedia.org/wiki/TURBOchannel | TURBOchannel is an open computer bus developed by DEC during the late 1980s and early 1990s. Although it is open for any vendor to implement in their own systems, it was mostly used in Digital's own systems such as the MIPS-based DECstation and DECsystem systems, in the VAXstation 4000, and in the Alpha-based DEC 3000 AXP. Digital abandoned the use of TURBOchannel in favor of the EISA and PCI buses in late 1994, with the introduction of their AlphaStation and AlphaServer systems.
History
TURBOchannel was developed in the late 1980s by Digital and was continuously revised through the early 1990s by the TURBOchannel Industry Group, an industry group set up by Digital to develop and promote the bus. TURBOchannel was an open bus from the beginning: the specification was publicly available for third-party implementation at an initial purchase cost covering the reproduction of material, as were the mechanical specifications, for implementation both in systems and in options. TURBOchannel was selected by the failed ACE (Advanced Computing Environment) initiative for use as the industry standard bus in ARC (Advanced RISC Computing) compliant machines. Digital initially expected TURBOchannel to gain widespread industry acceptance due to its status as an ARC standard, although ultimately Digital was the only major user of the TURBOchannel in their own DEC 3000 AXP, DECstation 5000 Series, DECsystem and VAXstation 4000 systems. While no third parties implemented TURBOchannel in systems, they did implement numerous TURBOchannel option modules for Digital's systems.
Although the main developer and promoter of TURBOchannel was the TURBOchannel Industry Group, Digital's TRI/ADD Program, an initiative to provide technical and marketing support to third parties implementing peripherals based on open interfaces such as FutureBus+, SCSI, VME and TURBOchannel for Digital's systems, was also involved in promoting TURBOchannel implementation and sales. The TRI/ADD Program was discontinued |
https://en.wikipedia.org/wiki/Montonen%E2%80%93Olive%20duality | Montonen–Olive duality or electric–magnetic duality is the oldest known example of strong–weak duality or S-duality according to current terminology. It generalizes the electro-magnetic symmetry of Maxwell's equations by stating that magnetic monopoles, which are usually viewed as emergent quasiparticles that are "composite" (i.e. they are solitons or topological defects), can in fact be viewed as "elementary" quantized particles with electrons playing the reverse role of "composite" topological solitons; the viewpoints are equivalent and the situation dependent on the duality. It was later proven to hold true when dealing with a N = 4 supersymmetric Yang–Mills theory. It is named after Finnish physicist Claus Montonen and British physicist David Olive after they proposed the idea in their academic paper Magnetic monopoles as gauge particles? where they state:
"There should be two 'dual equivalent' field formulations of the same theory in which electric (Noether) and magnetic (topological) quantum numbers exchange roles."
S-duality is now a basic ingredient in topological quantum field theories and string theories, especially since the 1990s with the advent of the second superstring revolution. This duality is now one of several in string theory, the AdS/CFT correspondence which gives rise to the holographic principle, being viewed as amongst the most important. These dualities have played an important role in condensed matter physics, from predicting fractional charges of the electron, to the discovery of the magnetic monopole.
Electric–magnetic duality
The idea of a close similarity between electricity and magnetism, going back to the time of André-Marie Ampère and Michael Faraday, was first made more precise with James Clerk Maxwell's formulation of his famous equations for a unified theory of electric and magnetic fields (written here for the vacuum, in units with $c = 1$):
$$\nabla \cdot \mathbf{E} = \rho, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{B} = \mathbf{J} + \frac{\partial \mathbf{E}}{\partial t}.$$
The symmetry between $\mathbf{E}$ and $\mathbf{B}$ in these equations is striking. If one ignores the sources, or adds magnetic sources, the equations are invariant under $\mathbf{E} \to \mathbf{B}$ and $\mathbf{B} \to -\mathbf{E}$.
Why should there be such symmetry between $\mathbf{E}$ and $\mathbf{B}$? In 1931 Paul Dirac was studying the quantum mechanics of an electric charge moving in a magnetic |
https://en.wikipedia.org/wiki/Auramine%E2%80%93rhodamine%20stain | The auramine–rhodamine stain (AR), also known as the Truant auramine–rhodamine stain, is a histological technique used to visualize acid-fast bacilli using fluorescence microscopy, notably species in the Mycobacterium genus. Acid-fast organisms display a reddish-yellow fluorescence. Although the auramine–rhodamine stain is not as specific for acid-fast organisms (e.g. Mycobacterium tuberculosis or Nocardia) as the Ziehl–Neelsen stain, it is more affordable and more sensitive, therefore it is often utilized as a screening tool.
AR stain is a mixture of auramine O and rhodamine B. It is carcinogenic.
See also
Auramine phenol stain (AP stain)
Biological stains |
https://en.wikipedia.org/wiki/Japan%20Society%20for%20Cell%20Biology | The Japan Society for Cell Biology is a professional society for cell biology that was founded in 1950. It has published the journal Cell Structure and Function since 1975. It also organises an annual cell biology symposium. |
https://en.wikipedia.org/wiki/Crataegus%20tanacetifolia | Crataegus tanacetifolia, the tansy-leaved thorn, is a species of hawthorn. It is native to Turkey where it occurs on dry slopes or in rocky places, usually on calcareous rocks.
It is a deciduous tree that grows up to 10 metres in height and 8 metres in width. The fruit is 10–14 mm, or up to 25 mm, in diameter, orange or rarely red in colour. It can be consumed fresh or cooked.
See also
List of hawthorn species with yellow fruit |
https://en.wikipedia.org/wiki/Interdimensional%20UFO%20hypothesis | The interdimensional hypothesis is a proposal that unidentified flying object (UFO) sightings are the result of experiencing other "dimensions" that coexist separately alongside our own in contrast with either the extraterrestrial hypothesis that suggests UFO sightings are caused by visitations from outside the Earth or the psychosocial hypothesis that argues UFO sightings are best explained as psychological or social phenomenon.
The hypothesis has been advanced by ufologists such as Meade Layne, John Keel, J. Allen Hynek, and Jacques Vallée. Proponents of the interdimensional hypothesis argue that UFOs are a modern manifestation of a phenomenon that has occurred throughout recorded human history, which in prior ages were ascribed to mythological or supernatural creatures.
Jeffrey J. Kripal, Chair in Philosophy and Religious Thought at Rice University, writes: "this interdimensional reading, long a staple of Spiritualism through the famous 'fourth dimension', would have a very long life within ufology and is still very much with us today".
History
In the 19th century, various spiritualists believed in "other dimensions". During the summer of 1947, spiritualists adapted the "other dimensions" folklore to explain recent tales of "flying discs".
Background
In the late 19th century, the metaphysical term "planes" was popularized by H. P. Blavatsky, who propounded a complex cosmology consisting of seven "planes". The term aether ("ether") was adopted from Ancient Greek via Victorian physics that would later be discredited. The term "ether" was then incorporated into the writings of 19th-century occultists.
The "etheric plane" and the "etheric body" were introduced into Theosophy by Charles Webster Leadbeater and Annie Besant to represent a hypothetical 'fourth plane', above the "planes" of solids, liquids, and gases. The term "etheric" was later used by popular occult authors such as Alice Bailey, Rudolf Steiner, and numerous others.
Theorists
Meade Layne an |
https://en.wikipedia.org/wiki/Superspreading%20event | A superspreading event (SSEV) is an event in which an infectious disease is spread much more than usual, while an unusually contagious organism infected with a disease is known as a superspreader. In the context of a human-borne illness, a superspreader is an individual who is more likely to infect others, compared with a typical infected person. Such superspreaders are of particular concern in epidemiology.
Some cases of superspreading conform to the 80/20 rule, where approximately 20% of infected individuals are responsible for 80% of transmissions, although superspreading can still be said to occur when superspreaders account for a higher or lower percentage of transmissions. In epidemics with such superspreader events, the majority of individuals infect relatively few secondary contacts. The degree to which superspreading contributes to an epidemic is often quantified by the t20 metric, which denotes the proportion of infections attributable to the most infectious 20% of the population.
SSEVs are shaped by multiple factors including a decline in herd immunity, nosocomial infections, virulence, viral load, misdiagnosis, airflow dynamics, immune suppression, and co-infection with another pathogen.
Definition
Although loose definitions of superspreader events exist, some effort has been made at defining what qualifies as a superspreader event (SSEV). Lloyd-Smith et al. (2005) define a protocol to identify a superspreader event as follows:
estimate the effective reproductive number, R, for the disease and population in question;
construct a Poisson distribution with mean R, representing the expected range of Z, the number of secondary infections caused by each case, due to stochasticity without individual variation;
define an SSEV as any infected person who infects more than Z(n) others, where Z(n) is the nth percentile of the Poisson(R) distribution.
This protocol defines a 99th-percentile SSEV as a case which causes more infections than would occur in 99% of infectious histories in a homogeneous population.
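A small sketch of steps 2–3 of this protocol (the function name and the example value of R are assumptions for illustration):

```python
from scipy.stats import poisson

# Flag a case as a 99th-percentile SSEV if it infects more people than the
# 99th percentile Z(99) of a Poisson distribution with mean R.
def ssev_threshold(R, percentile=0.99):
    return poisson.ppf(percentile, R)      # Z(n) from the protocol above

R = 1.5                                    # assumed effective reproductive number
print(ssev_threshold(R))                   # infecting more than this many qualifies
```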
Du |
https://en.wikipedia.org/wiki/Sanitary%20and%20phytosanitary%20measures%20and%20agreements | Sanitary and phytosanitary (SPS) measures are measures to protect humans, animals, and plants from diseases, pests, or contaminants.
Overview
The Agreement on the Application of Sanitary and Phytosanitary Measures is one of the final documents approved at the conclusion of the Uruguay Round of the Multilateral Trade Negotiations. It applies to all sanitary (relating to animals) and phytosanitary (relating to plants) (SPS) measures that may have a direct or indirect impact on international trade. The SPS agreement includes a series of understandings (trade disciplines) on how SPS measures will be established and used by countries when they establish, revise, or apply their domestic laws and regulations.
Countries agree to base their SPS standards on science, and as guidance for their actions, the agreement encourages countries to use standards set by international standard setting organizations. The SPS agreement seeks to ensure that SPS measures will not arbitrarily or unjustifiably discriminate against trade of certain other members nor be used to disguise trade restrictions. In this SPS agreement, countries maintain the sovereign right to provide the level of health protection they deem appropriate, but agree that this right will not be misused for protectionist purposes nor result in unnecessary trade barriers. A rule of equivalency rather than equality applies to the use of SPS measures.
The 2012 classification of non-tariff measures (NTMs) developed by the Multi-Agency Support Team (MAST), a working group of eight international organisations, classifies SPS measures as one of 16 non-tariff measure (NTM) chapters. In this classification, SPS measures are classified as chapter A and defined as "Measures that are applied to protect human or animal life from risks arising from additives, contaminants, toxins or disease-causing organisms in their food; to protect human life from plant- or animal-carried diseases; to protect animal or plant life from pests, disea |
https://en.wikipedia.org/wiki/Peirce%27s%20law | In logic, Peirce's law is named after the philosopher and logician Charles Sanders Peirce. It was taken as an axiom in his first axiomatisation of propositional logic. It can be thought of as the law of excluded middle written in a form that involves only one sort of connective, namely implication.
In propositional calculus, Peirce's law says that ((P→Q)→P)→P. Written out, this means that P must be true if there is a proposition Q such that the truth of P follows from the truth of "if P then Q". In particular, when Q is taken to be a false formula, the law says that if P must be true whenever it implies falsity, then P is true. In this way Peirce's law implies the law of excluded middle.
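Since the law is purely propositional, it can be checked mechanically; a brief truth-table sketch (illustrative, not from the article):

```python
from itertools import product

# Exhaustively verify Peirce's law ((P -> Q) -> P) -> P over all truth values.
implies = lambda a, b: (not a) or b

assert all(
    implies(implies(implies(p, q), p), p)
    for p, q in product([False, True], repeat=2)
)
print("((P -> Q) -> P) -> P holds in classical two-valued logic")
```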
Peirce's law does not hold in intuitionistic logic or intermediate logics and cannot be deduced from the deduction theorem alone.
Under the Curry–Howard isomorphism, Peirce's law is the type of continuation operators, e.g. call/cc in Scheme.
History
Here is Peirce's own statement of the law:
A fifth icon is required for the principle of excluded middle and other propositions connected with it. One of the simplest formulae of this kind is:
((x → y) → x) → x.
This is hardly axiomatical. That it is true appears as follows. It can only be false by the final consequent x being false while its antecedent (x → y) → x is true. If this is true, either its consequent, x, is true, when the whole formula would be true, or its antecedent x → y is false. But in the last case the antecedent of x → y, that is x, must be true. (Peirce, the Collected Papers 3.384).
Peirce goes on to point out an immediate application of the law:
From the formula just given, we at once get:
((x → y) → a) → x
where the a is used in such a sense that (x → y) → a means that from (x → y) every proposition follows. With that understanding, the formula states the principle of excluded middle, that from the falsity of the denial of x follows the truth of x. (Peirce, the Collected Papers 3.384).
Warning: ((x→y)→a)→x is not a tautology. How |
https://en.wikipedia.org/wiki/Peano%20curve | In geometry, the Peano curve is the first example of a space-filling curve to be discovered, by Giuseppe Peano in 1890. Peano's curve is a surjective, continuous function from the unit interval onto the unit square, however it is not injective. Peano was motivated by an earlier result of Georg Cantor that these two sets have the same cardinality. Because of this example, some authors use the phrase "Peano curve" to refer more generally to any space-filling curve.
Construction
Peano's curve may be constructed by a sequence of steps, where the ith step constructs a set Si of squares, and a sequence Pi of the centers of the squares, from the set and sequence constructed in the previous step. As a base case, S0 consists of the single unit square, and P0 is the one-element sequence consisting of its center point.
In step i, each square s of Si − 1 is partitioned into nine smaller equal squares, and its center point c is replaced by a contiguous subsequence of the centers of these nine smaller squares.
This subsequence is formed by grouping the nine smaller squares into three columns, ordering the centers contiguously within each column, and then ordering the columns from one side of the square to the other, in such a way that the distance between each consecutive pair of points in the subsequence equals the side length of the small squares. There are four such orderings possible:
Left three centers bottom to top, middle three centers top to bottom, and right three centers bottom to top
Right three centers bottom to top, middle three centers top to bottom, and left three centers bottom to top
Left three centers top to bottom, middle three centers bottom to top, and right three centers top to bottom
Right three centers top to bottom, middle three centers bottom to top, and left three centers top to bottom
Among these four orderings, the one for s is chosen in such a way that the distance between the first point of the ordering and its predecessor in Pi also equals the si |
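One way to make the construction concrete is an L-system rendering; the rules below are a commonly cited L-system for the Peano curve, and the turtle interpretation is a sketch (treat both as illustrative assumptions rather than Peano's original arithmetic definition):

```python
# L-system sketch for the Peano curve: expand the axiom "L", then walk the
# result with a turtle. "F" moves one cell, "+"/"-" turn 90 degrees.
RULES = {
    "L": "LFRFL-F-RFLFR+F+LFRFL",
    "R": "RFLFR+F+LFRFL-F-RFLFR",
}

def expand(axiom, steps):
    for _ in range(steps):
        axiom = "".join(RULES.get(ch, ch) for ch in axiom)
    return axiom

def turtle_points(program):
    x, y, dx, dy = 0, 0, 0, 1       # start at the origin, heading "up"
    pts = [(x, y)]
    for ch in program:
        if ch == "F":
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif ch == "+":
            dx, dy = dy, -dx        # turn right 90 degrees
        elif ch == "-":
            dx, dy = -dy, dx        # turn left 90 degrees
    return pts

pts = turtle_points(expand("L", 2))
print(len(pts))   # 81 centers: step i of the construction visits 9**i squares
```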
https://en.wikipedia.org/wiki/Rye%20rust | There are several rusts (Pucciniales syn. Uredinales) which affect rye (Secale cereale) including:
Puccinia spp.:
Stem rust (Puccinia graminis)
Leaf rust (Puccinia triticina)
Crown rust (Puccinia coronata)
See also
List of rye diseases
https://en.wikipedia.org/wiki/1q21.1%20deletion%20syndrome | 1q21.1 deletion syndrome is a rare aberration of chromosome 1. A human cell normally carries one pair of copies of chromosome 1. With the 1q21.1 deletion syndrome, one chromosome of the pair is not complete, because a part of the sequence of the chromosome is missing. One chromosome has the normal length and the other is too short.
In 1q21.1, the '1' stands for chromosome 1, the 'q' stands for the long arm of the chromosome and '21.1' stands for the part of the long arm in which the deletion is situated.
The syndrome is a form of the 1q21.1 copy number variations, and it is a deletion in the distal area of the 1q21.1 part. The CNV leads to a very variable phenotype, and the manifestations in individuals are quite variable. Some people who have the syndrome can function in a normal way, while others have symptoms of intellectual impairment and various physical anomalies.
1q21.1 microdeletion is a very rare chromosomal condition. Only 46 individuals with this deletion have been reported in medical literature as of August 2011.
Signs and symptoms
Approximately 75% of all children with a 1q21.1 microdeletion exhibit delayed development, notably in motor skills such as sitting, standing, and walking. Individuals may have generalized mild learning difficulties; about 30% of those diagnosed with 1q21.1 deletion syndrome are affected.
Dysmorphic craniofacial traits are common; however, they are highly varied and thus difficult to identify. Microcephaly has been reported in 39% of those with the 1q21.1 deletion.
It is not clear whether the list of symptoms is complete. Very little information is known about the syndrome. The syndrome can have completely different effects on members of the same family.
Genitourinary abnormalities include vesicoureteral reflux, hydronephrosis, inguinal hernia, cryptorchidism, and genital malformations. There have been two reported cases of Mayer-Rokitansky-Kuster-Hauser syndrome alongside 1q21.1 deletion syndrome.
The majority of a |
https://en.wikipedia.org/wiki/Frenulum%20of%20labia%20minora | The frenulum of labia minora (fourchette or posterior commissure of the labia minora) is a frenulum where the labia minora meet posteriorly.
Pathology
The fourchette may be torn during delivery due to the sudden stretching of the vulval orifice, or during copulation. To prevent this tearing in a haphazard manner, obstetricians and, less frequently, midwives may perform an episiotomy, which is a deliberate cut made in the perineum starting from the fourchette and continuing back along the perineum toward the anus. Sometimes this surgical cut may extend to involve the perineal body and thus reduce anal sphincter function. Thus some obstetricians have opted to perform a posterior-lateral cut in the perineum to prevent this potential complication from occurring.
The fourchette may also be torn in acts of violence wherein forced entry occurs such as rape. When the fourchette is torn, the bleeding which ensues sometimes requires surgical suturing for containment.
Etymology
The fourchette is named with the French word for "fork" or "wishbone", owing to its shape. (See frenulum for its etymology.)
See also
Clitoral frenulum
Fourchette piercing
Perineal tear classification |
https://en.wikipedia.org/wiki/Pyroelectricity | Pyroelectricity (from Greek: pyr (πυρ), "fire" and electricity) is a property of certain crystals which are naturally electrically polarized and as a result contain large electric fields. Pyroelectricity can be described as the ability of certain materials to generate a temporary voltage when they are heated or cooled. The change in temperature modifies the positions of the atoms slightly within the crystal structure, so that the polarization of the material changes. This polarization change gives rise to a voltage across the crystal. If the temperature stays constant at its new value, the pyroelectric voltage gradually disappears due to leakage current. The leakage can be due to electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter attached across the crystal.
Explanation
Pyroelectric charge in minerals develops on the opposite faces of asymmetric crystals. The direction in which the propagation of the charge tends is usually constant throughout a pyroelectric material, but, in some materials, this direction can be changed by a nearby electric field. These materials are said to exhibit ferroelectricity.
All known pyroelectric materials are also piezoelectric. Despite being pyroelectric, novel materials such as boron aluminum nitride (BAlN) and boron gallium nitride (BGaN) have zero piezoelectric response for strain along the c-axis at certain compositions, the two properties being closely related. However, note that some piezoelectric materials have a crystal symmetry that does not allow pyroelectricity.
Pyroelectric materials are mostly hard crystalline materials; however, soft pyroelectricity can be achieved by using electrets.
Pyroelectricity is measured as the change in net polarization (a vector) proportional to a change in temperature. The total pyroelectric coefficient measured at constant stress is the sum of the pyroelectric coefficients at constant strain (primary pyroelectric effect) and the piezoelectric |
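In symbols (a common convention, given as a sketch): a small temperature change $\Delta T$ produces a polarization change $\Delta P_i = p_i\,\Delta T$, where the vector $p_i$ is the pyroelectric coefficient (units $\mathrm{C\,m^{-2}\,K^{-1}}$).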
https://en.wikipedia.org/wiki/Opportunistic%20breeder | Flexible or opportunistic breeders mate whenever the conditions of their environment become favorable. Their ability and motivation to mate are primarily independent of day-length (photoperiod) and instead rely on cues from short-term changes in local conditions like rainfall, food abundance and temperature. Another factor is the presence of suitable breeding sites, which may only form with heavy rain or other environmental changes.
Thus, they are distinct from seasonal breeders that rely on changes in day length to induce entry into estrus and to cue mating, and continuous breeders like humans that can mate year-round. Other categories that can perhaps be subdivided under the heading "opportunistic" have been used to describe many species, notably anurans such as frogs. These include "sporadic wet" and "sporadic dry", describing animals that breed sporadically, not always under favorable conditions of rain or the lack thereof.
Many opportunistic breeders are non-mammals. Those that are mammals tend to be small rodents.
Since changes in season can coincide with favorable changes in environment, the distinction between seasonal breeder and opportunistic can be muddled. In equatorial climes, the change in seasons is not always perceptible and thus changes in day length are not remarkable. Thus, the tree kangaroo (Dendrolagus), previously categorized as a seasonal breeder, is now suspected to be an opportunistic breeder.
Additionally, opportunists can have qualities of seasonal breeders. The red crossbill exhibits a preference (not a requirement) for long-day seasonality, but requires other factors, especially food abundance and social interactions, in order to breed. Conversely, food availability by itself incompletely promotes reproductive development.
Physiology
Opportunistic breeders are typically capable of breeding at any time or becoming fertile within a short period of time. An example is the golden spiny mouse where changes in dietary salt in its |
https://en.wikipedia.org/wiki/Pauling%27s%20principle%20of%20electroneutrality | Pauling's principle of electroneutrality states that each atom in a stable substance has a charge close to zero. It was formulated by Linus Pauling in 1948 and later revised. The principle has been used to predict which of a set of molecular resonance structures would be the most significant, to explain the stability of inorganic complexes and to explain the existence of π-bonding in compounds and polyatomic anions containing silicon, phosphorus or sulfur bonded to oxygen; it is still invoked in the context of coordination complexes. However, modern computational techniques indicate many stable compounds have a greater charge distribution than the principle predicts (they contain bonds with greater ionic character).
History
Pauling first stated his "postulate of the essential electroneutrality of atoms" in his 1948 Liversidge lecture (in a broad-ranging paper that also included his ideas on the calculation of oxidation states in molecules):
“...the electronic structure of substances is such as to cause each atom to have essentially zero resultant electrical charge, the amount of leeway being not greater than about +/- ½ , and these resultant charges are possessed mainly by the most electropositive and electronegative atoms and are distributed in such a way as to correspond to electrostatic stability."
A slightly revised version was published in 1970:
“Stable molecules and crystals have electronic structures such that the electric charge of each atom is close to zero. Close to zero means between -1 and +1.”
Pauling said in his Liversidge lecture in 1948 that he had been led to the principle by a consideration of ionic bonding. In the gas phase, molecular caesium fluoride has a polar covalent bond. The large difference in electronegativity gives a calculated covalent character of 9%. In the crystal (CsF has the NaCl structure with both ions being 6-coordinate) if each bond has 9% covalent character the total covalency of Cs and F would be 54%. This would be repres |