source | text
|---|---|
https://en.wikipedia.org/wiki/Genomic%20phylostratigraphy | Genomic phylostratigraphy is a genetic statistical method developed to date the origin of specific genes by looking at their homologs across species. It was first developed at the Ruđer Bošković Institute in Zagreb, Croatia. The system links genes to their founder gene, allowing their age to be determined. This can help us better understand many evolutionary processes, such as patterns of gene birth throughout evolution or how the age of the transcriptome changes over embryonic development. Bioinformatic tools like GenEra have been developed to calculate relative gene ages based on genomic phylostratigraphy.
Method
This technique relies on the assumption that the diversity of the genome is not only due to gene duplications but also to continuous frequent de novo gene births. These genes (called "founder genes") would form from non-genic DNA sequences, as well as from changes in reading frame (or other ways of arising from within existing genes), or even from very rapid evolution of the protein that would modify the sequence beyond recognition. These new genes would at first have high evolutionary rates that would then slow down with time, allowing us to recognise their lineage in their descendants. The founder genes can then be put in a specific phylostratum. The phylostratum is represented as the clade that includes all the genes that derive from the same founder gene, signifying that this gene was formed in the common ancestor of this clade (e.g. Arthropoda, Mammalia, Metazoa, etc.). Positioning these founder genes and their descendants on different phylostrata can allow us to age them. This can then be used to analyse the origin of certain functions of proteins and developmental processes on a macroevolutionary scale, by observing connections between certain genes as well.
The original method for genomic phylostratigraphy involves the use of a BLAST sequence similarity search with a 10⁻³ E-value cutoff. The genes deemed simila |
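The age-assignment step at the heart of this method is easy to sketch. The following Python fragment is illustrative only: the phylostratum list, the `gene_hits` mapping, and the cutoff are assumptions for the example, not the interface of GenEra or any other tool. It simply assigns each gene the oldest phylostratum in which a BLAST homolog clears the E-value cutoff.

```python
# Illustrative sketch of phylostratigraphic age assignment (not GenEra's API).
# Phylostrata ordered from oldest (most inclusive clade) to youngest.
PHYLOSTRATA = ["Cellular organisms", "Eukaryota", "Metazoa", "Bilateria",
               "Chordata", "Mammalia", "Homo sapiens"]

E_VALUE_CUTOFF = 1e-3  # the cutoff used in the original method

def assign_phylostratum(gene_hits):
    """gene_hits: dict mapping phylostratum name -> best BLAST E-value.

    Returns the oldest phylostratum with a hit below the cutoff, i.e. the
    clade whose common ancestor is inferred to contain the founder gene.
    """
    for stratum in PHYLOSTRATA:  # iterate oldest first
        if gene_hits.get(stratum, float("inf")) < E_VALUE_CUTOFF:
            return stratum
    return PHYLOSTRATA[-1]  # no homologs outside the focal species

# Example: homologs detected down to Metazoa -> the gene is Metazoa-aged.
print(assign_phylostratum({"Metazoa": 1e-20, "Bilateria": 1e-35}))
```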
https://en.wikipedia.org/wiki/Pairing | In mathematics, a pairing is an R-bilinear map from the Cartesian product of two R-modules, where the underlying ring R is commutative.
Definition
Let R be a commutative ring with unit, and let M, N and L be R-modules.
A pairing is any R-bilinear map $e : M \times N \to L$. That is, it satisfies
$e(r \cdot m, n) = e(m, r \cdot n) = r \cdot e(m, n)$,
$e(m_1 + m_2, n) = e(m_1, n) + e(m_2, n)$ and $e(m, n_1 + n_2) = e(m, n_1) + e(m, n_2)$
for any $r \in R$ and any $m, m_1, m_2 \in M$ and any $n, n_1, n_2 \in N$. Equivalently, a pairing is an R-linear map
$$M \otimes_R N \to L$$
where $M \otimes_R N$ denotes the tensor product of M and N.
A pairing can also be considered as an R-linear map
$\Phi : M \to \operatorname{Hom}_R(N, L)$, which matches the first definition by setting
$\Phi(m)(n) := e(m, n)$.
A pairing is called perfect if the above map $\Phi$ is an isomorphism of R-modules.
A pairing is called non-degenerate on the right if for the above map we have that $e(m, n) = 0$ for all $m$ implies $n = 0$; similarly, $e$ is called non-degenerate on the left if $e(m, n) = 0$ for all $n$ implies $m = 0$.
A pairing is called alternating if $N = M$ and $e(m, m) = 0$ for all $m$. In particular, this implies $e(m + n, m + n) = 0$, while bilinearity shows $e(m + n, m + n) = e(m, m) + e(m, n) + e(n, m) + e(n, n) = e(m, n) + e(n, m)$. Thus, for an alternating pairing, $e(m, n) = -e(n, m)$.
Examples
Any scalar product on a real vector space V is a pairing (set $M = N = V$, $R = \mathbb{R}$ in the above definitions).
The determinant map (2 × 2 matrices over k) → k can be seen as a pairing $k^2 \times k^2 \to k$.
The Hopf map $S^3 \to S^2$ written as $h : S^2 \times S^2 \to S^2$ is an example of a pairing. For instance, Hardie et al. present an explicit construction of the map using poset models.
Pairings in cryptography
In cryptography, often the following specialized definition is used:
Let $G_1, G_2$ be additive groups and $G_T$ a multiplicative group, all of prime order $p$. Let $P \in G_1$, $Q \in G_2$ be generators of $G_1$ and $G_2$ respectively.
A pairing is a map: $e : G_1 \times G_2 \rightarrow G_T$
for which the following holds:
Bilinearity: $\forall a, b \in \mathbb{Z}/p\mathbb{Z}:\ e(aP, bQ) = e(P, Q)^{ab}$
Non-degeneracy: $e(P, Q) \neq 1$
For practical purposes, $e$ has to be computable in an efficient manner
Note that it is also common in cryptographic literature for all groups to be written in multiplicative notation.
In cases when $G_1 = G_2 = G$, the pairing is called symmetric. As $G$ is cyclic, the map $e$ will be commutative; that is, for any $P, Q \in G$, we have $e(P, Q) = e(Q, P)$. This is because for a generator $g \in G$, there exist integers $p$, $q$ such that $P = g^p$ and $Q = g^q$. Therefore $e(P, Q) = e(g^p, g^q) = e(g, g)^{pq} = e(g^q, g^p) = e(Q, P)$.
The Weil pairing is an important concept in elliptic curve cryptograp |
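As a toy illustration of the bilinearity property defined above, one can build a "pairing" from modular exponentiation: take $G_1 = G_2 = (\mathbb{Z}_p, +)$ and $G_T$ the subgroup of order p generated by g modulo q, and define e(a, b) = g^{ab} mod q. The following Python sketch is for intuition only, not a cryptographically useful pairing (real constructions use the Weil or Tate pairings on elliptic curves); the primes are assumed example values.

```python
# Toy "pairing" e(P, Q) = g^(P*Q) mod q on the additive group Z_p (demo only).
q = 1019            # prime modulus (assumed for the demo); q - 1 = 2 * 509
p = 509             # prime group order dividing q - 1
g = pow(3, 2, q)    # g = 3^2, so g^509 = 3^(q-1) = 1 (mod q): g has order 509

def e(P: int, Q: int) -> int:
    """Bilinear map Z_p x Z_p -> <g>, e(P, Q) = g^(P*Q) mod q."""
    return pow(g, (P * Q) % p, q)

P, Q, a, b = 7, 11, 13, 17
# Bilinearity (G1 = G2 written additively): e(aP, bQ) = e(P, Q)^(ab)
assert e(a * P % p, b * Q % p) == pow(e(P, Q), a * b, q)
# Symmetry, since G1 = G2: e(P, Q) = e(Q, P)
assert e(P, Q) == e(Q, P)
print("bilinearity and symmetry hold; e(P, Q) =", e(P, Q))
```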
https://en.wikipedia.org/wiki/Embedded%20Java | Embedded Java refers to versions of the Java programming language that are designed for embedded systems. Since 2010, embedded Java implementations have come closer to standard Java, and are now virtually identical to the Java Standard Edition. Since Java 9, customization of the Java runtime through modularization has removed the need for specialized Java profiles targeting embedded devices.
History
Although in the past some differences existed between embedded Java and traditional PC-based Java, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (such as consumer, industrial, white goods, healthcare, metering, and smart-market devices in general).
CORE embedded Java API for a unified Embedded Java ecosystem
In order for a software component to run on any Java system, it must target the core minimal API provided by the different providers of the embedded Java ecosystem. Companies share the same eight packages of pre-written programs. These packages (java.lang, java.io, java.util, ...) form the CORE Embedded Java API: the minimal set embedded programmers can rely on to make any worthwhile use of the Java language.
Old distinctions between SE embedded API and ME embedded API from ORACLE
Java SE embedded is based on desktop Java Platform, Standard Edition. It is designed to be used on systems with at least 32 MB of RAM, and can work on Linux ARM, x86, or Power ISA, and Windows XP and Windows XP Embedded architectures.
Java ME embedded used to be based on the Connected Device Configuration subset of Java Platform, Micro Edition. It was designed to be used on systems with at least 8 MB of RAM, and could work on Linux ARM, PowerPC, or MIPS architectures.
See also
Excelsio |
https://en.wikipedia.org/wiki/Recombinase%20polymerase%20amplification | Recombinase polymerase amplification (RPA) is a single-tube, isothermal alternative to the polymerase chain reaction (PCR). By adding a reverse transcriptase enzyme to an RPA reaction, it can detect RNA as well as DNA, without the need for a separate step to produce cDNA. Because it is isothermal, RPA can use much simpler equipment than PCR, which requires a thermal cycler. Operating best at temperatures of 37–42 °C and still working, albeit more slowly, at room temperature means RPA reactions can in theory be run quickly simply by holding a tube. This makes RPA an excellent candidate for developing low-cost, rapid, point-of-care molecular tests. In an international quality assessment of molecular detection of Rift Valley fever virus, RPA performed as well as the best RT-PCR tests, detecting less concentrated samples missed by some PCR tests and an RT-LAMP test.
RPA was developed and launched by TwistDx Ltd. (formerly known as ASM Scientific Ltd), a biotechnology company based in Cambridge, UK.
Technique
The RPA process employs three core enzymes: a recombinase, a single-stranded DNA-binding protein (SSB) and a strand-displacing polymerase.
Recombinases are capable of pairing oligonucleotide primers with homologous sequence in duplex DNA.
SSBs bind to displaced strands of DNA and prevent the primers from being displaced.
Finally, the strand displacing polymerase begins DNA synthesis where the primer has bound to the target DNA.
By using two opposing primers, much like PCR, if the target sequence is indeed present, an exponential DNA amplification reaction is initiated. No other sample manipulation such as thermal or chemical melting is required to initiate amplification. At optimal temperatures (37–42 °C), the reaction progresses rapidly and results in specific DNA amplification from just a few target copies to detectable levels, typically within 10 minutes, for rapid detection of viral genomic DNA or RNA, pathogenic bacterial genomic DNA, as well as short length aptame |
https://en.wikipedia.org/wiki/Taste%20confusion%20matrix | The Taste Confusion Matrix (TCM) is a method for studying human taste perception in which many compounds are tested at the same time. It characterizes taste quality by analyzing the patterns with which a set of some 10 stimuli are identified and confused. |
https://en.wikipedia.org/wiki/Keith%20Devlin | Keith James Devlin (born 16 March 1947) is a British mathematician and popular science writer. Since 1987 he has lived in the United States. He has dual British-American citizenship.
Biography
He was born and grew up in England, in Kingston upon Hull. There he attended a local primary school followed by Greatfield High School in Hull. In his last school year he was appointed head boy.
Devlin earned a BSc (special) in mathematics at King's College London in 1968, and a PhD in mathematics at the University of Bristol in 1971 under the supervision of Frederick Rowbottom.
Career
He then held a position as a scientific assistant in mathematics at the University of Oslo, Norway, from August to December 1972. In 1974 he became a scientific assistant in mathematics at the University of Heidelberg, Germany. In 1976 he was an assistant professor of mathematics at the University of Toronto, Canada.
From 1977 till 1987 he served as a lecturer, then reader, in mathematics at the University of Lancaster, England. From 1987 to 1989 he was a visiting professor of mathematics at Stanford University in California. From 1989 to 1993 he was the Carter Professor of Mathematics and Chair of Department at Colby College in Maine. From 1993 to 2000 he was Dean of Science and a professor of mathematics at St. Mary's College of California. From 2001 until he retired he was a senior researcher at Stanford University.
He is co-founder and executive director of Stanford University's Human-Sciences and Technologies Advanced Research Institute (2006), a co-founder of Stanford Media X university-industry research partnership program, and a senior researcher in the Center for the Study of Language and Information (CSLI). He is a commentator on National Public Radio's Weekend Edition Saturday, where he is known as "The Math Guy."
His current research is mainly focused on the use of different media to teach mathematics to different audiences. He is also co-founder and president of the company B |
https://en.wikipedia.org/wiki/SLUB%20%28software%29 | SLUB (the unqueued slab allocator) is a memory management mechanism intended for the efficient memory allocation of kernel objects which displays the desirable property of eliminating fragmentation caused by allocations and deallocations. The technique is used to retain allocated memory that contains a data object of a certain type for reuse upon subsequent allocations of objects of the same type. It is used in Linux, where it has been the default allocator since kernel version 2.6.23.
See also
Slab allocation (SLAB)
SLOB
Notes
External links
The SLUB allocator
SLUB: The unqueued slab allocator V6
Memory management algorithms
Linux kernel |
https://en.wikipedia.org/wiki/EWRC-results.com | eWRC-results.com is a Czech online database website founded in 2006. The website features data and statistics in the motorsport of rallying that ranges from World Rally Championship to national rally events dating back to 1911.
Shutdown
In early 2022, the website was forced to shut down due to financial issues.
Partnership
On 18 May 2022, DirtFish announced a partnership with the website, which would ensure greater visibility and usability in terms of rallying results. Former Hyundai Motorsport team principal Andrea Adamo oversaw the deal. |
https://en.wikipedia.org/wiki/Robin%20Thomas%20%28mathematician%29 | Robin Thomas (August 22, 1962 – March 26, 2020) was a mathematician working in graph theory at the Georgia Institute of Technology.
Thomas received his doctorate in 1985 from Charles University in Prague, Czechoslovakia (now the Czech Republic), under the supervision of Jaroslav Nešetřil. He joined the faculty at Georgia Tech in 1989, and became a Regents' Professor there,
briefly serving as the department Chair.
On March 26, 2020, he died of amyotrophic lateral sclerosis at the age of 57, after a 12-year struggle with the illness.
Awards
Thomas was awarded the Fulkerson Prize for outstanding papers in discrete mathematics twice, in 1994 as co-author of a paper on the Hadwiger conjecture, and in 2009 for the proof of the strong perfect graph theorem.
In 2011 he was awarded the Karel Janeček Foundation Neuron Prize for Lifetime Achievement in Mathematics. In 2012 he became a fellow of the American Mathematical Society.
He was named a SIAM Fellow in 2018. The January 2023 issue of the Journal of Combinatorial Theory, Series B was a tribute to his work. |
https://en.wikipedia.org/wiki/Sergei%20B.%20Kuksin | Sergei Borisovich Kuksin (Сергей Борисович Куксин, born 2 March 1955) is a French and Russian mathematician, specializing in partial differential equations (PDEs).
Kuksin received his doctorate under the supervision of Mark Vishik at Moscow State University in 1981. He was at the Steklov Institute in Moscow and at Heriot-Watt University, and is a directeur de recherche (senior researcher) at the Institut de Mathématiques de Jussieu of Paris Diderot University (Paris VII).
His research deals with KAM theory in partial differential equations (i.e. infinite dimensional Hamiltonian systems); partial differential equations involved with random perturbations, turbulence and statistical hydrodynamics; and elliptic PDEs for functions between compact manifolds.
In 1992 he was an invited speaker, with the talk KAM theory for partial differential equations, at the European Congress of Mathematicians in Paris. In 1998 he was an invited speaker at the International Congress of Mathematicians in Berlin. In 2016 he received the Lyapunov Prize from the Russian Academy of Sciences.
Selected publications
Articles
Hamiltonian perturbations of infinite-dimensional linear systems with an imaginary spectrum, Functional Analysis and Applications, Vol. 21, 1987, pp. 192–205
with Jürgen Pöschel: Invariant Cantor manifolds of quasi-periodic oscillations for a nonlinear Schrödinger equation, Annals of Mathematics, Vol. 143, 1996, pp. 149–179
A KAM-theorem for equations of the Korteweg-de Vries type, Rev. Math. Phys., Vol. 10, 1998, pp. 1–64
with Armen Shirikyan: Stochastic Dissipative PDE's and Gibbs measures, Communications in Mathematical Physics, Vol. 213, 2000, pp. 291–330
with A. Shirikyan: A Coupling Approach to Randomly Forced Nonlinear PDEs. I, Communications in Mathematical Physics, Vol. 221, 2001, pp. 351–366
with A. Shirikyan: Ergodicity for the randomly forced 2D Navier-Stokes equations, Mathematical Physics, Analysis and Geometry, Vol. 4, 2001, pp. 147–195
wit |
https://en.wikipedia.org/wiki/Vertical%20ecosystem | A vertical ecosystem is an architectural gardening system developed by Ignacio Solano from the mur végétal created by Patrick Blanc. This approach enhances the earlier mur végétal archetype by considering the relationship between the set of living organisms (the biocenosis) inhabiting a physical component (the biotope). The system is based on the automated control of nutrients and plant parameters of the original wall, adding strains of bacteria, mycorrhizal fungi and interspecific symbiosis in plant selection, creating an artificial ecosystem from inert substrates. The system was created in 2007 and patented in 2010.
Amongst the abiotic factors that influence vertical ecosystems, namely the substrate and its environmental conditions, the physico-chemical characteristics of the growing medium are decisive. The texture, porosity and depth of the substrate (the edaphic factors of a natural ecosystem) have been tested to the point of identifying materials with levels of absorption and humidity suited to the development of more than forty plant families represented by around 120 species. Moreover, the substrate used in the system developed by Ignacio Solano provides the ecosystem with the resistance necessary to serve as a high-durability biotope.
Environmental factors such as light, temperature and humidity are controlled by automated systems in interior vertical ecosystems. Ecosystems that are situated outside require study and analysis of the natural variables of their particular area, combined with a study of the behaviour of the numerous plant species in the biotope in each location. The selection and combination of species is one of the key factors for correct development. This is known as positive allelopathy.
Hydrological factors, such as pH levels, the conductivity of the water, dissolved gases and salinity, are balanced with precision so that the hydroponic system functions at its maximum capacity. The objective requir |
https://en.wikipedia.org/wiki/Oxodipine | Oxodipine is a calcium channel blocker.
Calcium channel blockers
Dihydropyridines
Carboxylate esters
Benzodioxoles |
https://en.wikipedia.org/wiki/Dolby | Dolby Laboratories, Inc. (often shortened to Dolby Labs and known simply as Dolby) is a company specializing in audio noise reduction, audio encoding/compression, spatial audio, and HDR imaging. Dolby licenses its technologies to consumer electronics manufacturers.
History
Dolby Labs was founded by Ray Dolby (1933–2013) in London, England, in 1965. In the same year, he invented the Dolby Noise Reduction system, a form of audio signal processing for reducing the background hissing sound on cassette tape recordings. His first U.S. patent on the technology was filed in 1969, four years later. The method was first used by Decca Records in the UK. After this, other companies began purchasing Dolby’s A301 technology, which was the professional noise reduction system used in recording, motion picture, broadcasting stations and communications networks. These companies included the BBC, Pye, IBC, CBS Studios, RCA, and Granada.
He moved the company headquarters to the United States (San Francisco, California) in 1976. The first product Dolby Labs produced was the Dolby 301 unit which incorporated Type A Dolby Noise Reduction, a compander-based noise reduction system. These units were intended for use in professional recording studios.
Dolby was persuaded by Henry Kloss of KLH to manufacture a consumer version of his noise reduction. Dolby worked more on companding systems and introduced Type B in 1968.
Dolby also sought to improve film sound. As the corporation's history explains:
Upon investigation, Dolby found that many of the limitations in optical sound stemmed directly from its significantly high background noise. To filter this noise, the high-frequency response of theatre playback systems was deliberately curtailed… To make matters worse, to increase dialogue intelligibility over such systems, sound mixers were recording soundtracks with so much high-frequency pre-emphasis that high distortion resulted.
The first film with Dolby sound was A Clockwork Orange (1971). The |
https://en.wikipedia.org/wiki/Air%20Sickness%20Bag%20Virtual%20Museum | The Air Sickness Bag Virtual Museum is a collection of 3,112 air sickness bags collected by museum curator Steven J. Silberberg. The museum is entirely online, with photographs of the various air sickness bags; however, the actual collection is stored at Silberberg's residence. Silberberg himself has stated that he has never flown long distances. |
https://en.wikipedia.org/wiki/Engel%20expansion | The Engel expansion of a positive real number x is the unique non-decreasing sequence of positive integers $(a_1, a_2, a_3, \ldots)$ such that
$$x = \frac{1}{a_1} + \frac{1}{a_1 a_2} + \frac{1}{a_1 a_2 a_3} + \cdots$$
For instance, Euler's number e has the Engel expansion
1, 1, 2, 3, 4, 5, 6, 7, 8, ...
corresponding to the infinite series
$$e = \frac{1}{1} + \frac{1}{1 \cdot 1} + \frac{1}{1 \cdot 1 \cdot 2} + \frac{1}{1 \cdot 1 \cdot 2 \cdot 3} + \cdots$$
Rational numbers have a finite Engel expansion, while irrational numbers have an infinite Engel expansion. If x is rational, its Engel expansion provides a representation of x as an Egyptian fraction. Engel expansions are named after Friedrich Engel, who studied them in 1913.
An expansion analogous to an Engel expansion, in which alternating terms are negative, is called a Pierce expansion.
Engel expansions, continued fractions, and Fibonacci
Kraaikamp and Wu (2004) observe that an Engel expansion can also be written as an ascending variant of a continued fraction:
$$x = \cfrac{1 + \cfrac{1 + \cfrac{1 + \cdots}{a_3}}{a_2}}{a_1}$$
They claim that ascending continued fractions such as this have been studied as early as Fibonacci's Liber Abaci (1202). This claim appears to refer to Fibonacci's compound fraction notation in which a sequence of numerators and denominators sharing the same fraction bar represents an ascending continued fraction.
If such a notation has all numerators 0 or 1, as occurs in several instances in Liber Abaci, the result is an Engel expansion. However, Engel expansion as a general technique does not seem to be described by Fibonacci.
Algorithm for computing Engel expansions
To find the Engel expansion of x, let
$$u_1 = x$$
and
$$a_k = \left\lceil \frac{1}{u_k} \right\rceil, \qquad u_{k+1} = u_k a_k - 1,$$
where $\lceil r \rceil$ is the ceiling function (the smallest integer not less than r).
If $u_i = 0$ for any i, halt the algorithm.
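A direct transcription of this algorithm into Python (a minimal sketch using exact rational arithmetic, so that the halting test $u_i = 0$ is reliable):

```python
from fractions import Fraction
from math import ceil

def engel_expansion(x, max_terms=20):
    """Compute the Engel expansion of a positive rational x.

    Uses u_1 = x, a_k = ceil(1/u_k), u_{k+1} = u_k * a_k - 1,
    halting when u_i = 0 (which happens exactly when x is rational).
    """
    u = Fraction(x)
    terms = []
    while u != 0 and len(terms) < max_terms:
        a = ceil(1 / u)     # exact for Fraction inputs
        terms.append(a)
        u = u * a - 1       # next remainder
    return terms

print(engel_expansion(Fraction(175, 64)))  # finite, since 175/64 is rational
print(engel_expansion(Fraction(1, 2)))     # [2]
```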
Iterated functions for computing Engel expansions
Another equivalent method is to consider the map
and set
where
and
Yet another equivalent method, called the modified Engel expansion, is calculated by
and
The transfer operator of the Engel map
The Frobenius–Perron transfer operator of the Engel map acts on functions with
since
and the inverse of the n-th component is which is found by solving for .
Relation to the Riemann ζ function
The Mellin tran |
https://en.wikipedia.org/wiki/Structural%20acoustics | Structural acoustics is the study of the mechanical waves in structures and how they interact with and radiate into adjacent media. The field of structural acoustics is often referred to as vibroacoustics in Europe and Asia. People that work in the field of structural acoustics are known as structural acousticians. The field of structural acoustics can be closely related to a number of other fields of acoustics including noise, transduction, underwater acoustics, and physical acoustics.
Vibrations in structures
Compressional and shear waves (isotropic, homogeneous material)
Compressional waves (often referred to as longitudinal waves) expand and contract in the same direction (or opposite) as the wave motion. The wave equation below dictates the motion of the wave in the x direction.
$$\frac{\partial^2 u}{\partial x^2} = \frac{1}{c_L^2} \frac{\partial^2 u}{\partial t^2}$$
where $u$ is the displacement and $c_L$ is the longitudinal wave speed. This has the same form as the acoustic wave equation in one dimension. $c_L$ is determined by properties (bulk modulus $B$ and density $\rho$) of the structure according to
$$c_L = \sqrt{\frac{B}{\rho}}$$
When two dimensions of the structure are small with respect to wavelength (commonly called a beam), the wave speed is dictated by Young's modulus $E$ instead of the bulk modulus $B$, and such waves are consequently slower than in infinite media.
Shear waves occur due to the shear stiffness and follow a similar equation, but with the displacement occurring in the transverse direction, perpendicular to the wave motion.
The shear wave speed, $c_S = \sqrt{G/\rho}$, is governed by the shear modulus $G$, which is less than $B$ and $E$, making shear waves slower than longitudinal waves.
Bending waves in beams and plates
Most sound radiation is caused by bending (or flexural) waves, that deform the structure transversely as they propagate. Bending waves are more complicated than compressional or shear waves and depend on material properties as well as geometric properties. They are also dispersive since different frequencies travel at different speeds.
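To make the dispersive character concrete, here is a small Python sketch of the classical thin-plate (Kirchhoff) bending-wave phase speed, $c_B = (\omega^2 D / (\rho h))^{1/4}$ with bending stiffness $D = E h^3 / (12 (1 - \nu^2))$; the steel properties and plate thickness below are assumed example values, not taken from the article.

```python
import math

# Assumed example values: a 5 mm steel plate
E   = 2.0e11   # Young's modulus, Pa
nu  = 0.3      # Poisson's ratio
rho = 7850.0   # density, kg/m^3
h   = 0.005    # plate thickness, m

D = E * h**3 / (12.0 * (1.0 - nu**2))   # bending stiffness, N*m

def bending_phase_speed(f_hz: float) -> float:
    """Kirchhoff thin-plate bending-wave phase speed at frequency f (Hz)."""
    omega = 2.0 * math.pi * f_hz
    return (omega**2 * D / (rho * h)) ** 0.25

# Dispersion: the phase speed grows with the square root of frequency
for f in (100.0, 400.0, 1600.0):
    print(f"{f:6.0f} Hz: c_B = {bending_phase_speed(f):7.1f} m/s")
```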
Modeling vibrations
Finite element analysis can be used to predict the vibrat |
https://en.wikipedia.org/wiki/Ignaz%20Semmelweis | Ignaz Philipp Semmelweis (; ; 1 July 1818 – 13 August 1865) was a Hungarian physician and scientist of German descent, who was an early pioneer of antiseptic procedures, and was described as the "saviour of mothers". Postpartum infection, also known as puerperal fever or childbed fever, consists of any bacterial infection of the reproductive tract following birth, and in the 19th century was common and often fatal. Semmelweis discovered that the incidence of infection could be drastically reduced by requiring healthcare workers in obstetrical clinics to disinfect their hands. In 1847, he proposed hand washing with chlorinated lime solutions at Vienna General Hospital's First Obstetrical Clinic, where doctors' wards had three times the mortality of midwives' wards. The maternal mortality rate dropped from 18% to less than 2%, and he published a book of his findings, Etiology, Concept and Prophylaxis of Childbed Fever in 1861.
Despite his research, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. He could offer no theoretical explanation for his findings of reduced mortality due to hand-washing, and some doctors were offended at the suggestion that they should wash their hands and mocked him for it. In 1865, the increasingly outspoken Semmelweis allegedly suffered a nervous breakdown and was committed to an asylum by his colleagues. In the asylum, he was beaten by the guards. He died 14 days later from a gangrenous wound on his right hand that may have been caused by the beating.
His findings earned widespread acceptance only years after his death, when Louis Pasteur confirmed the germ theory, giving Semmelweis' observations a theoretical explanation, and Joseph Lister, acting on Pasteur's research, practised and operated using hygienic methods, with great success.
Family and early life
Ignaz Semmelweis was born on 1 July 1818 in the Tabán neighbourhood of |
https://en.wikipedia.org/wiki/Chain%20rule%20%28probability%29 | In probability theory, the chain rule (also called the general product rule) describes how to calculate the probability of the intersection of, not necessarily independent, events or the joint distribution of random variables respectively, using conditional probabilities. The rule is notably used in the context of discrete stochastic processes and in applications, e.g. the study of Bayesian networks, which describe a probability distribution in terms of conditional probabilities.
Chain rule for events
Two events
For two events $A$ and $B$, the chain rule states that
$$P(A \cap B) = P(B \mid A)\, P(A),$$
where $P(B \mid A)$ denotes the conditional probability of $B$ given $A$.
Example
Urn A has 1 black ball and 2 white balls and another urn B has 1 black ball and 3 white balls. Suppose we pick an urn at random and then select a ball from that urn. Let event $A$ be choosing the first urn, i.e. $P(A) = P(\overline{A}) = 1/2$, where $\overline{A}$ is the complementary event of $A$. Let event $B$ be the chance we choose a white ball. The chance of choosing a white ball, given that we have chosen the first urn, is $P(B \mid A) = 2/3$. The intersection $A \cap B$ then describes choosing the first urn and a white ball from it. The probability can be calculated by the chain rule as follows:
$$P(A \cap B) = P(B \mid A)\, P(A) = \frac{2}{3} \cdot \frac{1}{2} = \frac{1}{3}.$$
Finitely many events
For events $A_1, \ldots, A_n$ whose intersection has nonzero probability, the chain rule states
$$P\left(A_1 \cap A_2 \cap \cdots \cap A_n\right) = \prod_{k=1}^{n} P\left(A_k \,\middle|\, A_1 \cap \cdots \cap A_{k-1}\right).$$
Example 1
For $n = 4$, i.e. four events, the chain rule reads
$$P(A_1 \cap A_2 \cap A_3 \cap A_4) = P(A_4 \mid A_3 \cap A_2 \cap A_1)\, P(A_3 \mid A_2 \cap A_1)\, P(A_2 \mid A_1)\, P(A_1).$$
Example 2
We randomly draw 4 cards without replacement from a deck of 52 cards. What is the probability that we have picked 4 aces?
First, we set $A_n := \{\text{an ace is drawn on the } n\text{th draw}\}$. Obviously, we get the following probabilities:
$$P(A_1) = \frac{4}{52}, \quad P(A_2 \mid A_1) = \frac{3}{51}, \quad P(A_3 \mid A_1 \cap A_2) = \frac{2}{50}, \quad P(A_4 \mid A_1 \cap A_2 \cap A_3) = \frac{1}{49}.$$
Applying the chain rule,
$$P(A_1 \cap A_2 \cap A_3 \cap A_4) = \frac{4}{52} \cdot \frac{3}{51} \cdot \frac{2}{50} \cdot \frac{1}{49} = \frac{24}{6\,497\,400} \approx 3.69 \times 10^{-6}.$$
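This example can be cross-checked numerically; the following minimal Python sketch compares the chain-rule product against the direct combinatorial count (one all-ace hand among the $\binom{52}{4}$ equally likely hands):

```python
from math import comb

# Chain rule: P(A1 ∩ A2 ∩ A3 ∩ A4) = 4/52 * 3/51 * 2/50 * 1/49
chain = (4 / 52) * (3 / 51) * (2 / 50) * (1 / 49)

# Combinatorial cross-check: 1 favorable hand out of C(52, 4)
direct = 1 / comb(52, 4)

assert abs(chain - direct) < 1e-12
print(f"P(four aces) = {chain:.3e}")   # ~3.694e-06
```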
Statement of the theorem and proof
Let $(\Omega, \mathcal{F}, P)$ be a probability space. Recall that the conditional probability of an event $A$ given an event $B$ with $P(B) > 0$ is defined as
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
Then we have the following theorem.
Chain rule for discrete random variables
Two random variables
For two discrete random variables $X, Y$, we use the events $A := \{X = x\}$ and $B := \{Y = y\}$ in the definition above, and find the joint distribution as
$$P(X = x, Y = y) = P(X = x \mid Y = y)\, P(Y = y)$$
or
$$P(X = x, Y = y) = P(Y = y \mid X = x)\, P(X = x),$$
where $P(X = x)$ is the probability distribution of $X$ and $P(X = x \mid Y = y)$ the conditional probability distribution of $X$ given $Y$. |
https://en.wikipedia.org/wiki/Position%20angle | In astronomy, position angle (usually abbreviated PA) is the convention for measuring angles on the sky. The International Astronomical Union defines it as the angle measured relative to the north celestial pole (NCP), turning positive into the direction of the right ascension. In the standard (non-flipped) images, this is a counterclockwise measure relative to the axis into the direction of positive declination.
In the case of observed visual binary stars, it is defined as the angular offset of the secondary star from the primary relative to the north celestial pole.
As the example illustrates, if one were observing a hypothetical binary star with a PA of 135°, that means an imaginary line in the eyepiece drawn from the north celestial pole to the primary (P) would be offset from the secondary (S) such that the angle NPS would be 135°.
When graphing visual binaries, the NCP is, as in the illustration, normally drawn from the center point (origin) that is the primary downward (that is, with north at bottom), and PA is measured counterclockwise. Also, the direction of the proper motion can, for example, be given by its position angle.
The definition of position angle is also applied to extended objects like galaxies, where it refers to the angle made by the major axis of the object with the NCP line.
Nautics
The concept of the position angle is inherited from nautical navigation on the oceans, where the optimum compass course is the course from a known position to a target position with minimum effort. Setting aside the influence of winds and ocean currents, the optimum course is the course of smallest distance between the two positions on the ocean surface. Computing the compass course is known as the inverse geodetic problem.
This article considers only the abstraction of minimizing the distance between a start position and a target position while traveling on the surface of a sphere with some radius R: in which direction angle relative to North should the ship steer to reach the target position?
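The standard great-circle answer to that question is the initial bearing (forward azimuth) formula $\theta = \operatorname{atan2}(\sin\Delta\lambda \cos\varphi_2,\ \cos\varphi_1 \sin\varphi_2 - \sin\varphi_1 \cos\varphi_2 \cos\Delta\lambda)$. A minimal Python sketch (the coordinates in the example are illustrative, not from the article):

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle course from (lat1, lon1) to (lat2, lon2),
    in degrees clockwise from North. Inputs in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360.0

# Example (illustrative coordinates): roughly New York -> Lisbon
print(f"steer {initial_bearing(40.7, -74.0, 38.7, -9.1):.1f} deg from North")
```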
Se |
https://en.wikipedia.org/wiki/Radical%2072 | Radical 72 or radical sun () meaning "sun" or "day" is one of the 34 Kangxi radicals (214 radicals in total) composed of 4 strokes.
In the Kangxi Dictionary, there are 453 characters (out of 49,030) to be found under this radical.
日 is also the 75th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China, with 曰 (formerly Kangxi Radical 73, "say") being its associated indexing component.
Evolution
Derived characters
Sinogram
The radical is also used as an independent Chinese character. It is one of the Kyōiku kanji or kanji taught in elementary school in Japan. It is a first-grade kanji. |
https://en.wikipedia.org/wiki/Burusho%20people | The Burusho, or Brusho, also known as the Hunzukuch, are an ethnolinguistic group indigenous to the Yasin, Hunza, Nagar, and other valleys of Gilgit–Baltistan in northern Pakistan, with a tiny minority of around 350 Burusho people residing in Jammu and Kashmir, India. Their language, Burushaski, has been classified as a language isolate.
History
Although their origins are unknown, it is claimed that the Burusho people "were indigenous to northwestern India and were pushed higher into the mountains by the movements of the Indo-Aryans, who traveled southward sometime around 1800 B.C."
Prior to the modern era, the area in which most Burusho now live was part of the princely state of Hunza under the British Raj, until becoming part of Pakistan.
Culture
The Burusho are known for their love of music and dance, along with their progressive views towards education and women.
Longevity myth
A widely repeated claim of remarkable longevity of the Hunza people has been refuted as a longevity myth, citing a life expectancy of 53 years for men and 52 for women, although with a high standard deviation. There is no evidence that Hunza life expectancy is significantly above the average of poor, isolated regions of Pakistan. Claims of health and long life were almost always based solely on the statements by the local mir (king). An author who had significant and sustained contact with Burusho people, John Clark, reported that they were overall unhealthy.
Jammu and Kashmir
A group of 350 Burusho people also reside in the Indian union territory of Jammu and Kashmir, being mainly concentrated in Batamalu, as well as in Botraj Mohalla, which is southeast of Hari Parbat. This Burusho community is descended from two former princes of the British Indian princely states of Hunza and Nagar, who with their families, migrated to this region in the 19th century A.D. They are known as the Botraj by other ethnic groups in the state, and practice Shiite Islam. Arranged marriages are custom |
https://en.wikipedia.org/wiki/H-object | In mathematics, specifically homotopical algebra, an H-object is a categorical generalization of an H-space, which can be defined in any category $\mathcal{C}$ with a product $\times$ and an initial object $*$. These are useful constructions because they help export some of the ideas from algebraic topology and homotopy theory into other domains, such as in commutative algebra and algebraic geometry.
Definition
In a category $\mathcal{C}$ with a product $\times$ and initial object $*$, an H-object is an object $X$ together with an operation $\mu : X \times X \to X$ called multiplication, together with a two-sided identity $\varepsilon : * \to X$. The structure of an H-object implies there are maps which have the commutation relations expressing that $\varepsilon$ is a two-sided unit for $\mu$; that is, under the canonical identifications $* \times X \cong X \cong X \times *$,
$$\mu \circ (\varepsilon \times \mathrm{id}_X) = \mathrm{id}_X = \mu \circ (\mathrm{id}_X \times \varepsilon).$$
Examples
Magmas
All magmas with units are secretly H-objects in the category $\mathbf{Set}$.
H-spaces
Another example of H-objects are H-spaces in the homotopy category of topological spaces $\operatorname{Ho}(\mathbf{Top})$.
H-objects in homotopical algebra
In homotopical algebra, one class of H-objects was considered by Quillen while constructing André–Quillen cohomology for commutative rings. For this section, let all algebras be commutative, associative, and unital. If we let $A$ be a commutative ring, let $A\text{-}\mathbf{Alg}$ be the undercategory of such algebras over $A$ (meaning $A$-algebras), and set $A\text{-}\mathbf{Alg}/B$ to be the associated overcategory of objects in $A\text{-}\mathbf{Alg}$ over a fixed $A$-algebra $B$, then an H-object in this category is an algebra of the form $B \oplus M$, where $M$ is a $B$-module. These algebras have the addition and multiplication operations
$$(b, m) + (b', m') = (b + b', m + m'), \qquad (b, m) \cdot (b', m') = (b b', b m' + b' m).$$
Note that the multiplication map given above gives the H-object structure $\mu$. Notice that in addition we have the other two structure maps given by
$$B \oplus M \to B,\ (b, m) \mapsto b \qquad \text{and} \qquad \varepsilon : B \to B \oplus M,\ b \mapsto (b, 0),$$
giving the full H-object structure. Interestingly, these objects have the following property:
$$\operatorname{Hom}_{A\text{-}\mathbf{Alg}/B}(Y, B \oplus M) \cong \operatorname{Der}_A(Y, M),$$
giving an isomorphism between the $A$-derivations of $Y$ to $M$ and morphisms from $Y$ to the H-object $B \oplus M$. In fact, this implies $B \oplus M$ is an abelian group object in the category $A\text{-}\mathbf{Alg}/B$ since it gives a contravariant functor with values in abelian groups.
See also
André–Quillen cohomology
Cotangent complex
H-space |
https://en.wikipedia.org/wiki/Conquest%20of%20Pangea | Conquest of Pangea is a strategy board game, where players control evolving species battling to control sections of the mega-continent Pangea. It was released by Winning Moves Games USA in 2006 as the second game in its Immortal Eyes line. It has one expansion, Conquest of Pangea: Atlantis, which adds a new piece to the board (Atlantis) and some additional rules, as well as a few rule revisions. Neither game is in production.
Gameplay
The point of the game is to gain the most "dominance points" before the continent of Pangea has broken apart. These are earned by controlling different areas on the board, each of which has a terrain type that is randomly assigned at the beginning of the game. The game is played over a number of turns; on each turn, a player is given five "power points". The player spends these points to take different actions, such as increasing the units in a controlled area, moving to an area he controls, and battling with an opponent for control of an adjacent area. When a player gains control of two or more areas with a similar terrain type, his species gains a new power, represented on a power card. Additionally, a player gains power cards based on the actions performed during the turn. These power cards can be used to help in battles or to gain additional "power points" towards more actions on a turn. At the end of each turn, the player flips over an event card and determines what event occurs as well as how much time has passed. Once the action occurs, the card is placed in a row and the total amount of time on the cards in the row is added together. If 25 million or more years have passed (are in the row), the piece of the continent shown on the last card breaks away from Pangea. Once all 6 smaller continents have broken apart, the game ends and the person with the most "dominance points" wins.
External links
Board games introduced in 2006
Biology-themed board games
Winning Moves games |
https://en.wikipedia.org/wiki/Clinician | A clinician is a health care professional typically employed at a skilled nursing facility or clinic. Clinicians work directly with patients rather than in a laboratory, community health setting or in research. A clinician may diagnose, treat and care for patients as a psychologist, clinical pharmacist, clinical scientist, nurse, physiotherapist, dentist, optometrist, physician assistant, clinical officer, physician, or paramedic. Clinicians undergo comprehensive training and licensing examinations, and some complete graduate degrees (master's or doctorates) in their field of expertise.
The main function of a clinician is to manage a sick person in order to cure their illness, reduce pain and suffering, and extend life considering the impact of illness upon the patient and their family as well as other social factors.
See also
List of healthcare occupations |
https://en.wikipedia.org/wiki/ELFV%20dehydrogenase | In molecular biology, the ELFV dehydrogenase family of enzymes include glutamate, leucine, phenylalanine and valine dehydrogenases. These enzymes are structurally and functionally related. They contain a Gly-rich region containing a conserved Lys residue, which has been implicated in the catalytic activity, in each case a reversible oxidative deamination reaction.
Glutamate dehydrogenases (EC 1.4.1.2, EC 1.4.1.3 and EC 1.4.1.4) (GluDH) are enzymes that catalyse the NAD- and/or NADP-dependent reversible deamination of L-glutamate into alpha-ketoglutarate. GluDH isozymes are generally involved with either ammonia assimilation or glutamate catabolism. Two separate enzymes are present in yeasts: the NADP-dependent enzyme, which catalyses the amination of alpha-ketoglutarate to L-glutamate; and the NAD-dependent enzyme, which catalyses the reverse reaction - this form links the L-amino acids with the Krebs cycle, which provides a major pathway for metabolic interconversion of alpha-amino acids and alpha-keto acids.
Leucine dehydrogenase (LeuDH) is a NAD-dependent enzyme that catalyses the reversible deamination of leucine and several other aliphatic amino acids to their keto analogues. Each subunit of this octameric enzyme from Bacillus sphaericus contains 364 amino acids and folds into two domains, separated by a deep cleft. The nicotinamide ring of the NAD+ cofactor binds deep in this cleft, which is thought to close during the hydride transfer step of the catalytic cycle.
Phenylalanine dehydrogenase (PheDH) is an NAD-dependent enzyme that catalyses the reversible deamination of L-phenylalanine into phenylpyruvate.
Valine dehydrogenase (ValDH) is an NADP-dependent enzyme that catalyses the reversible deamination of L-valine into 3-methyl-2-oxobutanoate.
These enzymes contain two domains, an N-terminal dimerisation domain, and a C-terminal domain. |
https://en.wikipedia.org/wiki/Spinmechatronics | Spinmechatronics is a neologism referring to an emerging field of research concerned with the exploitation of spin-dependent phenomena and established spintronic methodologies and technologies in conjunction with electro-mechanical, magno-mechanical, acousto-mechanical and opto-mechanical systems. Most especially, spinmechatronics (or spin mechatronics) concerns the integration of micro- and nano-mechatronic systems with spin physics and spintronics.
History and origins
While spinmechatronics has been recognised only recently (2008) as an independent field, hybrid spin-mechanical system development dates back to the early nineteen-nineties, with devices combining spintronics and micromechanics emerging at the turn of the twenty-first century.
One of the longest established spinmechatronic systems is the Magnetic Resonance Force Microscope or MRFM. First proposed by J. A. Sidles in a seminal paper of 1991 – and since extensively developed both theoretically and experimentally by a number of international research groups – the MRFM operates by coupling a magnetically loaded micro-mechanical cantilever to an excited nuclear, proton or electron spin system. The MRFM concept effectively combines scanning atomic force microscopy (AFM) with magnetic resonance spectroscopy to provide a spectroscopic tool of unparalleled sensitivity. Nanometre resolution is possible, and the technique potentially forms the basis for ultra-high sensitivity, ultra-high resolution magnetic, biochemical, biomedical, and clinical diagnostics.
The synergy of micromechanics and established spintronic technologies for sensing applications is one of the most significant spinmechatronic developments of the last decade. At the beginning of this century, strain sensors incorporating magnetoresistive technologies emerged and a wide range of devices exploiting similar principles are likely to realize research and commercial potential by 2015.
Contemporary innovation in spinmechatronics drives forwa |
https://en.wikipedia.org/wiki/Natural%20killer%20cell | Natural killer cells, also known as NK cells or large granular lymphocytes (LGL), are a type of cytotoxic lymphocyte critical to the innate immune system that belong to the rapidly expanding family of known innate lymphoid cells (ILC) and represent 5–20% of all circulating lymphocytes in humans. The role of NK cells is analogous to that of cytotoxic T cells in the vertebrate adaptive immune response. NK cells provide rapid responses to virus-infected cells and other intracellular pathogens, acting at around 3 days after infection, and respond to tumor formation. Typically, immune cells detect the antigen presented on major histocompatibility complex (MHC) on infected cell surfaces, triggering cytokine release, causing the death of the infected cell by lysis or apoptosis. NK cells are unique, however, as they have the ability to recognize and kill stressed cells in the absence of antibodies and MHC, allowing for a much faster immune reaction. They were named "natural killers" because of the notion that they do not require activation to kill cells that are missing "self" markers of MHC class I. This role is especially important because harmful cells that are missing MHC I markers cannot be detected and destroyed by other immune cells, such as T lymphocyte cells.
NK cells can be identified by the presence of CD56 and the absence of CD3 (CD56+, CD3−). NK cells differentiate from CD127+ common innate lymphoid progenitor, which is downstream of the common lymphoid progenitor from which B and T lymphocytes are also derived. NK cells are known to differentiate and mature in the bone marrow, lymph nodes, spleen, tonsils, and thymus, where they then enter into the circulation. NK cells differ from natural killer T cells (NKTs) phenotypically, by origin and by respective effector functions; often, NKT cell activity promotes NK cell activity by secreting interferon gamma. In contrast to NKT cells, NK cells do not express T-cell antigen receptors (TCR) or pan T marker CD3 or su |
https://en.wikipedia.org/wiki/Atomistix%20ToolKit | Atomistix ToolKit (ATK) is a commercial software for atomic-scale modeling and simulation of nanosystems. The software was originally developed by Atomistix A/S, and was later acquired by QuantumWise following the Atomistix bankruptcy. QuantumWise was then acquired by Synopsys in 2017.
Atomistix ToolKit is a further development of TranSIESTA-C, which in turn is based on the technology, models, and algorithms developed in the academic codes TranSIESTA and McDCal, employing localized basis sets as developed in SIESTA.
Features
Atomistix ToolKit combines density functional theory with non-equilibrium Green's functions for first principles electronic structure and transport calculations of
electrode—nanostructure—electrode systems (two-probe systems)
molecules
periodic systems (bulk crystals and nanotubes)
The key features are
Calculation of transport properties of two-probe systems under an applied bias voltage
Calculation of energy spectra, wave functions, electron densities, atomic forces, effective potentials etc.
Calculation of spin-polarized physical properties
Geometry optimization
A Python-based NanoLanguage scripting environment
See also
Atomistix Virtual NanoLab — a graphical user interface
NanoLanguage
Atomistix
Quantum chemistry computer programs
Molecular mechanics programs |
https://en.wikipedia.org/wiki/Longitudinal%20redundancy%20check | In telecommunication, a longitudinal redundancy check (LRC), or horizontal redundancy check, is a form of redundancy check that is applied independently to each of a parallel group of bit streams. The data must be divided into transmission blocks, to which the additional check data is added.
The term usually applies to a single parity bit per bit stream, calculated independently of all the other bit streams (BIP-8).
This "extra" LRC word at the end of a block of data is very similar to checksum and cyclic redundancy check (CRC).
Optimal rectangular code
While simple longitudinal parity can only detect errors, it can be combined with additional error-control coding, such as a transverse redundancy check (TRC), to correct errors. The transverse redundancy check is stored on a dedicated "parity track".
Whenever any single-bit error occurs in a transmission block of data, such two-dimensional parity checking, or "two-coordinate parity checking",
enables the receiver to use the TRC to detect which byte the error occurred in, and the LRC to detect exactly which track the error occurred in, to discover exactly which bit is in error, and then correct that bit by flipping it.
Pseudocode
International standard ISO 1155 states that a longitudinal redundancy check for a sequence of bytes may be computed in software by the following algorithm:
lrc := 0
for each byte b in the buffer do
lrc := (lrc + b) and 0xFF
lrc := (((lrc XOR 0xFF) + 1) and 0xFF)
which can be expressed as "the 8-bit two's-complement value of the sum of all bytes modulo 2⁸" (x AND 0xFF is equivalent to x MOD 2⁸).
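A direct Python transcription of the ISO 1155 algorithm above (a minimal sketch):

```python
def lrc_iso1155(data: bytes) -> int:
    """Longitudinal redundancy check per the ISO 1155 algorithm above."""
    lrc = 0
    for b in data:
        lrc = (lrc + b) & 0xFF          # 8-bit additive accumulation
    return ((lrc ^ 0xFF) + 1) & 0xFF    # 8-bit two's complement of the sum

# Appending the LRC makes the 8-bit sum of the whole block zero.
block = b"example payload"
check = lrc_iso1155(block)
assert (sum(block) + check) & 0xFF == 0
print(f"LRC = 0x{check:02X}")
```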
Other forms
Many protocols use an XOR-based longitudinal redundancy check byte (often called block check character or BCC), including the serial line interface protocol (SLIP, not to be confused with the later and well-known Serial Line Internet Protocol),
the IEC 62056-21 standard for electrical-meter reading, smart cards as defined in ISO/IEC 7816, and the ACCESS.bus protocol.
|
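For the XOR-based variant mentioned above, a minimal sketch follows; note that the framing convention, i.e. exactly which bytes the BCC covers, varies by protocol and is an assumption here.

```python
from functools import reduce

def bcc_xor(data: bytes) -> int:
    """XOR-based block check character over the given bytes."""
    return reduce(lambda acc, b: acc ^ b, data, 0)

print(f"BCC = 0x{bcc_xor(b'example payload'):02X}")
```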
https://en.wikipedia.org/wiki/Japanese%20wolf | The Japanese wolf (, , or , [see below]; Canis lupus hodophilax), also known as the Honshū wolf, is an extinct subspecies of the gray wolf that was once endemic to the islands of Honshū, Shikoku and Kyūshū in the Japanese archipelago.
It was one of two subspecies that were once found in the Japanese archipelago, the other being the Hokkaido wolf. Phylogenetic evidence indicates that Japanese wolf was the last surviving wild member of the Pleistocene wolf lineage (in contrast to the Hokkaido wolf which belonged to the lineage of the modern day gray wolf), and may have been the closest wild relative of the domestic dog. Many dog breeds originating from Japan also have Japanese wolf DNA from past hybridization.
Despite long being revered in Japan, the introduction of rabies and canine distemper to Japan led to the decimation of the population, and policies enacted during the Meiji Restoration led to the persecution and eventual total extermination of the subspecies by the early 20th century. Well-documented observations of similar canids have been made throughout the 20th and 21st centuries, and have been suggested to be surviving wolves. However, due to environmental and behavioral factors, doubts persist over their identity.
Etymology
C. hodophilax's binomial name derives from the Greek hodos (path) and phylax (guardian), in reference to the Okuri-inu from Japanese folklore, which portrayed wolves or weasels as the protectors of travelers.
There had been numerous other aliases referring to Japanese wolf, and the name ōkami (wolf) is derived from the Old Japanese öpö-kamï, meaning either "great-spirit" where wild animals were associated with the mountain spirit Yama-no-kami in the Shinto religion, or "big dog", or "big bite" (ōkami or ōkame), and "big mouth"; Ōkuchi-no-Makami (Japanese) was an old and deified alias for Japanese wolf where it was both worshipped and feared, and it meant "a true god with big-mouth" based on several theories; either referring to wolf's |
https://en.wikipedia.org/wiki/Statistics%20Surveys | Statistics Surveys is an open-access electronic journal, founded in 2007, that is jointly sponsored by the American Statistical Association, the Bernoulli Society, the Institute of Mathematical Statistics and the Statistical Society of Canada. It publishes review articles on topics of interest in statistics. Wendy L. Martinez serves as the coordinating editor.
External links
Official page
Mathematics journals
Statistics journals
Academic journals established in 2007
Institute of Mathematical Statistics academic journals
American Statistical Association academic journals |
https://en.wikipedia.org/wiki/Radioactive%20%28film%29 | Radioactive is a 2019 British biographical drama film written by Jack Thorne, directed by Marjane Satrapi and starring Rosamund Pike as Marie Curie. The film is based on the 2010 graphic novel Radioactive: Marie & Pierre Curie: A Tale of Love and Fallout by the American artist Lauren Redniss.
The film premiered as the Closing Night Gala at the 2019 Toronto International Film Festival. The film was scheduled to be released in cinemas in 2020, but its opening was cancelled due to the COVID-19 pandemic. It was released digitally in the United Kingdom on 15 June 2020 by StudioCanal and began streaming on Amazon Prime Video in the United States on 24 July 2020.
Plot
In 1934, Marie Curie collapses in her laboratory in Paris. As she is rushed to the hospital, she remembers her life. In 1893, she was frequently rejected for funding due to her uncompromising attitude, which she had in common with Pierre Curie. This shared conflict with the leading academic authorities led her to share a laboratory with Pierre Curie.
After Marie discovered polonium and radium, the two fell in love, were married, and had two children. Soon, Marie announces the discovery of radioactivity, revolutionizing physics and chemistry. Radium is soon used in a series of commercial products. Pierre takes Marie to a séance where radium is used to attempt to contact the dead, but Marie disapproves of spiritualism and the idea of an afterlife after the death of her mother in Poland.
Although Pierre rejects the Légion d'honneur for not nominating Marie and insists that the two jointly share their Nobel Prize in Physics, Marie becomes agitated that he accepted the Prize in Stockholm without her. Soon afterwards Pierre becomes increasingly sick with anemia as a result of his research and is trampled to death by a horse. Although she initially dismisses concerns that her elements are toxic, increasing numbers of people die from serious health conditions after exposure to radium. Depressed, she begins an affair with he |
https://en.wikipedia.org/wiki/Effective%20field%20goal%20percentage | In basketball, effective field goal percentage (abbreviated eFG%) is a statistic that adjusts field goal percentage to account for the fact that three-point field goals count for three points while field goals only count for two points. Its goal is to show what field goal percentage a two-point shooter would have to shoot at to match the output of a player who also shoots three-pointers.
It is calculated by:
$$\mathrm{eFG\%} = \frac{FG + 0.5 \times 3P}{FGA}$$
where:
FG = field goals made
3P = 3-point field goals made
FGA = field goal attempts
A common criticism of this formula is that shooters who make a very high percentage of their shots, especially 3-point shots, can arrive at an eFG% above 100%.
It can also be calculated by:
$$\mathrm{eFG\%} = \frac{PPG - FT}{2 \times FGA}$$
where:
PPG = points per game
FT = free throws made
FGA = field goal attempts
The advantage of this second formula is that it highlights the logic behind the statistic: the player is treated as if they took only two-point shots (hence the division of the non-free-throw points by 2).
An additional formula that seems to be more in use by the statistics actually displayed on websites (but less cited by said websites) is:
$$\mathrm{eFG\%} = \frac{2FG + 1.5 \times 3FG}{FGA}$$
where:
2FG = 2-point field goals made
3FG = 3-point field goals made
FGA = field goal attempts
All three equations yield the same result. |
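A small Python sketch checking that the formulas agree (the box-score numbers are made up for the example; the points-based formula matches when totals rather than per-game averages are used):

```python
def efg_from_makes(fg: int, threes: int, fga: int) -> float:
    """eFG% = (FG + 0.5 * 3P) / FGA."""
    return (fg + 0.5 * threes) / fga

def efg_from_split(two_fg: int, three_fg: int, fga: int) -> float:
    """eFG% = (2FG + 1.5 * 3FG) / FGA."""
    return (two_fg + 1.5 * three_fg) / fga

# Made-up line: 10 makes (3 of them threes) on 20 attempts, 5 free throws made
fg, threes, fga, ft = 10, 3, 20, 5
points = 2 * (fg - threes) + 3 * threes + ft   # 14 + 9 + 5 = 28

a = efg_from_makes(fg, threes, fga)
b = efg_from_split(fg - threes, threes, fga)
c = (points - ft) / (2 * fga)                   # the points-based formula
assert abs(a - b) < 1e-12 and abs(a - c) < 1e-12
print(f"eFG% = {a:.1%}")                        # 57.5%
```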
https://en.wikipedia.org/wiki/Jonathan%20Zenneck | Jonathan Adolf Wilhelm Zenneck (15 April 1871 – 8 April 1959) was a German physicist and electrical engineer who contributed to researches in radio circuit performance and to the scientific and educational contributions to the literature of the pioneer radio art. Zenneck improved the Braun cathode ray tube by adding a second deflection structure at right angles to the first, which allowed two-dimensional viewing of a waveform. This two-dimensional display is fundamental to the oscilloscope.
Early years
Zenneck was born in Ruppertshofen, Württemberg.
In 1885, Zenneck entered the Evangelical-Theological Seminary in Maulbronn. In 1887, while in a Blaubeuren seminary, Zenneck learned Latin, Greek, French, and Hebrew. In 1889, Zenneck enrolled in the University of Tübingen. At the Tübingen Seminary, he studied mathematics and natural sciences. In 1894, Zenneck took the state examination in mathematics and natural sciences and the examination for his doctor's degree. His dissertation, supervised by Theodor Eimer, was on grass snake embryos.
In 1894, Zenneck conducted zoological research (Natural History Museum, London). Between 1894 and 1895, he served in the military.
Middle years
In 1895, Zenneck left zoology and turned to the new field of radio science. He became assistant to Ferdinand Braun and lecturer at the "Physikalisches Institut" in Strasbourg, Alsace. Nikola Tesla's lectures introduced him to the wireless sciences. In 1899, Zenneck started propagation studies of wireless telegraphy, first over land, but then became more interested in the larger ranges that were reached over sea. In 1900 he started ship-to-coast experiments in the North Sea near Cuxhaven, Germany. In 1902 he conducted tests of directional antennas. In 1905, Zenneck left Strasbourg since he was appointed assistant professor at the Danzig Technische Hochschule and in 1906, he became professor of experimental physics at the Braunschweig Technische Hochschule. Also in 1906, Zenneck wrote "Elect |
https://en.wikipedia.org/wiki/Henry%20Greenly | Henry Greenly (1876–1947) was amongst the foremost miniature railway engineers of the 20th century, remembered as a master of engineering design.
Miniature railways
Greenly is perhaps best remembered for his miniature locomotive designs. He worked closely with many engineering companies, including Bassett-Lowke and its various engineering subsidiary companies. In 1909, along with Wenman Joseph Bassett-Lowke, Henry Greenly started and edited Model Railways and Locomotives Magazine.
He worked with Captain J E P Howey on the designs for the world-famous Romney, Hythe and Dymchurch Railway in Kent, England, and served as that railway's Chief Engineer. He was also involved with innovative locomotive design work at the nearby Saltwood Miniature Railway.
Greenly Engineering Models
Greenly established his own miniature railway engineering company Greenly Engineering Models in Hounslow, Middlesex, and his renowned engineering design skills were well matched with the practical engineering skills of his workshop engineering manager Jock Campbell.
Legacy
Greenly's designs have been celebrated in countless periodicals and books, but the greatest testimony to his skill is the enormous number of his locomotives that are still operating today.
1¾ inch gauge
GWR 3600 Class 2-4-2T, tank engine, single inside cylinder, Slide valves
LMS Midland compound style 4-4-0 tender engine, two outside cylinders, valve chests between the frames, slip eccentric valve gear
SR Schools Class 4-4-0 tender engine, two outside cylinders, piston-style slide valves, Greenly-Joy or Walschaerts valve gear
LMS Royal Scot Class 4-6-0 tender engine, three cylinders, piston-style slide valves, Walschaerts valve gear
American style 4-6-2 Pacific tender engine, two outside cylinders, Baker valve gear, slide valves
Express 4-4-4T tank engine, two outside cylinders, piston-style slide valves, Greenly valve gear
LNER style 4-6-2 Pacific tender engine, two outside cylinders, piston-style slide valves, Wa |
https://en.wikipedia.org/wiki/CDYL2 | Chromodomain protein, Y-like 2 is a protein in humans that is encoded by the CDYL2 gene. It localizes to the nucleus, where it acts as a chromatin reader recognizing trimethylation of Histone H3 lysine 9 and repressing transcription. |
https://en.wikipedia.org/wiki/Regional%20Atmospheric%20Modeling%20System | The Regional Atmospheric Modeling System (RAMS) is a set of computer programs that simulate the atmosphere for weather and climate research and for numerical weather prediction (NWP). Other components include a data analysis and a visualization package.
RAMS was developed in the 1980s at Colorado State University (CSU), spearheaded by William R. Cotton and Roger A. Pielke, for mesoscale meteorological modeling. Subsequent development is primarily done by Robert L. Walko and Craig J. Tremback under the supervision of Cotton and Pielke. It is a comprehensive non-hydrostatic model. It is written primarily in Fortran with some C code and it runs best under the Unix operating system. Version 6 was released in 2009.
RAMS is the basis for a system simulating the Martian atmosphere that is named MRAMS.
See also
Downscaling |
https://en.wikipedia.org/wiki/F-test%20of%20equality%20of%20variances | In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance.
Notionally, any F-test can be regarded as a comparison of two variances, but the specific case being discussed in this article is that of two populations, where the test statistic used is the ratio of two sample variances. This particular situation is of importance in mathematical statistics since it provides a basic exemplar case in which the F-distribution can be derived. For application in applied statistics, there is concern that the test is so sensitive to the assumption of normality that it would be inadvisable to use it as a routine test for the equality of variances. In other words, this is a case where "approximate normality" (which in similar contexts would often be justified using the central limit theorem), is not good enough to make the test procedure approximately valid to an acceptable degree.
The test
Let X1, ..., Xn and Y1, ..., Ym be independent and identically distributed samples from two populations, each of which has a normal distribution. The expected values of the two populations can be different, and the hypothesis to be tested is that the variances are equal. Let
X̄ = (1/n) Σᵢ Xᵢ and Ȳ = (1/m) Σᵢ Yᵢ
be the sample means. Let
SX² = (1/(n − 1)) Σᵢ (Xᵢ − X̄)² and SY² = (1/(m − 1)) Σᵢ (Yᵢ − Ȳ)²
be the sample variances. Then the test statistic
F = SX² / SY²
has an F-distribution with n − 1 and m − 1 degrees of freedom if the null hypothesis of equality of variances is true. Otherwise it follows an F-distribution scaled by the ratio of the true variances. The null hypothesis is rejected if F is either too large or too small based on the desired alpha level (i.e., statistical significance).
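In practice the statistic and its two-sided p-value can be computed directly from the two samples. A minimal Python sketch using SciPy's F-distribution (the function name f_test_equal_variances is our own):

import numpy as np
from scipy import stats

def f_test_equal_variances(x, y):
    """Two-sided F-test for equality of variances of two normal samples."""
    f = np.var(x, ddof=1) / np.var(y, ddof=1)   # ratio of sample variances
    dfx, dfy = len(x) - 1, len(y) - 1           # n - 1 and m - 1 degrees of freedom
    # two-sided p-value: twice the smaller tail probability, capped at 1
    p = 2 * min(stats.f.cdf(f, dfx, dfy), stats.f.sf(f, dfx, dfy))
    return f, min(p, 1.0)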
Properties
This F-test is known to be extremely sensitive to non-normality, so Levene's test, Bartlett's test, or the Brown–Forsythe test are better tests for testing the equality of two variances. (However, all of these tests create experiment-wise type I error inflations when conducted as a test of the assumption of homoscedast |
https://en.wikipedia.org/wiki/Learner-generated%20context | The term learner-generated context originated in the suggestion that an educational context might be described as a learner-centric ecology of resources and that a learner generated context is one in which a group of users collaboratively marshall available resources to create an ecology that meets their needs.
There are many discussions about user-generated content (UGC), open educational resources (OER), distributed cognition and communities of practice but, although acknowledging the importance of the learning process, there has been little focus on learner-generated contexts or the impact of new technologies on the role of teacher, learner and institution.
Background
The term learner-generated context (LGC) is grounded in the premise that learning and teaching should not start with the embracing of new technologies, but rather that it is a matter of contextualising the learning first before supporting it with technology. The concept finds its roots in the affordances and potentials of a range of disruptive technologies and practice; web 2.0 and participative media, mobile learning, learning design and learning space design. It is also concerned with related issues in social interactions with technology around roles, expertise, knowledge, pedagogy, accreditation, power, participation and democracy.
The learner-generated context concept is concerned with examining the rapid increase in the variety and availability of resources and tools that enable people to easily create and publish their own materials and to access those created by others, and ways in which this extends the capacity for learning context creation beyond the traditional contexts of, inter alia, teachers, academics, designers and policymakers. It is also a concept which challenges existing pedagogies insofar as it sees a new generation of read/write, participatory technologies as enabling learners to take ownership of both their learning and their actions in the real world and to contribute to t |
https://en.wikipedia.org/wiki/Giganten%20%28board%20game%29 | Giganten, also known as "Dinosaures Giganti", is a two-player board game designed by Herbert Pinthus and first published in 1981 by Carlit. Gameplay is inspired by the game Stratego, transplanted to a prehistoric setting. There were two editions.
Small edition: added variable terrain (land, swamp, lakes). Lakes were not passable terrain but obstacles.
Large edition: added variable board consisting of 4 parts and added 2 more dinosaurs.
The large edition won the Essen Feather prize (awarded for the best rules) in 1983.
Gameplay
Each player has a set of 23 saurians (dinosaurs of various kinds, pterodactyls, plesiosaurs, etc) which are stand-up cardboard pieces with plain backs, so the opponent cannot tell which piece is which. Pieces move and attack each other Stratego-fashion, with the goal being to find your opponent's eggs. Pieces have numbers to indicate their strength.
Awards
1983 Essen Feather |
https://en.wikipedia.org/wiki/Suffix%20automaton | In computer science, a suffix automaton is an efficient data structure for representing the substring index of a given string which allows the storage, processing, and retrieval of compressed information about all its substrings. The suffix automaton of a string is the smallest directed acyclic graph with a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.
In terms of automata theory, a suffix automaton is the minimal partial deterministic finite automaton that recognizes the set of suffixes of a given string. The state graph of a suffix automaton is called a directed acyclic word graph (DAWG), a term that is also sometimes used for any deterministic acyclic finite state automaton.
Suffix automata were introduced in 1983 by a group of scientists from the University of Denver and the University of Colorado Boulder. They suggested a linear time online algorithm for its construction and showed that the suffix automaton of a string of length n, having at least two characters, has at most 2n − 1 states and at most 3n − 4 transitions. Further works have shown a close connection between suffix automata and suffix trees, and have outlined several generalizations of suffix automata, such as the compacted suffix automaton obtained by compression of nodes with a single outgoing arc.
Suffix automata provide efficient solutions to problems such as substring search and computation of the longest common substring of two or more strings.
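The linear-time online construction mentioned above can be sketched compactly. The following Python rendition is a standard textbook version of the algorithm (the class and method names are our own, not from the original papers):

class SuffixAutomaton:
    """Minimal online construction of a suffix automaton (DAWG)."""

    def __init__(self):
        self.length = [0]   # length of the longest string in each state
        self.link = [-1]    # suffix links; -1 marks the initial state
        self.trans = [{}]   # outgoing transitions per state
        self.last = 0       # state corresponding to the whole string so far

    def _new_state(self, length, link, trans):
        self.length.append(length)
        self.link.append(link)
        self.trans.append(trans)
        return len(self.length) - 1

    def extend(self, c):
        """Append character c to the indexed string."""
        cur = self._new_state(self.length[self.last] + 1, -1, {})
        p = self.last
        while p != -1 and c not in self.trans[p]:
            self.trans[p][c] = cur
            p = self.link[p]
        if p == -1:
            self.link[cur] = 0
        else:
            q = self.trans[p][c]
            if self.length[p] + 1 == self.length[q]:
                self.link[cur] = q
            else:
                # split: clone q at length len(p) + 1, redirect transitions
                clone = self._new_state(self.length[p] + 1,
                                        self.link[q], dict(self.trans[q]))
                while p != -1 and self.trans[p].get(c) == q:
                    self.trans[p][c] = clone
                    p = self.link[p]
                self.link[q] = clone
                self.link[cur] = clone
        self.last = cur

    def contains(self, pattern):
        """True iff pattern is a substring of the indexed string."""
        state = 0
        for c in pattern:
            if c not in self.trans[state]:
                return False
            state = self.trans[state][c]
        return True

# usage: index "abcbc" and query substrings
sa = SuffixAutomaton()
for c in "abcbc":
    sa.extend(c)
assert sa.contains("bcb") and not sa.contains("ba")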
History
The concept of suffix automaton was introduced in 1983 by a group of scientists from University of Denver and University of Colorado Boulder consisting of Anselm Blumer, Janet Blumer, Andrzej Ehrenfeucht, David Haussler and Ross McConnell, although similar concepts had earlier been studied alongside suffix trees in the works of Peter Weiner, Vaughan Pratt and Anatol Slissenko. In their initial work, Blumer et al. showed a suffix automaton built for the |
https://en.wikipedia.org/wiki/Prosthetic%20testicle | A prosthetic testicle is an artificial replacement for a testicle lost through surgery or injury. Consisting of a plastic ovoid manufactured from silicone rubber, and either solid, or filled with a salt solution and implanted in the scrotum, a prosthetic testicle provides the appearance and feel of a testis and prevents scrotum shrinkage. It is also commonly used in female-to-male sex reassignment surgery.
See also
Prosthesis
Reconstructive surgery |
https://en.wikipedia.org/wiki/Institut%20f%C3%BCr%20Rundfunktechnik | The Institut für Rundfunktechnik GmbH (IRT; Institute for Broadcasting Technology Ltd.) was a research centre of the German broadcasters (ARD / ZDF / DLR), Austria's broadcaster (ORF) and the Swiss public broadcaster (SRG / SSR). It was responsible for research on broadcasting technology. It was founded in 1956 and was located in Munich, Germany.
The institute invented, or was influential in, the research, development and field-testing of important standards such as ARI, RDS, VPS, DSR, DAB and DVB-T.
The IRT was a founding member of the Hybrid Broadcast Broadband TV (HbbTV) consortium of broadcasting and Internet industry companies that established an open European standard (called HbbTV) for hybrid set-top boxes for the reception of broadcast TV and broadband multimedia applications with a single user interface.
In 2020, ZDF and then other supporters indicated that they planned to withdraw from the organization, so the IRT was closed by the end of 2020.
Former members
Bayerischer Rundfunk
Deutsche Welle
Deutschlandradio
Hessischer Rundfunk
Mitteldeutscher Rundfunk
Norddeutscher Rundfunk
Radio Bremen
Rundfunk Berlin-Brandenburg
Saarländischer Rundfunk
SRG SSR
Südwestrundfunk
Westdeutscher Rundfunk Köln
ZDF
See also
BBC Research & Development
High Com FM (researched and field-trialed by IRT between 1979 and 1984)
Wittmoor List (maintained by IRT up to June 2018)
European Broadcasting Union (EBU)
International Telecommunication Union (ITU)
DVB Project
WorldDAB
Public broadcasting
Teletext
(FTZ)
(RFZ)
(IVZ) |
https://en.wikipedia.org/wiki/DivestOS | DivestOS is an operating system based on the Android mobile platform. It is a soft fork of LineageOS that aims to increase security and privacy with support for end-of-life devices. As much as possible, it removes unnecessary proprietary Android components and includes only free-software.
DivestOS builds are signed with release-keys so bootloaders may be re-locked on many devices. An automated CVE patcher is used to patch the kernels against many known vulnerabilities.
DivestOS includes few default applications. F-Droid is included with options for selecting several custom F-Droid repositories as well. DivestOS supports using Orbot and Tor Browser as privacy-enhancing features.
History
The DivestOS project began in 2014, with the first properly signed builds being released in 2015.
The project is the work of one primary developer with contributions from numerous other developers.
Public release of DivestOS was announced on F-Droid forums in June 2020.
Supported devices
DivestOS primarily supports devices that have been supported by LineageOS.
Reception
In February 2022, TechTracker.in said DivestOS is one of few custom ROMs focusing on security and privacy, with monthly and incremental updates.
GNU/Linux.ch Linux and Freie Software News called DivestOS "relatively new and ambitious" and said it supports many devices, both newer and older.
DevsJournal called DivestOS 18.1 one of the best custom ROMs for the One Plus One phone.
DivestOS's Hypatia malware scanner for Android, and the use of its F-Droid repository, were reviewed by Gadget Hacks in March 2021. In November 2021, the Kuketz Security blog said Hypatia was the only malware scanner without tracking libraries of the several they reviewed, but said its functionality was limited.
In March 2023, the 2022 Free Software Foundation Award for Outstanding New Free Software Contributor went to Tad (SkewedZeppelin), chief developer of the DivestOS project.
In a review in June 2023, the Kuketz Security blog sa |
https://en.wikipedia.org/wiki/Reference%20dimension | A reference dimension is a dimension on an engineering drawing provided for information only. Reference dimensions are provided for a variety of reasons and are often an accumulation of other dimensions that are defined elsewhere (e.g. on the drawing or other related documentation). These dimensions may also be used for convenience to identify a single dimension that is specified elsewhere (e.g. on a different drawing sheet).
Reference dimensions are not intended to be used directly to define the geometry of an object. Reference dimensions do not normally govern manufacturing operations (such as machining) in any way and, therefore, do not typically include a dimensional tolerance (though a tolerance may be provided if such information is deemed helpful). Consequently, reference dimensions are also not subject to dimensional inspection under normal circumstances.
Reference dimensions are commonly used in CAD software along with constraints that usually denote the opposite: mandatory dimensions to be precisely followed.
Notation
Two notations are commonly used to denote reference dimensions on drawings and in Computer-Aided Design (CAD) software.
REF
Prior to use of modern CAD software, reference dimensions were traditionally indicated on a drawing by the abbreviation "REF" written adjacent to the dimension (typically to the right or underneath the dimension).
However, the ASME Y14.5 standard has changed the way references are marked, and the abbreviation "REF" has been replaced with parentheses around the dimension. As an example, a distance of 1500 millimeters might be denoted by (1500) instead of 1500 REF.
This implementation has followed in modern CAD software that makes use of parentheses as the default denotation method whenever reference dimensions are "automatically" created by the software. The method for identifying a reference dimension (or reference data) on drawings is to enclose the dimension (or data) within parentheses.
See also
Engineering drawing abbreviations and symbols
Geometric dimension |
https://en.wikipedia.org/wiki/Oncofetal%20antigen | Oncofetal antigens are proteins which are typically present only during fetal development but are found in adults with certain kinds of cancer. These proteins are often measurable in the blood of individuals with cancer and may be used to both diagnose and follow treatment of the tumors. One example of an oncofetal antigen is alpha-fetoprotein, which is produced by hepatocellular carcinoma and some germ cell tumors. Another example is carcinoembryonic antigen, which is elevated in people with colon cancer and other tumors. Other oncofetal antigens are trophoblast glycoprotein precursor and immature laminin receptor protein (also known as oncofetal antigen protein). Oncofetal antigens are promising targets for vaccination against several types of cancers.
External links
Entrez protein entry for trophoblast glycoprotein precursor |
https://en.wikipedia.org/wiki/Metropolis%20light%20transport | Metropolis light transport (MLT) is a global illumination application of a variant of the Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three-dimensional scenes.
The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path.
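The accept/reject step at the heart of this procedure is ordinary Metropolis sampling. The following one-dimensional Python sketch illustrates only that sampling rule, not a full light-transport renderer; the target function, the mutation, and all names are illustrative assumptions (a symmetric mutation is assumed, so no proposal correction is needed):

import math
import random

def metropolis_samples(f, x0, mutate, n):
    """Draw n samples approximately distributed as the unnormalized density f.

    f      -- nonnegative target function (think: brightness of a light path)
    mutate -- symmetric random perturbation of the current state
    """
    x, fx = x0, f(x0)
    out = []
    for _ in range(n):
        y = mutate(x)
        fy = f(y)
        # Metropolis rule: accept with probability min(1, fy / fx)
        if random.random() * fx < fy:
            x, fx = y, fy
        out.append(x)
    return out

# usage: sample a 1-D "brightness" concentrated near 0
samples = metropolis_samples(lambda t: math.exp(-t * t), 0.0,
                             lambda t: t + random.gauss(0.0, 0.5), 10000)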
Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing.
Energy Redistribution Path Tracing (ERPT) uses Metropolis sampling-like mutation strategies instead of an intermediate probability distribution step.
See also
Nicholas Metropolis – The physicist after whom the algorithm is named
Renderers using MLT:
Arion – A commercial unbiased renderer based on path tracing and providing an MLT sampler
Iray (external link) – An unbiased renderer that has an option for MLT
Kerkythea – A free unbiased 3D renderer that uses MLT
LuxRender – An open source unbiased renderer that uses MLT
Mitsuba Renderer (web site) A research-oriented renderer which implements several MLT varian |
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2015 | In molecular biology, glycoside hydrolase family 15 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 15 (CAZy GH_15) comprises enzymes with several known activities: glucoamylase, alpha-glucosidase, and glucodextranase.
Glucoamylase (GA) catalyses the release of D-glucose from the non-reducing ends of starch and other oligo- or poly-saccharides. Studies of fungal GA have indicated 3 closely clustered acidic residues that play a role in the catalytic mechanism. This region is also conserved in a recently sequenced bacterial GA.
The 3D structure of the pseudo-tetrasaccharide acarbose complexed with glucoamylase II(471) from Aspergillus awamori var. X100 has been determined to 2.4 Å resolution. The protein belongs to the mainly-alpha class, and contains 19 helices and 9 strands. |
https://en.wikipedia.org/wiki/Dunkaroos | Dunkaroos are a snack food from Betty Crocker, first launched in 1990. It consists of a snack-sized package containing cookies and frosting; as the name implies, the cookies are meant to be dunked into the frosting before eating. Individual snack packages contain about ten small cookies and one cubic inch of frosting. The cookies are made in a variety of shapes, including a circle with an uppercase "D" in the center (the only shape featured in the 2020 version), feet, the mascot in different poses, and a hot air balloon.
Marketing
The Dunkaroos mascot is a cartoon kangaroo, explaining the product's name which is a portmanteau of dunk and kangaroos. The original mascot was Sydney, a caricature of modern Australian culture, who wore a hat, vest, and tie and spoke with an Australian accent, and was voiced by John Cameron Mitchell. At the height of their popularity in 1996, a contest known as "Dunk-a-roos Kangaroo Kanga-Who Search" was held, resulting in the new mascot: Duncan, named the dunkin' daredevil.
History
The product was discontinued in the United States in 2012 but continued to be sold in Canada. In 2016, General Mills announced a campaign called "Smugglaroos", which encouraged Canadians travelling to the United States to bring the snack to Americans who wanted it. Dunkaroos continued being sold in Canada until January 2018, when they were discontinued without comment from General Mills. In December 2019, Dunkaroos were brought back unofficially by Nestlé with a chocolate-hazelnut flavour; the biscuits are shaped like kangaroos.
This is only available in Australia as Nestlé does not have the right to sell Dunkaroos worldwide.
On February 3, 2020, a BuzzFeed article was published claiming that General Mills sent them exclusive info regarding a return of Dunkaroos. The official Twitter account for Dunkaroos claimed that they were scheduled to be re-released during the summer of 2020. It also used to link to the BuzzFeed article in the bio, but this was later changed to their off |
https://en.wikipedia.org/wiki/Posterior%20meningeal%20artery | The posterior meningeal artery is one of the meningeal branches of the ascending pharyngeal artery (and is typically considered the terminal branch of said artery). It passes through the jugular foramen to enter the posterior cranial fossa. It is the largest vessel supplying the dura of the posterior cranial fossa.
It may occasionally arise from other arteries (e.g. the occipital artery).
It forms anastomoses with the branches of the middle meningeal artery, and the vertebral artery. |
https://en.wikipedia.org/wiki/Optical%20chaos | In the field of photonics, optical chaos is chaos generated by laser instabilities using different schemes in semiconductor and fiber lasers. Optical chaos is observed in many non-linear optical systems; one of the most common examples is the optical ring resonator.
Optical computing
Optical chaos was an active field of research in the mid-1980s, aimed at the production of all-optical devices, including all-optical computers. Researchers later realised the inherent limitations of optical systems, due to the delocalised nature of photons compared with the highly localised nature of electrons.
Communications
Research in optical chaos has seen a recent resurgence in the context of studying synchronization phenomena, and in developing techniques for secure optical communications. |
https://en.wikipedia.org/wiki/Five-second%20rule | The five-second rule, or sometimes the three-second rule, is a food hygiene myth which holds that there is a defined time window within which it is safe to eat food (or sometimes to use cutlery) after it has been dropped on the floor or on the ground and thus exposed to contamination.
There appears to be no scientific consensus on the general applicability of the rule, and its origin is unclear. It probably originated in the wake of germ theory in the late 19th century. The first known mention of the rule in print is in the 1995 novel Wanted: Rowing Coach.
Research
The five-second rule has received some scholarly attention. It has been studied as both a public health recommendation and a sociological effect.
University of Illinois
In 2003, Jillian Clarke of the University of Illinois at Urbana–Champaign found in a survey that 56% of men and 70% of women surveyed were familiar with the five-second rule. She also determined that a variety of foods were significantly contaminated by even brief exposure to a tile inoculated with E. coli. On the other hand, Clarke found no significant evidence of contamination on public flooring. For this work, Clarke received the 2004 Ig Nobel Prize in public health.
A more thorough study in 2007 using salmonella on wood, tiles, and nylon carpet, found that the bacteria could thrive under dry conditions even after twenty-eight days. Tested on surfaces that had been contaminated with salmonella eight hours previously, the bacteria could still contaminate bread and baloney lunchmeat in under five seconds. But a minute-long contact increased contamination about tenfold (especially on tile and carpet surfaces).
Rutgers University
Researchers at Rutgers University debunked the theory in 2016 by dropping watermelon cubes, gummy candies, plain white bread, and buttered bread onto surfaces slathered in Enterobacter aerogenes. The surfaces used were carpet, ceramic tile, stainless steel and wood. The food was left on the surface for intervals |
https://en.wikipedia.org/wiki/Roton | In theoretical physics, a roton is an elementary excitation, or quasiparticle, seen in superfluid helium-4 and Bose–Einstein condensates with long-range dipolar interactions or spin-orbit coupling. The dispersion relation of elementary excitations in this superfluid shows a linear increase from the origin, but exhibits first a maximum and then a minimum in energy as the momentum increases. Excitations with momenta in the linear region are called phonons; those with momenta close to the minimum are called rotons. Excitations with momenta near the maximum are called maxons.
The term "roton-like" is also used for the predicted eigenmodes in 3D metamaterials using beyond-nearest-neighbor coupling. The observation of such a "roton-like" dispersion relation was demonstrated under ambient conditions for both acoustic pressure waves in a channel-based metamaterial at audible frequencies and transverse elastic waves in a microscale metamaterial at ultrasound frequencies.
Models
Originally, the roton spectrum was phenomenologically introduced by Lev Landau. Currently there exist different models which try to explain the roton spectrum with different degrees of success and fundamentality. The requirement for any model of this kind is that it must explain not only the shape of the spectrum itself but also other related observables, such as the speed of sound and the structure factor of superfluid helium-4. Microwave and Bragg spectroscopy have been conducted on helium to study the roton spectrum.
Bose–Einstein condensation
Bose–Einstein condensation of rotons has been also proposed and studied. Its first detection has been reported in 2018. Under specific conditions the roton minimum gives rise to a crystal solid-like structure called the supersolid, as shown in experiments from 2019.
See also
Superfluid
Macroscopic quantum phenomena
Bose–Einstein condensate |
https://en.wikipedia.org/wiki/Chromosome%20combing | Chromosome combing (also known as molecular combing or DNA combing) is a technique used to produce an array of uniformly stretched DNA that is highly suitable for nucleic acid hybridization studies such as fluorescent in situ hybridisation (FISH). Such studies benefit from the uniformity of stretching, the easy access to the hybridisation target sequences, and the resolution offered by the large distance between two probes, which results from the DNA being stretched to a factor of 1.5 times its crystallographic length.
DNA in solution (i.e. with a randomly-coiled structure) is stretched by retracting the meniscus of the solution at a constant rate (typically 300 µm/s). The ends of DNA strands, which are thought to be frayed (i.e. open and exposing polar groups), bind to ionisable groups coating a silanized glass plate at a pH below the pKa of those groups (ensuring they are charged enough to interact with the ends of DNA). The rest of the DNA, which is mostly dsDNA, cannot form these interactions (aside from a few 'touch down' segments along the length of the strand), so it remains available for hybridisation to probes. As the meniscus retracts, surface retention creates a force that acts to retain the DNA in the liquid phase; this force, however, is weaker than the strength of the DNA's attachment, and the result is that the DNA is stretched as it enters the air phase. Because the force acts only in the locality of the air/liquid interface, it is invariant to the length or conformation of the DNA in solution, so DNA of any length is stretched equally as the meniscus retracts. As this stretching is constant along the length of a DNA strand, distance along the strand can be related to base content; 1 µm is approximately equivalent to 2 kb.
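Given the roughly constant 2 kb/µm calibration just described, converting a measured distance is a one-line calculation; a trivial Python sketch (the default factor is the approximate figure quoted above):

def combed_length_kb(microns, kb_per_micron=2.0):
    """Convert a distance measured along combed DNA (in micrometres) to kilobases."""
    return microns * kb_per_micron

# e.g. two probe signals 17.5 um apart correspond to roughly 35 kb
assert combed_length_kb(17.5) == 35.0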
DNA regions of interest are observed by hybridising them with probes labelled by haptens like biotin; this can then be bound by one or more layers of fluorochrome-associated ligands (such as immunofluorescence antibodies). Multicolour |
https://en.wikipedia.org/wiki/Terence%20Gaffney | Terence Gaffney (born 9 March 1948) is an American mathematician who has made fundamental contributions to singularity theory – in particular, to the fields of singularities of maps and equisingularity theory.
Professional career
He is a Professor of Mathematics at Northeastern University. He did his undergraduate studies at Boston College. He received his Ph.D. from Brandeis University in 1975 under the direction of Edgar Henry Brown Jr. and Harold Levine. In 1975 he became an AMS Centennial Fellow at MIT, and a year later he joined the Brown University faculty as a Tamarkin instructor. In 1979 Gaffney became a professor at Northeastern University, where he has remained ever since. He has served as department chair, graduate director, chair of the undergraduate curriculum committee, and faculty senator.
See also
Mather-Gaffney criterion |
https://en.wikipedia.org/wiki/General%20Comprehensive%20Operating%20System | General Comprehensive Operating System (GCOS, ; originally GECOS, General Electric Comprehensive Operating Supervisor) is a family of operating systems oriented toward the 36-bit GE-600 series and Honeywell 6000 series mainframe computers.
The original version of GCOS was developed by General Electric beginning in 1962. The operating system is still used today in its most recent versions (GCOS 7 and GCOS 8) on servers and mainframes produced by Groupe Bull, primarily through emulation, to provide continuity with legacy mainframe environments. GCOS 7 and GCOS 8 are separate branches of the operating system and continue to be developed alongside each other.
History
GECOS
The GECOS operating system was developed by General Electric for the 36-bit GE-600 series in 1962–1964; GE released GECOS I (with a prototype 635) in April 1965, GECOS II in November 1965 and GECOS III (with time-sharing) in 1967. It bore a close resemblance architecturally to IBSYS on the IBM 7094 and less to DOS/360 on the IBM System/360. However, the GE 600 Series four processor architecture was very different from the System/360 and GECOS was more ambitious than DOS/360. GECOS-III supported both time-sharing (TSS) and batch processing, with dynamic allocation of memory (IBM had fixed partitions, at that time), making it a true second-generation operating system.
Honeywell GCOS 3
After Honeywell acquired GE's computer division, GECOS-III was renamed GCOS 3, and the hardware line was renamed to the Honeywell 6000 series, adding the EIS (enhanced instruction set, character oriented instead of word oriented).
GCOS 64
The name "GCOS" was extended to the operating systems for all Honeywell-marketed product lines. GCOS-64, a completely different 32-bit operating system for the Level 64 series, similar to a parallel development called Multics, was designed by Honeywell and Honeywell Bull developers in France and Boston.
GCOS 61/62
GCOS-62, the operating system for another 32-bit low-end line of mach |
https://en.wikipedia.org/wiki/Dethiosulfovibrio%20peptidovorans | Dethiosulfovibrio peptidovorans is an anaerobic, slightly halophilic, thiosulfate-reducing bacterium. Its genome has been sequenced. It is vibrio-shaped (3-5 by 1 µm), gram-negative and possesses lateral flagella. It is non-spore-forming. Its type strain is SEBR 4207T. |
https://en.wikipedia.org/wiki/Metamathematics | Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic" (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics.
History
Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus on what was then called the foundational crisis of mathematics. Richard's paradox (Richard 1905), concerning certain 'definitions' of real numbers in the English language, is an example of the sort of contradictions that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said of the well-known Russell's paradox (does the set of all those sets that do not contain themselves contain itself?).
Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, category theory, recursion theory and pure model theory, which is not directly related to metamathematics.
Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift, published in 1879 |
https://en.wikipedia.org/wiki/Actinobacterial%20phage%20holin%20family | The Actinobacterial Phage Holin (APH) Family (TC# 1.E.57) is a fairly large family of proteins between 105 and 180 amino acyl residues in length, typically exhibiting a single transmembrane segment (TMS) near the N-terminus. A representative list of proteins belonging to the APH family can be found in the Transporter Classification Database.
One of the archetype proteins in this family is the Gp5 holin of mycobacteriophage Ms6. Mycobacteriophage Ms6 is a double-stranded DNA (dsDNA) bacteriophage which, in addition to the predicted endolysin (LysA)-holin (Gp4) lysis system, encodes three additional proteins within its lysis module: Gp1, LysB, and Gp5.
Ms6 Gp4 (TC# 1.E.18.1.2) was previously described as a class II holin-like protein. A second putative holin gene (gp5) encoding a protein (Gp5) with a predicted single N-terminal TMS was identified at the end of the Ms6 lytic operon. Neither the putative class II holin nor the single TMS polypeptide could trigger lysis in pairwise combinations with the endolysin LysA in Escherichia coli. However, further studies have shown that Ms6's Gp4 and Gp5 interact with each other. This suggests that in Ms6 infection, the correct and programmed timing of lysis is achieved by the combined action of Gp4 and Gp5.
See also
Actinobacteria
Holin
Lysin
Transporter Classification Database
Further reading |
https://en.wikipedia.org/wiki/Cardiac%20Pacemakers%2C%20Inc. | Cardiac Pacemakers, Inc. (CPI), doing business as Guidant Cardiac Rhythm Management, manufactured implantable cardiac rhythm management devices, such as pacemakers and defibrillators. It sold microprocessor-controlled insulin pumps and equipment to regulate heart rhythm. It developed therapies to treat irregular heartbeat. The company was founded in 1971 and is based in St. Paul, Minnesota. Cardiac Pacemakers, Inc. is a subsidiary of Boston Scientific Corporation.
Early history
CPI was founded in February 1972 in St. Paul, Minnesota. The first $50,000 capitalization for CPI was raised from a phone booth on the Minneapolis skyway system. They began designing and testing their implantable cardiac pacemaker powered with a new longer-life lithium battery in 1971. The first heart patient to receive a CPI pacemaker emerged from surgery in June 1973. Within two years, the upstart company that challenged Medtronic had sold approximately 8,500 pacemakers.
Medtronic at the time had 65% of the artificial pacemaker market. CPI, the first spin-off from Medtronic, competed using the world's first lithium-powered pacemaker, and Medtronic's market share plummeted to 35%.
Founding partners Anthony Adducci, Manny Villafaña, Jim Baustert, and Art Schwalm were former Medtronic employees. Lawsuits ensued; all were settled out of court.
Acquisition
In early 1978, CPI was concerned about a friendly takeover attempt. Despite impressive sales, the company's stock price had fluctuated wildly the year before, dropping from $33 to $11 per share. Some speculated that the stock was being sold short, while others attributed the price to the natural volatility of high-tech stock. As a one-product company, CPI was susceptible to changing market conditions, and its founders knew they needed to diversify. They considered two options: acquiring other medical device companies or being acquired themselves. They chose the latter.
Several companies expressed interest in acquiring CPI, including 3M, |
https://en.wikipedia.org/wiki/Lowest%20common%20ancestor | In graph theory and computer science, the lowest common ancestor (LCA) (also called least common ancestor) of two nodes v and w in a tree or directed acyclic graph (DAG) T is the lowest (i.e. deepest) node that has both v and w as descendants, where we define each node to be a descendant of itself (so if v has a direct connection from w, w is the lowest common ancestor).
The LCA of v and w in T is the shared ancestor of v and w that is located farthest from the root. Computation of lowest common ancestors may be useful, for instance, as part of a procedure for determining the distance between pairs of nodes in a tree: the distance from v to w can be computed as the distance from the root to v, plus the distance from the root to w, minus twice the distance from the root to their lowest common ancestor.
In a tree data structure where each node points to its parent, the lowest common ancestor can be easily determined by finding the first intersection of the paths from v and w to the root. In general, the computational time required for this algorithm is O(h), where h is the height of the tree (the length of the longest path from a leaf to the root). However, there exist several algorithms for processing trees so that lowest common ancestors may be found more quickly. Tarjan's off-line lowest common ancestors algorithm, for example, preprocesses a tree in linear time to provide constant-time LCA queries. In general DAGs, similar algorithms exist, but with super-linear complexity.
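A minimal Python sketch of the parent-pointer method just described (the dictionary representation and the function name are our own):

def lowest_common_ancestor(parent, v, w):
    """LCA of v and w in a tree given as {node: parent}; the root maps to None.

    Collects v's path to the root, then climbs from w until the paths meet.
    Runs in O(h) time, where h is the height of the tree.
    """
    ancestors = set()
    while v is not None:          # v and all of its ancestors
        ancestors.add(v)
        v = parent[v]
    while w not in ancestors:     # climb from w to the first shared node
        w = parent[w]
    return w

# usage on a small tree rooted at 'a'
parent = {'a': None, 'b': 'a', 'c': 'a', 'd': 'b', 'e': 'b'}
assert lowest_common_ancestor(parent, 'd', 'e') == 'b'
assert lowest_common_ancestor(parent, 'd', 'c') == 'a'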
History
The lowest common ancestor problem was defined by Aho, Hopcroft and Ullman (1973), but Harel and Tarjan (1984) were the first to develop an optimally efficient lowest common ancestor data structure. Their algorithm processes any tree in linear time, using a heavy path decomposition, so that subsequent lowest common ancestor queries may be answered in constant time per query. However, their data structure is complex and difficult to implement. Tarjan also found a simpler but less efficient algorithm, based on the union-find data structure, for computing lowest common |
https://en.wikipedia.org/wiki/Paneitz%20operator | In the mathematical field of differential geometry, the Paneitz operator is a fourth-order differential operator defined on a Riemannian manifold of dimension n. It is named after Stephen Paneitz, who discovered it in 1983, and whose preprint was later published posthumously in .
In fact, the same operator was found earlier in the context of conformal supergravity by E. Fradkin and A. Tseytlin in 1982 (Phys. Lett. B 110 (1982) 117 and Nucl. Phys. B 203 (1982) 157).
It is given by the formula
where Δ is the Laplace–Beltrami operator, d is the exterior derivative, δ is its formal adjoint, V is the Schouten tensor, J is the trace of the Schouten tensor, and the dot denotes tensor contraction on either index. Here Q is the scalar invariant
where Δ is the positive Laplacian. In four dimensions this yields the Q-curvature.
The operator is especially important in conformal geometry, because in a suitable sense it depends only on the conformal structure. Another operator of this kind is the conformal Laplacian. But, whereas the conformal Laplacian is second-order, with leading symbol a multiple of the Laplace–Beltrami operator, the Paneitz operator is fourth-order, with leading symbol the square of the Laplace–Beltrami operator. The Paneitz operator is conformally invariant in the sense that it sends conformal densities of one weight to conformal densities of another. Concretely, using the canonical trivialization of the density bundles in the presence of a metric, the Paneitz operator P can be represented in terms of a representative Riemannian metric g as an ordinary operator on functions that transforms under a conformal change according to the rule
The operator was originally derived by working out specifically the lower-order correction terms in order to ensure conformal invariance. Subsequent investigations have situated the Paneitz operator into a hierarchy of analogous conformally invariant operators on densities: the GJMS operators.
The P |
https://en.wikipedia.org/wiki/Control%20structure%20diagram | A control structure diagram (CSD) automatically documents the program flow within the source code and adds indentation with graphical symbols, so that the source code becomes visibly structured without sacrificing space.
See also
Data structure diagram
Diagram
Entity-relationship model
Hierarchy diagram
Unified Modeling Language
Visual programming language
External links
"The Control Structure Diagram (CSD)" - A chapter from jGRASP Tutorials
"Control Structure Diagrams for Ada 95"
Data modeling diagrams
Data modeling languages
Source code |
https://en.wikipedia.org/wiki/Clique%20complex | Clique complexes, independence complexes, flag complexes, Whitney complexes and conformal hypergraphs are closely related mathematical objects in graph theory and geometric topology that each describe the cliques (complete subgraphs) of an undirected graph.
Clique complex
The clique complex X(G) of an undirected graph G is an abstract simplicial complex (that is, a family of finite sets closed under the operation of taking subsets), formed by the sets of vertices in the cliques of G. Any subset of a clique is itself a clique, so this family of sets meets the requirement of an abstract simplicial complex that every subset of a set in the family should also be in the family.
The clique complex can also be viewed as a topological space in which each clique of k vertices is represented by a simplex of dimension k − 1. The 1-skeleton of X(G) (also known as the underlying graph of the complex) is an undirected graph with a vertex for every 1-element set in the family and an edge for every 2-element set in the family; it is isomorphic to G.
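The simplices of X(G) can be enumerated directly from a graph. A small Python sketch using NetworkX (nx.find_cliques enumerates maximal cliques; the function name clique_complex is our own):

from itertools import combinations
import networkx as nx

def clique_complex(G):
    """Return the clique complex of G as a set of frozensets (its simplices)."""
    simplices = set()
    for clique in nx.find_cliques(G):             # maximal cliques of G
        for k in range(1, len(clique) + 1):       # every nonempty subset
            for face in combinations(clique, k):  # ... of a clique is a clique
                simplices.add(frozenset(face))
    return simplices

# a triangle plus a pendant edge: four vertices, four edges, one 2-simplex
G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])
X = clique_complex(G)
assert frozenset({1, 2, 3}) in X and frozenset({3, 4}) in X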
Negative example
Every clique complex is an abstract simplicial complex, but the opposite is not true. For example, consider the abstract simplicial complex over {1, 2, 3} with maximal sets {1, 2}, {1, 3}, {2, 3}. If it were the clique complex X(G) of some graph G, then G had to have the edges {1, 2}, {1, 3}, {2, 3}, so X(G) should also contain the clique {1, 2, 3}.
Independence complex
The independence complex of an undirected graph G is an abstract simplicial complex formed by the sets of vertices in the independent sets of G. The clique complex of G is equivalent to the independence complex of the complement graph of G.
Flag complex
A flag complex is an abstract simplicial complex with an additional property called "2-determined": for every subset S of vertices, if every pair of vertices in S is in the complex, then S itself is in the complex too.
Every clique complex is a flag complex: if every pair of vertices in S is a clique of size 2, then there is an edge between them, so S is a clique.
Every flag complex is a clique c |
https://en.wikipedia.org/wiki/Hagen%E2%80%93Rubens%20relation | In optics, the Hagen–Rubens relation (or Hagen–Rubens formula) is a relation between the coefficient of reflection and the conductivity for materials that are good conductors. The relation states that for solids where the contribution of the dielectric constant to the index of refraction is negligible, the reflection coefficient can be written as (in SI units):
R ≈ 1 − √(8ε₀ω/σ)
where ω is the angular frequency of observation, σ is the conductivity, and ε₀ is the vacuum permittivity. For metals, this relation holds for frequencies (much) smaller than the Drude relaxation rate, and in this case the otherwise frequency-dependent conductivity can be assumed frequency-independent and equal to the dc conductivity.
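As a numerical illustration of the formula, a short Python sketch; the copper conductivity used (about 5.96 × 10⁷ S/m) is an assumed textbook value:

import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def hagen_rubens_reflectivity(omega, sigma):
    """Hagen-Rubens reflection coefficient for a good conductor (SI units)."""
    return 1.0 - math.sqrt(8.0 * EPS0 * omega / sigma)

# copper (sigma ~ 5.96e7 S/m) at 1 THz: reflectivity is very close to 1
omega = 2 * math.pi * 1e12
print(hagen_rubens_reflectivity(omega, 5.96e7))   # ~0.997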
The relation is named after German physicists Ernst Bessel Hagen and Heinrich Rubens who discovered it in 1903. |
https://en.wikipedia.org/wiki/Parathyroid%20hormone%20family | The parathyroid hormone family is a family of structurally and functionally related proteins. Parathyroid hormone (PTH) is a polypeptidic hormone primarily involved in calcium metabolism. The parathyroid hormone-related protein (PTH-rP) is a related protein with predominantly paracrine function and possibly an endocrine role in lactation, as PTHrP has been found to be secreted by mammary glands into the circulation and increase bone turnover. PTH and PTH-rP bind to the same G-protein coupled receptor. The related protein PTH-L has been found in teleost fish, which also have two forms of PTH and PTHrP. Three subfamilies can be identified: PTH, PTHrP and PTH-L.
Human proteins containing this domain
PTH
PTHLH |
https://en.wikipedia.org/wiki/Biosurvey | A biosurvey, or biological survey, is a scientific study of organisms to assess the condition of an ecological resource, such as a water body.
Overview
Biosurveys are used by government agencies responsible for management of public lands, environmental planning and/or environmental regulation to assess ecological resources, such as rivers, streams, lakes and wetlands. They involve collection and analysis of animal and/or plant samples which serve as bioindicators. The studies may be conducted by professional scientists or volunteer organizations. They are conducted according to published procedures to ensure consistency in data collection and analysis, and to compare findings to established metrics.
Biosurveys typically use metrics such as species composition and richness (e.g. number of species, extent of pollution-tolerant species), and ecological factors (number of individuals, proportion of predators, presence of disease). Biosurveys may identify pollution problems that are difficult or expensive to detect using chemical testing procedures.
A biosurvey may be used to generate an index of biological integrity (IBI), a scoring system for an ecological resource.
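How such an index is assembled can be illustrated with a minimal Python sketch; the metric names, thresholds, and the 1/3/5 scoring below are illustrative inventions, not any agency's published protocol:

def score_metric(value, low, high):
    """Score one biosurvey metric on a 1/3/5 scale against two thresholds."""
    if value >= high:
        return 5      # comparable to reference condition
    if value >= low:
        return 3      # moderately degraded
    return 1          # severely degraded

def ibi(metrics, thresholds):
    """Sum per-metric scores into an overall index of biological integrity."""
    return sum(score_metric(metrics[name], low, high)
               for name, (low, high) in thresholds.items())

# hypothetical stream sample: species richness and % pollution-intolerant taxa
metrics = {"taxa_richness": 24, "pct_intolerant": 12.0}
thresholds = {"taxa_richness": (15, 25), "pct_intolerant": (10.0, 25.0)}
print(ibi(metrics, thresholds))   # 3 + 3 = 6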
Water resource biosurveys
Protocols for conducting biosurveys of water resources have been published by state government agencies and the U.S. Environmental Protection Agency (EPA). Agencies use these protocols to implement the Clean Water Act. Similar protocols have been published by volunteer organizations.
See also
Bioassay
Biological integrity
Biomonitoring
Indicator species
Water pollution
Water quality |
https://en.wikipedia.org/wiki/The%20Darkest%20Hour%20%28film%29 | The Darkest Hour is a 2011 science fiction action film directed by Chris Gorak from a screenplay by Jon Spaihts and produced by Timur Bekmambetov. The film stars Emile Hirsch, Max Minghella, Olivia Thirlby, Rachael Taylor, and Joel Kinnaman as a group of people caught in an alien invasion. The film was released on December 25, 2011 in the United States, and grossed $65 million on a $35 million budget.
Plot
Two Americans, Ben and Sean, travel to Moscow to sell a piece of software. After arriving, they find their business partner, Skyler, has betrayed them by already selling a knockoff application. Disappointed, the pair goes to a nightclub and meets tourists Natalie and Anne. Suddenly, the lights go out and everyone heads outside. There, they witness balls of light fall from the sky and fade away. Invisible entities begin hunting and disintegrating people, generating panic.
Ben, Sean, Natalie, Anne, and now Skyler hide in the club's storeroom for seven days. With most of their food gone, the group leaves the club and finds the city full of scorched cars and cinders, but empty of people. While they search for supplies in a police car, an alien appears. They hide as it moves closer, causing the car's lights and siren to turn on. This makes Sean realize that light bulbs and other technologies give the aliens away. The group takes shelter in a shopping mall. Sean and Natalie go to look for clothes and almost run into an alien who cannot see them through a glass wall. Sean theorizes that the aliens can only see their electrical charge, but not through glass or other insulators.
The group finds the US embassy gutted and lifeless. A logbook there tells them that the invasion is worldwide. They also discover a radio broadcasting a message in Russian. During their search, Skyler is killed by the aliens. The others see a light in a nearby apartment tower and go to investigate. They find a young woman named Vika and a man named Sergei, an electrical engineer. Sergei has made |
https://en.wikipedia.org/wiki/Amina%20Doumane | Amina Doumane (born September 2, 1990) is a Moroccan computer scientist who in 2017 won the French Gilles Kahn prize for the best doctoral thesis in France. Her thesis was on the infinitary proof theory of logics with fixed points. On January 31, 2018, Doumane was presented with the award by the French computer science society (SIF).
Research
Her doctoral thesis centered on a circular proof system.
Honours, decorations, awards and distinctions
Gilles Kahn prize for best French doctoral thesis, 2017, by the Société informatique de France (SIF). |
https://en.wikipedia.org/wiki/Gerard%20of%20Brussels | Gerard of Brussels was an early thirteenth-century geometer and philosopher known primarily for his Latin book Liber de motu (On Motion), which was a pioneering study in kinematics, probably written between 1187 and 1260. It has been described as "the first Latin treatise that was to take the fundamental approach to kinematics that was to characterize modern kinematics." He brought the works of Euclid and Archimedes back into popularity and was a direct influence on the Oxford Calculators (four kinematicists of Merton College) in the next century. Gerard is cited by Thomas Bradwardine in his Tractatus de proportionibus velocitatum (1328). His chief contribution was in moving away from Greek mathematics and closer to the notion of "a ratio of two unlike quantities such as distance and time", which is how modern physics defines velocity.
Modern editions
Clagett, Marshall. "The Liber de motu of Gerard of Brussels and the Origins of Kinematics in the West," Osiris, 12(1956):73–175. |
https://en.wikipedia.org/wiki/Genesee%20Scientific | Genesee Scientific Corporation is a global life sciences supplier.
History
Genesee Scientific was founded by Ken Fry in 1995 as a provider of supplies to laboratories located along Genesee Avenue in University City, San Diego.
Today, Genesee Scientific serves life science laboratories around the United States and the world.
Timeline
1995 - Genesee Scientific founded by Ken Fry.
2003 - Acquired United Scientific Plastics (USP), a California Bay Area distributor of laboratory plasticware.
2007 - Acquired Island Scientific, a Seattle area distributor of laboratory plasticware.
2007 - Established East Coast warehouse and offices located in Research Triangle Park, NC.
2009 - Acquired Continental Lab Products (CLP) brands.
2011 - Expansion of Olympus Plastics product line of tissue culture supplies.
2013 - Became an authorized Eppendorf distribution partner, a Germany-based manufacturer of instruments and consumables.
2016 - Introduction of Prometheus product line, proprietary products for protein biology research.
2017 - Launched GenClone product line, a full line of cell culture media products centered around ultra-pure, high-performance fetal bovine sera.
2021 - Acquired by LLR, a private equity firm investing in technology and healthcare businesses.
Innovations
Genesee Scientific is the world leader in innovation for and supply to the Drosophila (fruit fly) research community. Drosophila are widely used as a model organism in the field of genetics.
Genesee Scientific has been awarded three patents by the United States Patent and Trademark Office for its revolutionary Drosophila vial racking system (patent numbers D673,296 S; 8,136,679 B2; and 8,430,251 B2). This Drosophila vial racking system significantly decreases time spent racking vials and is more environmentally friendly compared to traditional vial packaging configurations.
Genesee Scientific has also developed the first atlas of Drosophila phenotypic markers available on mobile devices.
Registered t |
https://en.wikipedia.org/wiki/XtreemFS | XtreemFS is an object-based, distributed file system for wide area networks. XtreemFS's outstanding feature is full (all components) and real (all failure scenarios, including network partitions) fault tolerance, while maintaining POSIX file system semantics. Fault tolerance is achieved by using Paxos-based lease negotiation algorithms, which are used to replicate files and metadata. Support for SSL and X.509 certificates makes XtreemFS usable over public networks.
XtreemFS has been under development since early 2007. A first public release was made in August 2008. XtreemFS 1.0 was released in August 2009. The 1.0 release includes support for read-only replication with failover, data center replica maps, parallel reads and writes, and a native Windows client. Release 1.1 added automatic on-close replication and POSIX advisory locks. In mid-2011, release 1.3 added read/write replication for files. Version 1.4 underwent extensive testing and is considered production-quality. Improved Hadoop integration and support for SSDs were added in version 1.5.
XtreemFS is funded by the European Commission's IST programme.
The original XtreemFS team founded Quobyte Inc. in 2013. Quobyte offers a professional storage system as a commercial product.
Features
Secure connections to Contrail (software)
Clients for Linux, Windows and OS X
Open source (New BSD License since release 1.3)
Cross-site file replication with auto-failover
Partial replicas, objects fetched on demand
POSIX compatibility
Plugins for authentication policies, replica selection
RAID0 (striping) with parallel I/O over stripes
Read-only replication
Security (SSL, X.509 certificates)
Native servers for Linux and Solaris, and a non-native, Java- and Ant-based server for Windows
experimental file system driver for Hadoop (added in version 1.2)
Use cases
as a filer replacement (home directories and group shares),
in HPC cluster,
in Hadoop clusters,
for VM block storage
cross-branch data sharing
and many more use cases |
https://en.wikipedia.org/wiki/Cocompact%20group%20action | In mathematics, an action of a group G on a topological space X is cocompact if the quotient space X/G is a compact space. If X is locally compact, then an equivalent condition is that there is a compact subset K of X such that the image of K under the action of G covers X. It is sometimes referred to as mpact, a tongue-in-cheek reference to dual notions where prefixing with "co-" twice would "cancel out". |
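A standard illustration: the translation action of the integers Z on the real line R, given by n · x = x + n, is cocompact, since the quotient R/Z is the circle S¹, which is compact. In the locally compact formulation one may take the compact set K = [0, 1], whose translates K + n for n in Z cover R.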
https://en.wikipedia.org/wiki/Dinostratus | Dinostratus (c. 390 – c. 320 BCE) was a Greek mathematician and geometer, and the brother of Menaechmus. He is known for using the quadratrix to solve the problem of squaring the circle.
Life and work
Dinostratus' chief contribution to mathematics was his solution to the problem of squaring the circle. To solve this problem, Dinostratus made use of the trisectrix of Hippias, for which he proved a special property (Dinostratus' theorem) that allowed him to square the circle. Due to his work the trisectrix later became known as the quadratrix of Dinostratus as well. Although Dinostratus solved the problem of squaring the circle, he did not do so using ruler and compass alone, and so it was clear to the Greeks that his solution violated the foundational principles of their mathematics. Over 2,200 years later, Ferdinand von Lindemann would prove that it is impossible to square a circle using straightedge and compass alone.
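In one common modern parametrization (our rendering, not Dinostratus' own formulation), the quadratrix in a square of side a satisfies
x(y) = y · cot(πy / (2a)), with x(y) → 2a/π as y → 0,
so the curve's terminal point on the axis has abscissa 2a/π. Dinostratus' theorem is precisely this proportion between the side of the square and the terminal segment; from it a segment of length π, and hence a square equal in area to a given circle, can be constructed.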
Citations and footnotes |
https://en.wikipedia.org/wiki/Haplogroup%20D-Z27276 | Haplogroup D-Z27276, also known as Haplogroup D1a1, is a Y-chromosome haplogroup. It is one of two branches of Haplogroup D1, one of the descendants of Haplogroup D. The other is D-M55, which is only found in Japan.
This group is found in about 46.6% of Tibetan people. It branched off from D-M55 35,000–40,000 years before present, or possibly as early as 53,000 years before present.
One sample of a subgroup of D-Z27276 was also found among ancient samples of the Koban culture between Russia and Georgia.
Phylogenetic tree
Based on the ISOGG tree (version 14.151).
DE (YAP)
  D (CTS3946)
    D1 (M174/Page30, IMS-JST021355, Haplogroup D-M174)
      D1a (CTS11577)
        D1a1 (F6251/Z27276)
          D1a1a (M15) Tibet, Altai Republic, Mainland China
          D1a1a (F849)
            D1a1a1 (N1)
              D1a1a1a (Z27269)
                D1a1a1a1 (PH4979)
                  D1a1a1a1a2 (F729)
                    D1a1a1a1a2a (F17412)
                      D-F17412* Tibetan (Chamdo), Taiwan
                      D-MF10280 Sichuan, Japan (Osaka)
                    D1a1a1a1a2b (Y62194)
                      D1a1a1a1a2b1 (F17409) Tibetan (Chamdo), Sichuan
                      D1a1a1a1a2b2 (Y62517)
                        D1a1a1a1a2b2a (F16077) Tibetan (Shigatse, Shannan, Lhasa)
                        D1a1a1a1a2b2b (Y69263)
                          D1a1a1a1a2b2b1 (Y161914) Tibetan (Chamdo), Zhejiang
                          D1a1a1a1a2b2b2 (Y61759) Uzbekistan
                D1a1a1a2 (Z31591) Tibetan (Shigatse, Lhasa)
            D1a1a2 (F1070) Guangdong, Xishuangbanna (Dai)
          D1a1b (P99) Tibet, Mongol, Central Asia, Altai Republic, Mainland China
        D1a2 (Z3660)
          D1a2a (M64.1/Page44.1, M55) Japan (Yamato people, Ryukyuan people, Ainu people)
          D1a2b (Y34637) Andaman Islands (Onge people, Jarawa people)
      D1b (L1378) Philippines
    D2 (A5580.2) Nigeria, Saudi Arabia, African Americans in the United States, Syria
https://en.wikipedia.org/wiki/52nd%20meridian%20east | The meridian 52° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Europe, Asia, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 52nd meridian east forms a great circle with the 128th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 52nd meridian east passes through:
Arctic Ocean
Barents Sea (passing between Zemlya Georga and Hooker Island, Franz Josef Land)
Russia (Yuzhny Island, Novaya Zemlya)
Barents Sea
Russia
Kazakhstan
Caspian Sea
Kazakhstan (Mangyshlak Peninsula)
Caspian Sea
Iran
Persian Gulf
United Arab Emirates (an island in the emirate of Abu Dhabi)
Persian Gulf
United Arab Emirates (Emirate of Abu Dhabi)
Saudi Arabia (the meridian touches on the westernmost point of Oman at the border with Saudi Arabia)
Yemen
https://en.wikipedia.org/wiki/Edit%20conflict | An edit conflict is a computer problem that may occur when multiple editors edit the same file and cannot merge without losing part or all of their edit. The conflict occurs when an editor gets a copy of a shared document file, changes the copy, and attempts to save the changes to the original file, which has been altered by another editor after the copy was obtained.
Resolution
The simplest way to resolve an edit conflict is to ignore intervening edits and overwrite the current file. This may lead to a substantial loss of information, and alternative methods are often employed to resolve or prevent conflicts:
Manual resolution, where the editor determines which version to retain and may manually incorporate edits into the current version of the file.
Store backups or file comparisons of each edit, so that previous versions of the file can still be accessed once the original is overwritten.
File locking, which limits the file to one editor at a time to prevent edit conflicts. Computer writer Gary B. Shelly notes that many wiki systems "will block the contributor who is attempting to edit the page from being able to do so until the contributor currently editing the page saves changes or remains idle on the page for an extended period of time."
Merge, by determining whether the edits are in unrelated parts of the file and combining them without user intervention (a minimal sketch of conflict detection follows this list).
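The detection step that precedes any of these resolution strategies can be illustrated with a version-check ("optimistic concurrency") sketch. This is a minimal illustration, not the mechanism of any particular wiki or revision-control system; the store layout and function names are hypothetical:

```python
# Minimal sketch of edit-conflict detection via version checking.
# `store` maps document ids to {'version': int, 'text': str}; all names are hypothetical.

def try_save(store, doc_id, base_version, new_text):
    current = store[doc_id]
    if current['version'] != base_version:
        # Someone else saved since this editor obtained their copy.
        return 'edit conflict'
    store[doc_id] = {'version': base_version + 1, 'text': new_text}
    return 'saved'

store = {'page': {'version': 1, 'text': 'original'}}
print(try_save(store, 'page', 1, 'first edit'))   # saved
print(try_save(store, 'page', 1, 'second edit'))  # edit conflict (stale base version)
```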
Occurrences
The problem is encountered on heavily edited articles in wikis (with higher frequency in articles related to a current event or person), distributed data systems (e.g., Google Sites), and revision control systems not using file locking, as well as other high-traffic pages. If a significant amount of new text is involved, the editor who receives an "edit conflict" error message can cut and paste the new text into a word processor or similar program for further editing, or can paste that text directly into a newer version of the target document. Simple copyediting can be done directly on the ne
https://en.wikipedia.org/wiki/Twelfth%20root%20of%20two | The twelfth root of two, 2^(1/12) (the positive real solution of x^12 = 2), is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal-tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and a perfect fifth is 7 semitones). A semitone itself is divided into 100 cents (1 cent = 2^(1/1200)).
Numerical value
The twelfth root of two to 20 significant figures is 1.0594630943592952646. Fraction approximations in increasing order of accuracy include 18/17, 89/84, 196/185, 1657/1564, and 18904/17843.
Its numerical value has been computed to at least twenty billion decimal digits.
The equal-tempered chromatic scale
A musical interval is a ratio of frequencies, and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note therefore has a frequency 2^(1/12) times that of the one below it.
Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces the following sequence of pitches:
The final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher.
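The sequence just described is straightforward to reproduce. Here is a minimal sketch that generates the equal-tempered semitone steps from A4 = 440 Hz up to A5:

```python
# Generate the equal-tempered chromatic scale from A4 (440 Hz) to A5 (880 Hz).
SEMITONE = 2 ** (1 / 12)  # frequency ratio of one equal-tempered semitone

names = ['A4', 'A#4', 'B4', 'C5', 'C#5', 'D5', 'D#5', 'E5',
         'F5', 'F#5', 'G5', 'G#5', 'A5']
for n, name in enumerate(names):
    # A5 comes out as exactly 880.00 Hz, one octave above A4.
    print(f'{name}: {440.0 * SEMITONE ** n:.2f} Hz')
```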
Other tuning scales
Other tuning scales use slightly different interval ratios:
The just or Pythagorean perfect fifth is 3/2, and the difference between the equal-tempered perfect fifth and the just fifth is a grad, the twelfth root of the Pythagorean comma.
The equal-tempered Bohlen–Pierce scale uses the interval of the thirteenth root of three, 3^(1/13).
Stockhausen's Studie II (1954) makes use of the twenty-fifth root of five, 5^(1/25), a compound major third divided into 5×5 parts.
The |
https://en.wikipedia.org/wiki/Electronic%20filter%20topology | Electronic filter topology defines electronic filter circuits without taking note of the values of the components used but only the manner in which those components are connected.
Filter design characterises filter circuits primarily by their transfer function rather than their topology. Transfer functions may be linear or nonlinear. Common types of linear filter transfer function are high-pass, low-pass, band-pass, band-reject (notch) and all-pass. Once the transfer function for a filter is chosen, the particular topology to implement such a prototype filter can be selected so that, for example, one might choose to design a Butterworth filter using the Sallen–Key topology (a component-selection sketch follows).
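As an illustration of moving from a chosen transfer function to a concrete topology, the following sketch computes component values for a unity-gain, equal-resistor Sallen–Key low-pass stage with a Butterworth (Q = 1/√2) response. The design equations used, f0 = 1/(2πR√(C1·C2)) and Q = ½·√(C1/C2) for equal resistors, are standard for this topology; the function name and the chosen base capacitor are illustrative assumptions:

```python
import math

def sallen_key_butterworth_lowpass(f_cutoff_hz, c2_farads=10e-9):
    """Equal-resistor, unity-gain Sallen-Key low-pass with Butterworth response.

    For equal resistors, Q = 0.5 * sqrt(C1 / C2); Butterworth needs Q = 1/sqrt(2),
    so C1 = 2 * C2. The cutoff is f0 = 1 / (2 * pi * R * sqrt(C1 * C2)).
    """
    c1 = 2.0 * c2_farads
    r = 1.0 / (2.0 * math.pi * f_cutoff_hz * math.sqrt(c1 * c2_farads))
    return r, c1, c2_farads

r, c1, c2 = sallen_key_butterworth_lowpass(1000.0)  # 1 kHz cutoff
print(f'R1 = R2 = {r:.0f} ohms, C1 = {c1 * 1e9:.0f} nF, C2 = {c2 * 1e9:.0f} nF')
```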
Filter topologies may be divided into passive and active types. Passive topologies are composed exclusively of passive components: resistors, capacitors, and inductors. Active topologies also include active components (such as transistors, op amps, and other integrated circuits) that require power. Further, topologies may be implemented either in unbalanced form or else in balanced form when employed in balanced circuits. Implementations such as electronic mixers and stereo sound may require arrays of identical circuits.
Passive topologies
Passive filters have long been in development and use. Most are built from simple two-port networks called "sections". There is no formal definition of a section except that it must have at least one series component and one shunt component. Sections are invariably connected in a "cascade" or "daisy-chain" topology, consisting of additional copies of the same section or of completely different sections. The rules of series and parallel impedance would combine two sections consisting only of series components or only of shunt components into a single section.
Some passive filters, consisting of only one or two filter sections, are given special names including the L-section, T-section and Π-section, which are unbalanced filters, and the C-section, H-section and b |
https://en.wikipedia.org/wiki/Barefoot%20Networks | Barefoot Networks is a computer networking company headquartered in Santa Clara, California. The company designs and produces programmable network switch silicon, systems and software. The company was acquired by Intel in 2019.
Background
Barefoot Networks was founded in 2013. The company is backed by Andreessen Horowitz, Lightspeed Venture Partners and Sequoia Capital. The company's co-founders are Nick McKeown, Martin Izzard, Pat Bosshart, and Stefanos Sidiropoulos. Dan Lenoski joined in 2014 and was also given co-founder status. The company came out of stealth mode on June 14, 2016, and announced a third funding round led by Goldman Sachs, AT&T, Dell, and Google. Later in 2016, the company announced additional funding from Alibaba Group and Tencent. In 2017, Craig H. Barratt took over from Martin Izzard as CEO.
In June 2019, Intel announced it was acquiring Barefoot for an undisclosed price.
In January 2023, Intel stated that it had halted production of its networking chips.
Products
Barefoot Tofino
Barefoot Tofino is a P4-programmable switch chip that can run at speeds of up to 6.5 Tbit/s.
Programmability
P4 is a programming language designed to allow programming of packet forwarding dataplanes.
Barefoot Deep Insight
Barefoot Deep Insight is a network monitoring system that provides full visibility into every packet in a network. Running on commodity servers, Barefoot Deep Insight interprets, analyzes and pinpoints a myriad of conditions that can impede packet flow, and does so in real time and at line rate.
https://en.wikipedia.org/wiki/Michigan%20Mathematics%20Prize%20Competition | The Michigan Mathematics Prize Competition (MMPC) is an annual high school mathematics competition held in Michigan. Founded in 1958, the competition has grown to include over 10,000 high school participants (although middle-schoolers may also participate through a high school). The director and host of this competition change every three years, the most recent director being Stephanie Edwards of Hope College. The competition consists of two parts, whose scores are added together to determine the final result:
Part I: A 40-question multiple-choice exam open to all Michigan high schoolers
Part II: A 5-question proof exam given only to the top 1000 scorers on Part I
The top 100 scorers on the combined score of both parts of the competition are honored at an awards banquet, usually at the host university, although recent years have seen more than 100 people being honored due to ties.
Problem difficulty
The problems on the competition range from basic algebra to precalculus and are within the grasp of a high schooler's mathematical knowledge. The contest contains concepts from a variety of topics, including geometry and combinatorics.
Grading
Part I has 40 multiple-choice questions with five choices each. One point is awarded for each correct answer, giving a maximum score of forty points.
Part II has five proof-based problems worth ten points each, so the test is graded out of fifty points. This part is weighted by a factor of 1.2, so the total number of points possible is 60.
The highest possible score on the competition is therefore 100 points (the sum of the Part I score and the weighted Part II score).
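A minimal sketch of the scoring arithmetic described above (the function name is illustrative):

```python
def mmpc_total(part1_correct, part2_raw):
    """Combine MMPC scores: Part I is 0-40 points, Part II raw is 0-50,
    weighted by a factor of 1.2 for a 60-point maximum."""
    assert 0 <= part1_correct <= 40 and 0 <= part2_raw <= 50
    return part1_correct + 1.2 * part2_raw

print(mmpc_total(40, 50))  # perfect score: 100.0
print(mmpc_total(35, 42))  # 85.4
```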
Due to the difficulty of the exam, the winning score commonly falls between 90 and 95 points, and in especially tough years it can be even lower.
Through 2018, the only perfect scores were achieved in 2015, by Ankan Bhattacharya of International Academy East, and in 2016, by Chittesh Thavamani and Freddie Zhao, both of Troy High School. 2016 also marked the year of the highest scoring 3rd, |
https://en.wikipedia.org/wiki/Far-Play | Far-Play (stylized fAR-Play, from augmented reality) is a software platform developed at the University of Alberta, for creating location-based, scavenger-hunt style games which use the GPS and web-connectivity features of a player's smartphone. According to the development team, "our long-term objective is to develop a general framework that supports the implementation of AARGs that are fun to play and also educational". It utilizes Layar, an augmented reality smartphone application, QR codes located at particular real-world sites, or a phone's web browser, to facilitate games which require players to be in close physical proximity to predefined "nodes". A node, referred to by the developers as a Virtual Point of Interest (vPOI), is a point in space defined by a set of map coordinates; fAR-Play uses the GPS function of a player's smartphone — or, for indoor games, which are not easily tracked by GPS satellites, specially-created QR codes— to confirm that they are adequately near a given node. Once a player is within a node's proximity, Layar's various augmented reality features can be utilized to display a range of extra content overlaid upon the physical play-space or launch another application for extra functionality.
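The proximity check at the heart of this design can be illustrated with a great-circle distance test. This is a generic sketch, not fAR-Play's actual code; the function name and the 25-metre trigger radius are assumptions:

```python
import math

def within_vpoi(player_lat, player_lon, vpoi_lat, vpoi_lon, radius_m=25.0):
    """Return True if the player's GPS fix is within radius_m of a virtual
    point of interest (vPOI), using the haversine great-circle distance."""
    r_earth = 6_371_000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(player_lat), math.radians(vpoi_lat)
    dphi = math.radians(vpoi_lat - player_lat)
    dlam = math.radians(vpoi_lon - player_lon)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    distance = 2 * r_earth * math.asin(math.sqrt(a))
    return distance <= radius_m

# Example: is a player standing close enough to a hypothetical campus vPOI?
print(within_vpoi(53.5232, -113.5263, 53.5233, -113.5265))
```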
Development and features
fAR-Play began development in 2008, emerging from a collaborative project undertaken by a group of University of Alberta students from the Computer Science and Humanities Computing departments. fAR-Play is still under development, but a beta version is available for testing by request. fAR-Play's development is managed by an interdisciplinary team of professors and students at the University of Alberta. Currently, the development team's roster includes supervising professors Geoffrey Rockwell and Eleni Stroulia, developers Lucio Gutierrez and Matthew Delaney, and website developers Calen Henry and Garry Wong.
Technology
fAR-Play relies on a number of open- and closed-source web technologies as tools to create, and enhance |
https://en.wikipedia.org/wiki/ConSentry%20Networks | ConSentry Networks provided intelligent LAN switching with user- and application-level control for enterprises. ConSentry access-layer LAN switches understood the username, device, role, application at Layer 7, and destination for each flow, and applied policy dynamically. The company's patented multi-core CPU hardware enabled this intelligent switching at up to 10 Gbit/s throughput.
Jeff Prince, who co-founded Foundry Networks, was a co-founder of ConSentry. ConSentry was headquartered in Milpitas, California, and had offices throughout the world.
ConSentry Networks went out of business on August 20, 2009.
See also
Network Access Control
External links
Official site
InfoWorld Review |
https://en.wikipedia.org/wiki/Haynsworth%20inertia%20additivity%20formula | In mathematics, the Haynsworth inertia additivity formula, discovered by Emilie Virginia Haynsworth (1916–1985), concerns the number of positive, negative, and zero eigenvalues of a Hermitian matrix and of block matrices into which it is partitioned.
The inertia of a Hermitian matrix H is defined as the ordered triple
\mathrm{In}(H) = (\pi(H), \nu(H), \delta(H)),
whose components are respectively the numbers of positive, negative, and zero eigenvalues of H. Haynsworth considered a partitioned Hermitian matrix
H = \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^{*} & H_{22} \end{pmatrix},
where H11 is nonsingular and H12* is the conjugate transpose of H12. The formula states:
\mathrm{In}(H) = \mathrm{In}(H_{11}) + \mathrm{In}(H/H_{11}),
where H/H11 is the Schur complement of H11 in H:
H/H_{11} = H_{22} - H_{12}^{*} H_{11}^{-1} H_{12}.
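A quick numerical check of the additivity formula, sketched with NumPy on a random real symmetric (hence Hermitian) matrix; the partition size and tolerance are arbitrary choices, and the leading block is assumed nonsingular, as the formula requires:

```python
import numpy as np

def inertia(m, tol=1e-9):
    """Ordered triple (positive, negative, zero) of eigenvalue counts."""
    w = np.linalg.eigvalsh(m)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((np.abs(w) <= tol).sum()))

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 6))
h = (a + a.T) / 2                                  # random real symmetric matrix
h11, h12, h22 = h[:3, :3], h[:3, 3:], h[3:, 3:]

schur = h22 - h12.T @ np.linalg.solve(h11, h12)    # H/H11 (here H12* is just H12^T)

lhs = inertia(h)
rhs = tuple(x + y for x, y in zip(inertia(h11), inertia(schur)))
print(lhs, rhs, lhs == rhs)                        # the two triples agree
```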
Generalization
If H11 is singular, we can still define the generalized Schur complement, using the Moore–Penrose inverse H_{11}^{+} instead of H_{11}^{-1}.
The formula does not hold if H11 is singular. However, a generalization was proven in 1974 by Carlson, Haynsworth and Markham, to the effect that \pi(H) \ge \pi(H_{11}) + \pi(H/H_{11}) and \nu(H) \ge \nu(H_{11}) + \nu(H/H_{11}).
Carlson, Haynsworth and Markham also gave sufficient and necessary conditions for equality to hold.
See also
Block matrix pseudoinverse
Sylvester's law of inertia
Notes and references
Linear algebra
Matrix theory
Theorems in algebra |
https://en.wikipedia.org/wiki/Debasisa%20Mohanty | Debasisa Mohanty (born 30 November 1966) is an Indian computational biologist, bioinformatician and a staff scientist at the National Institute of Immunology, India. Known for his studies on structure and function prediction of proteins, genome analysis and computer simulation of biomolecular systems, Mohanty is an elected fellow of all three major Indian science academies, namely the Indian Academy of Sciences, the Indian National Science Academy and the National Academy of Sciences, India. The Department of Biotechnology of the Government of India awarded him the National Bioscience Award for Career Development, one of the highest Indian science awards, for his contributions to biosciences, in 2009.
Biography
Born on 30 November 1966, Debasisa Mohanty earned a postgraduate degree (MSc) in physics from the Indian Institute of Technology, Kanpur in 1988 and did his doctoral studies at the Molecular Biophysics Unit of the Indian Institute of Science to secure a PhD in computational biophysics in 1994. Subsequently, he completed his post-doctoral work, first at the Hebrew University of Jerusalem and later at the Scripps Research Institute. On his return to India, he joined the National Institute of Immunology, India (NII), where he serves as a Grade VII staff scientist and hosts a number of research scholars in his laboratory. He currently holds the position of director at NII. At NII, he also supervises the activities of RiPPMiner (a bioinformatics resource for deciphering chemical structures of RiPPs) and the Bioinformatics Centre.
Mohanty resides at the NII Campus, along Aruna Asaf Ali Marg in New Delhi.
Legacy
Mohanty's research focus is in the fields of computational biology and bioinformatics and he is known to have developed computational methods for predicting the substrate specificity of proteins as well as identified biosynthetic pathways. His work has assisted in widening the understanding of the function of putative proteins in genomes and the protein int |
https://en.wikipedia.org/wiki/DNA%20damage%20theory%20of%20aging | The DNA damage theory of aging proposes that aging is a consequence of unrepaired accumulation of naturally occurring DNA damage. Damage in this context is a DNA alteration that has an abnormal structure. Although both mitochondrial and nuclear DNA damage can contribute to aging, nuclear DNA is the main subject of this analysis. Nuclear DNA damage can contribute to aging either indirectly (by increasing apoptosis or cellular senescence) or directly (by increasing cell dysfunction).
Several review articles have shown that deficient DNA repair, allowing greater accumulation of DNA damage, causes premature aging, and that increased DNA repair facilitates greater longevity. For example, mouse models of nucleotide-excision-repair syndromes reveal a striking correlation between the degree to which specific DNA repair pathways are compromised and the severity of accelerated aging, strongly suggesting a causal relationship. Human population studies show that single-nucleotide polymorphisms in DNA repair genes that cause up-regulation of their expression correlate with increases in longevity. Lombard et al. compiled a lengthy list of mouse mutational models with pathologic features of premature aging, all caused by different DNA repair defects. Freitas and de Magalhães presented a comprehensive review and appraisal of the DNA damage theory of aging, including a detailed analysis of many forms of evidence linking DNA damage to aging. As an example, they described a study showing that centenarians of 100 to 107 years of age had higher levels of two DNA repair enzymes, PARP1 and Ku70, than general-population old individuals of 69 to 75 years of age. Their analysis supported the hypothesis that improved DNA repair leads to longer life span. Overall, they concluded that while the complexity of responses to DNA damage remains only partly understood, the idea that DNA damage accumulation with age is the primary cause of aging remains an intuitive and powerful one.
In humans and othe |
https://en.wikipedia.org/wiki/Inhalation | Inhalation (or inspiration) is the process of drawing air or other gases into the respiratory tract, primarily for the purpose of breathing and oxygen exchange within the body. It is a fundamental physiological function in humans and many other organisms, essential for sustaining life. Inhalation is the first phase of respiration, allowing the exchange of oxygen and carbon dioxide between the body and the environment, which is vital for the body's metabolic processes.
Physiology
The process of inhalation involves a series of coordinated movements and physiological mechanisms. The primary anatomical structures involved in inhalation are the respiratory system, which includes the nose, mouth, pharynx, larynx, trachea, bronchi, and lungs. Here is a brief overview of the inhalation process:
Inspiration: Inhalation begins with the contraction of the thoracic diaphragm, a dome-shaped muscle that separates the chest cavity from the abdominal cavity. As it contracts, the diaphragm moves downward, increasing the volume of the thoracic cavity.
Air entry: As the diaphragm contracts, the intercostal muscles between the ribs expand the chest cavity. This expansion creates a lower pressure inside the chest than in the atmosphere, causing air to flow into the lungs.
Air filtration: The nasal passages and the mouth act as entry points for air. These passages are lined with tiny hair-like structures called cilia and mucus-producing cells that help filter and humidify the incoming air, removing particles and debris before it reaches the lungs.
Gas exchange: Once the air enters the lungs, it travels through a branching network of tubes known as the bronchial tree, ultimately reaching tiny air sacs called alveoli. In the alveoli, oxygen from the inhaled air diffuses into the bloodstream, while carbon dioxide |
https://en.wikipedia.org/wiki/Casimersen | Casimersen, sold under the brand name Amondys 45, is an antisense oligonucleotide medication used for the treatment of Duchenne muscular dystrophy (DMD) in people who have a confirmed mutation of the dystrophin gene that is amenable to exon 45 skipping. It is an antisense oligonucleotide of the phosphorodiamidate morpholino oligomer (PMO) class. Duchenne muscular dystrophy is a rare disease that primarily affects boys. It is caused by low levels of a muscle protein called dystrophin. The lack of dystrophin causes progressive muscle weakness and premature death.
The most common side effects include upper respiratory tract infections, cough, fever, headache, joint pain and throat pain.
Casimersen was approved for medical use in the United States in February 2021, and it is the first FDA-approved targeted treatment for people who have a confirmed mutation of the DMD gene that is amenable to skipping exon 45.
Medical uses
Casimersen is indicated for the treatment of Duchenne muscular dystrophy (DMD) in people who have a confirmed mutation of the DMD gene that is amenable to exon 45 skipping.
Adverse effects
Common side effects include: headache, fever, joint pain, cough and cold symptoms.
Pharmacology
Pharmacodynamics
Duchenne muscular dystrophy is an X-linked recessive disorder that results in the absence of a functional dystrophin protein. Dystrophin consists of an N-terminal actin-binding domain, a C-terminal β-dystroglycan-binding domain, and 24 internal spectrin-like repeats. Dystrophin plays a role in muscle function; without it, muscle tissue is progressively replaced with fibrous and adipose tissue. Casimersen is an antisense phosphorodiamidate morpholino oligonucleotide designed to bind to exon 45 of the DMD pre-mRNA, causing that exon to be skipped (excluded) from the mature mRNA before translation. This change leads to the production of an internally truncated dystrophin protein.
History
Casimersen was evaluated in a double-blind, placebo-co |
https://en.wikipedia.org/wiki/Norton%27s%20dome | Norton's dome is a thought experiment that exhibits a non-deterministic system within the bounds of Newtonian mechanics. It was devised by John D. Norton in 2003. It is a special limiting case of a more general class of examples from 1997 due to Sanjay Bhat and Dennis Bernstein. The Norton's dome problem can be regarded as a problem in physics, mathematics, and philosophy.
Description
The model consists of an idealized particle initially sitting motionless at the apex of an idealized, radially symmetrical, frictionless dome described by the equation
h = \frac{2b}{3g} r^{3/2},
where h is the vertical displacement from the top of the dome to a point on the dome, r is the geodesic distance from the dome's apex to that point (in other words, a radial coordinate r is "inscribed" on the surface), g is the acceleration due to gravity and b is a proportionality constant.
From Newton's second law, the tangential component of the acceleration of a point mass resting frictionlessly on the surface is \ddot{r} = b\sqrt{r}.
Solutions to the equations of motion
Norton shows that there are two classes of mathematical solutions to this equation. In the first, the particle stays sitting at the apex of the dome forever. In the second, the particle sits at the apex of the dome for a while, and then after an arbitrary period of time starts to slide down the dome in an arbitrary direction. The apparent paradox in this second case is that this would seem to occur for no discernible reason, and without any radial force being exerted on it by any other entity, apparently contrary to both physical intuition and normal intuitive concepts of cause and effect, yet the motion is still entirely consistent with the mathematics of Newton's laws of motion.
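To make the second class concrete, here is the explicit form of these solutions under the reconstruction above, in units chosen so that b = 1 (the equation of motion then reads r̈ = √r); T ≥ 0 is the arbitrary instant at which motion begins:

```latex
% Solutions of \ddot{r} = \sqrt{r} that rest at the apex and then spontaneously move:
r(t) =
\begin{cases}
0, & t \le T, \\
\tfrac{1}{144}\,(t - T)^{4}, & t \ge T,
\end{cases}
\qquad T \ge 0 \text{ arbitrary.}
% Check: \dot{r} = (t-T)^{3}/36 and \ddot{r} = (t-T)^{2}/12 = \sqrt{(t-T)^{4}/144} = \sqrt{r}.
```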
To see that all these equations of motion are physically possible solutions, it's helpful to use the time reversibility of Newtonian mechanics. It is possible to roll a ball up the dome in such a way that it reaches the apex in finite time and with zero energy, and stops there. By time-reversal, |
https://en.wikipedia.org/wiki/Talairach%20coordinates | Talairach coordinates, also known as Talairach space, is a 3-dimensional coordinate system (known as an 'atlas') of the human brain, which is used to map the location of brain structures independent from individual differences in the size and overall shape of the brain. It is still common to use Talairach coordinates in functional brain imaging studies and to target transcranial stimulation of brain regions. However, alternative methods such as the MNI Coordinate System (originated at the Montreal Neurological Institute and Hospital) have largely replaced Talairach for stereotaxy and other procedures.
History
The coordinate system was first created by neurosurgeons Jean Talairach and Gabor Szikla in their work on the Talairach Atlas in 1967, creating a standardized grid for neurosurgery. The grid was based on the idea that distances to lesions in the brain are proportional to overall brain size (i.e., the distance between two structures is larger in a larger brain). In 1988, a second edition of the Talairach Atlas, coauthored by Tournoux, was published, and the system is therefore sometimes known as the Talairach–Tournoux system. This atlas was based on a single post-mortem dissection of a human brain.
The Talairach Atlas uses Brodmann areas as the labels for brain regions.
Description
The Talairach coordinate system is defined by making two anchors, the anterior commissure and posterior commissure, lie on a straight horizontal line. Since these two points lie on the midsagittal plane, the coordinate system is completely defined by requiring this plane to be vertical. Distances in Talairach coordinates are measured from the anterior commissure as the origin (as defined in the 1988 edition). The y-axis runs anterior–posterior through the commissures, the x-axis runs left–right, and the z-axis runs ventral–dorsal (down–up). Once the brain is reoriented to these axes, the researchers must also outline the six cortical outlines of the brain: anterior, posteri
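The frame just described can be constructed from three landmark coordinates. The following is a generic geometric sketch, not any toolkit's official routine; the landmark values and function names are hypothetical, and the third point is needed only to fix the midsagittal plane:

```python
import numpy as np

def talairach_frame(ac, pc, midsagittal_point):
    """Build an origin and rotation for an AC-origin, AC-PC-aligned frame.

    ac, pc: anterior and posterior commissure positions in scanner space.
    midsagittal_point: any third point on the midsagittal plane above the AC-PC line.
    Returns (origin, R) with rows of R giving the x (left-right),
    y (posterior-anterior) and z (ventral-dorsal) directions.
    """
    ac, pc, m = (np.asarray(p, dtype=float) for p in (ac, pc, midsagittal_point))
    y = ac - pc
    y /= np.linalg.norm(y)          # y-axis: along the AC-PC line, pointing anterior
    x = np.cross(y, m - ac)
    x /= np.linalg.norm(x)          # x-axis: normal to the midsagittal plane
    z = np.cross(x, y)              # z-axis: completes the right-handed frame
    return ac, np.vstack([x, y, z])

def to_frame(point, origin, rot):
    return rot @ (np.asarray(point, dtype=float) - origin)

origin, rot = talairach_frame(ac=[0.0, 2.0, 1.0], pc=[0.0, -24.0, 1.0],
                              midsagittal_point=[0.0, 2.0, 40.0])
print(to_frame([10.0, 2.0, 1.0], origin, rot))  # a point 10 units to one side of the AC
```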
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Romania | In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Romania (RO), the three levels are:
NUTS codes
RO1 Macroregion one (Macroregiunea Unu)
RO11 Nord-Vest
RO111 Bihor County
RO112 Bistrița-Năsăud County
RO113 Cluj County
RO114 Maramureș County
RO115 Satu Mare County
RO116 Sălaj County
RO12 Centru
RO121 Alba County
RO122 Brașov County
RO123 Covasna County
RO124 Harghita County
RO125 Mureș County
RO126 Sibiu County
RO2 Macroregion two (Macroregiunea doi)
RO21 Nord-Est
RO211 Bacău County
RO212 Botoșani County
RO213 Iași County
RO214 Neamț County
RO215 Suceava County
RO216 Vaslui County
RO22 Sud-Est
RO221 Brăila County
RO222 Buzău County
RO223 Constanța County
RO224 Galați County
RO225 Tulcea County
RO226 Vrancea County
RO3 Macroregion three (Macroregiunea trei)
RO31 Sud-Muntenia
RO311 Argeș County
RO312 Călărași County
RO313 Dâmbovița County
RO314 Giurgiu County
RO315 Ialomița County
RO316 Prahova County
RO317 Teleorman County
RO32 București-Ilfov
RO321 București
RO322 Ilfov County
RO4 Macroregion four (Macroregiunea patru)
RO41 Sud-Vest Oltenia
RO411 Dolj County
RO412 Gorj County
RO413 Mehedinți County
RO414 Olt County
RO415 Vâlcea County
RO42 Vest
RO421 Arad County
RO422 Caraș-Severin County
RO423 Hunedoara County
RO424 Timiș County
In the 2003 version, the codes were as follows:
RO0 Romania
RO01 Nord-Est
RO011 Bacău County
RO012 Botoșani County
RO013 Iași County
RO014 Neamț County
RO015 Suceava County
RO016 Vaslui County
RO02 Sud-Est
RO021 Brăila County
RO022 Buzău County
RO023 Constanța County
RO024 Galați County
RO025 Tulcea County
RO026 Vrancea County
RO03 Sud-Muntenia
RO031 Argeș County
RO032 Călărași County
RO033 Dâmbovița County
RO034 Giurgiu County
RO035 Ialomița County
RO036 Prahova County
RO037 Teleorman County
RO04 Sud-Vest Oltenia
RO041 Dolj County
RO042 Gorj County
RO043 Mehedinți County
RO044 Olt County
RO045 Vâlcea County
RO05 Vest
RO051 Arad County
RO052 Caraș-Severin County
RO053 Hunedoara County
RO054 Timiș County
RO06 Nord- |
https://en.wikipedia.org/wiki/Perineal%20artery | The perineal artery (superficial perineal artery) arises from the internal pudendal artery and turns upward, crossing either over or under the superficial transverse perineal muscle. It then runs forward, parallel to the pubic arch, in the interspace between the bulbospongiosus and ischiocavernosus muscles, both of which it supplies. Finally, it divides into several posterior scrotal branches, which are distributed to the skin and dartos tunic of the scrotum.
As it crosses the superficial transverse perineal muscle it gives off the transverse perineal artery which runs transversely on the cutaneous surface of the muscle, and anastomoses with the corresponding vessel of the opposite side and with the perineal and inferior hemorrhoidal arteries.
It supplies the transverse perineal muscles and the structures between the anus and the urethral bulb. |
https://en.wikipedia.org/wiki/Aphonogelia | Aphonogelia is a rare neuropsychological condition with which a person cannot laugh audibly. |
https://en.wikipedia.org/wiki/Serum%20chloride | Chloride is an anion in the human body needed for metabolism (the process of turning food into energy). It also helps keep the body's acid-base balance. The amount of serum chloride is carefully controlled by the kidneys.
Chloride ions have important physiological roles. For instance, in the central nervous system, the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons. Also, the chloride-bicarbonate exchanger biological transport protein relies on the chloride ion to increase the blood's capacity of carbon dioxide, in the form of the bicarbonate ion; this is the mechanism underpinning the chloride shift occurring as the blood passes through oxygen-consuming capillary beds.
The normal blood reference range of chloride for adults in most labs is 96 to 106 milliequivalents (mEq) per liter. The normal range may vary slightly from lab to lab. Normal ranges are usually shown next to results in the lab report. A diagnostic test may use a chloridometer to determine the serum chloride level.
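A small sketch of interpreting a result against the adult reference range quoted above. The cut-offs follow that 96 to 106 mEq/L range; the function name and messages are illustrative, and note that for a monovalent ion such as chloride, 1 mEq equals 1 mmol:

```python
def classify_serum_chloride(value_meq_per_l, low=96.0, high=106.0):
    """Compare a serum chloride result against the adult reference range.

    For chloride (a monovalent ion), 1 mEq/L equals 1 mmol/L, so the same
    cut-offs apply to results reported in either unit.
    """
    if value_meq_per_l < low:
        return 'below reference range'
    if value_meq_per_l > high:
        return 'above reference range'
    return 'within reference range'

for result in (92.0, 101.0, 110.0):
    print(result, classify_serum_chloride(result))
```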
The North American Dietary Reference Intake recommends an intake of between 2,300 and 3,600 mg/day for 25-year-old males.
https://en.wikipedia.org/wiki/TPC-W | TPC-W was a web server and database performance benchmark, proposed by Transaction Processing Performance Council.
This benchmark defined a complete Web-based shop for searching, browsing and ordering books. The system under test needed to provide an implementation of this shop. The TPC-W standard describes all pages that must be present in the shop (including sample HTML code), interaction graphs (how the user navigates between the pages), transition tables (the probability that the user will move from page A to page B) and the database schema. In addition, the standard provided a generator to produce synthetic images (book covers) that the system under test needed to show in the virtual shop. The standard also describes how random strings and random numbers must be generated.
During testing, the server was visited by a growing number of web bots, each simulating an individual customer. The pause between web interactions from a single customer and the total number of pages each customer visits per session are random numbers that must follow an asymmetric distribution specified by the standard. The navigation pattern is defined by three transition tables that differ according to the preferred plans of the user (shopping mix, browsing mix and ordering mix). The main measured parameter was WIPS, the number of web interactions per second that the system is capable of delivering.
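The navigation model can be pictured as a Markov chain driven by a transition table. The sketch below uses a made-up table, not the official TPC-W shopping, browsing or ordering mixes, and the page names are illustrative:

```python
import random

# Hypothetical transition table: probability of moving from one page to another.
TRANSITIONS = {
    'home':   {'search': 0.5, 'browse': 0.4, 'cart': 0.1},
    'search': {'detail': 0.6, 'home': 0.4},
    'browse': {'detail': 0.7, 'home': 0.3},
    'detail': {'cart': 0.3, 'search': 0.3, 'home': 0.4},
    'cart':   {'buy': 0.5, 'home': 0.5},
    'buy':    {'home': 1.0},
}

def simulate_session(pages_per_session, rng=random):
    """Walk the transition table like a single emulated customer (web bot)."""
    page, visited = 'home', []
    for _ in range(pages_per_session):
        visited.append(page)
        targets, weights = zip(*TRANSITIONS[page].items())
        page = rng.choices(targets, weights=weights)[0]
    return visited

print(simulate_session(8))
```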
It was also possible to visit and actually use the virtual shop with the ordinary browser.
The official TPC-W page in the past included performance comparisons, providing information on how well the virtual shop performed when implemented with various development platforms and running on various web servers and operating systems. This information is no longer available from the web site.
While discontinued, TPC-W is still used in universities for teaching, requiring students to implement a TPC-W-compliant shop and perform benchmarking.
Other web application benchmarks
RUBBoS
Ru |
https://en.wikipedia.org/wiki/Avoidance%20reaction | Avoidance reaction is a term used in the description of the movement of paramecium. This helps the cell avoid obstacles and causes other objects to bounce off of the cell's outer membrane. The paramecium does this by reversing the direction in which its cilia beat. This results in stopping, spinning or turning, after which point the paramecium resumes swimming forward. If multiple avoidance reactions follow one another, it is possible for a paramecium to swim backward, though not as smoothly as swimming forward.
Avoidance reaction occurs when the cell hits an obstruction, providing an anterior, mechanical stimulus:
- The cell will then reverse.
- It will then stop and rotate.
- Now facing a new direction, the cell will move off in that direction.
This process will continue until the cell is able to negotiate its way around the obstruction.
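A toy simulation of the stop, reverse, rotate, resume cycle described above; this models only the observable behaviour, not the underlying calcium and membrane-potential dynamics, and the step size and obstacle shape are arbitrary:

```python
import math
import random

def swim(x, y, heading, blocked, steps=200, step_len=1.0, rng=random):
    """Move forward; on hitting an obstruction, perform an avoidance reaction:
    reverse one step, rotate to a new random heading, then resume swimming."""
    for _ in range(steps):
        nx = x + step_len * math.cos(heading)
        ny = y + step_len * math.sin(heading)
        if blocked(nx, ny):
            # avoidance reaction: back up, then spin to face a new direction
            x -= step_len * math.cos(heading)
            y -= step_len * math.sin(heading)
            heading = rng.uniform(0.0, 2.0 * math.pi)
        else:
            x, y = nx, ny
    return x, y

# Example obstruction: a wall at x = 20; the cell negotiates its way around it.
print(swim(0.0, 0.0, heading=0.0, blocked=lambda px, py: px >= 20.0))
```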
Movement of Paramecium cells is controlled by calcium ions inside the cell and by membrane potentials. The simplest explanation for the avoidance reaction is that the membrane potential controls the influx of calcium ions, which regulates the beat frequency and angles of the cilia on the surface of the cell.
https://en.wikipedia.org/wiki/Incertae%20sedis | Incertae sedis (Latin for "of uncertain placement") or problematica is a term used for a taxonomic group whose broader relationships are unknown or undefined. Alternatively, such groups are frequently referred to as "enigmatic taxa". In the system of open nomenclature, uncertainty at specific taxonomic levels is indicated by incertae familiae (of uncertain family), incerti subordinis (of uncertain suborder), incerti ordinis (of uncertain order) and similar terms.
Examples
The fossil plant Paradinandra suecica could not be assigned to any family, but was placed incertae sedis within the order Ericales when described in 2001.
The fossil Gluteus minimus, described in 1975, could not be assigned to any known animal phylum. The genus is therefore incertae sedis within the kingdom Animalia.
While it was unclear to which order the New World vultures (family Cathartidae) should be assigned, they were placed in Aves incertae sedis. It was later agreed to place them in a separate order, Cathartiformes.
Bocage's longbill, Motacilla bocagii, previously known as Amaurocichla bocagii, is a species of passerine bird that belongs to the superfamily Passeroidea. Since it was unclear to which family it belongs, it was classified as Passeroidea incertae sedis, until a 2015 phylogenetic study placed it in Motacilla of Motacillidae.
Parakaryon myojinensis, a single-celled organism that is apparently distinct from prokaryotes and eukaryotes.
Metallogenium is a bacterium that can form star-shaped minerals.
In formal nomenclature
When formally naming a taxon, uncertainty about its taxonomic classification can be problematic. The International Code of Nomenclature for algae, fungi, and plants stipulates that "species and subdivisions of genera must be assigned to genera, and infraspecific taxa must be assigned to species, because their names are combinations", but ranks higher than the genus may be assigned incertae sedis.
Reason for use
Poor description
This excerpt from a 2007 scientific paper about crustaceans of the Kuril–Kamchatka Trench and the Japan Trench describes typical circumstan |
https://en.wikipedia.org/wiki/Brian%20Randell | Brian Randell DSc FBCS FLSW (born 1936) is a British computer scientist, and emeritus professor at the School of Computing, Newcastle University, United Kingdom. He specialises in research into software fault tolerance and dependability, and is a noted authority on the early pre-1950 history of computing hardware.
Biography
Randell was employed at English Electric from 1957 to 1964, where he worked on compilers. His work on ALGOL 60 is particularly well known, including the development of the Whetstone compiler for the English Electric KDF9, an early stack machine. In 1964, he joined IBM, where he worked at the Thomas J. Watson Research Center on high-performance computer architectures and on operating system design methodology. In May 1969, he became a professor of computing science at the then-named University of Newcastle upon Tyne, where he has since worked in the area of software fault tolerance and dependability.
He is a member of the Special Interest Group on Computers, Information and Society (SIGCIS) of the Society for the History of Technology, and a founding member of the editorial board of the IEEE Annals of the History of Computing journal. He is a Fellow of the Association for Computing Machinery (2008). He was elected a Fellow of the Learned Society of Wales in 2011.
He was, until 1969, a member of the International Federation for Information Processing (IFIP) Working Group 2.1 (WG2.1) on Algorithmic Languages and Calculi, which specifies, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He is also a founding member of IFIP WG2.3 on Programming Methodology and of IFIP WG10.4 on Dependability and Fault Tolerance.
He is married (to Liz, a teacher of French) and has four children.
Work
Brian Randell's main research interests are in the field of computer science, specifically on system dependability and fault tolerance. His interest in the history of computing was started by coming across the then almos |
https://en.wikipedia.org/wiki/Agata%20Ciabattoni | Agata Ciabattoni is an Italian mathematical logician specializing in non-classical logic. She is a full professor at the Institute of Logic and Computation of the Faculty of Informatics at the Vienna University of Technology (TU Wien), and a co-chair of the Vienna Center for Logic and Algorithms of TU Wien (VCLA).
Education and career
Ciabattoni is originally from Ripatransone. She studied computer science at the University of Bologna, and completed her Ph.D. in 2000 at the University of Milan. Her dissertation, Proof-theory in many-valued logics, was supervised by Daniele Mundici.
She moved to Vienna in 2000 with the support of an EU Marie Curie Fellowship, and in 2007 she earned her habilitation at TU Wien.
She remains affiliated with TU Wien, as a professor in the faculty of informatics.
She also serves as the Collegium Logicum lecture series chair for the Kurt Gödel Society.
Contributions
One of Ciabattoni's projects at TU Wien involves using mathematical logic to formalize the ethical reasoning in the Vedas, a body of Indian sacred texts.
Recognition
In 2011, Ciabattoni won the Start-Preis of the Austrian Science Fund, the only woman to win the prize that year. |
https://en.wikipedia.org/wiki/Wax%20thermostatic%20element | The wax thermostatic element was invented in 1934 by Sergius Vernet (1899–1968). Its principal application is in automotive thermostats used in the engine cooling system. The first applications in the plumbing and heating industries were in Sweden (1970) and in Switzerland (1971).
Wax thermostatic elements transform heat energy into mechanical energy using the thermal expansion of waxes when they melt. This wax motor principle also finds applications besides engine cooling systems, including heating system thermostatic radiator valves, plumbing, industrial, and agriculture.
Automotive thermostats
The internal combustion engine cooling thermostat maintains the temperature of the engine near its optimum operating temperature by regulating the flow of coolant to an air-cooled radiator. This regulation is now carried out by an internal thermostat. Conveniently, both the sensing element of the thermostat and its control valve may be placed at the same location, allowing the use of a simple self-contained non-powered thermostat as the primary device for the precise control of engine temperature. Although most vehicles now have a temperature-controlled electric cooling fan, "the unassisted air stream can provide sufficient cooling up to 95% of the time", and so such a fan is not the mechanism for primary control of the internal temperature.
Research in the 1920s showed that cylinder wear was aggravated by condensation of fuel when it contacted a cool cylinder wall which removed the oil film. The development of the automatic thermostat in the 1930s solved this problem by ensuring fast engine warm-up.
The first thermostats used a sealed capsule of an organic liquid with a boiling point just below the desired opening temperature. These capsules were made in the form of a cylindrical bellows. As the liquid boiled inside the capsule, the capsule bellows expanded, opening a sheet brass plug valve within the thermostat. As these thermostats could fail in service, they were des |