https://en.wikipedia.org/wiki/Rotary%20converter
|
A rotary converter is a type of electrical machine which acts as a mechanical rectifier, inverter or frequency converter.
Rotary converters were used to convert alternating current (AC) to direct current (DC), or DC to AC power, before the advent of chemical or solid state power rectification and inverting. They were commonly used to provide DC power for commercial, industrial and railway electrification from an AC power source.
Principles of operation
The rotary converter can be thought of as a motor-generator, where the two machines share a single rotating armature and set of field coils. The basic construction of the rotary converter consists of a DC generator (dynamo) with a set of slip rings tapped into its rotor windings at evenly spaced intervals. When a dynamo is spun the electric currents in its rotor windings alternate as it rotates in the magnetic field of the stationary field windings. This alternating current is rectified by means of a commutator, which allows direct current to be extracted from the rotor. This principle is taken advantage of by energizing the same rotor windings with AC power, which causes the machine to act as a synchronous AC motor. The rotation of the energized coils excites the stationary field windings producing part of the direct current. The other part is alternating current from the slip rings, which is directly rectified into DC by the commutator. This makes the rotary converter a hybrid dynamo and mechanical rectifier. When used in this way it is referred to as a synchronous rotary converter or simply a synchronous converter. The AC slip rings also allow the machine to act as an alternator.
The device can be reversed and DC applied to the field and commutator windings to spin the machine and produce AC power. When operated as a DC to AC machine it is referred to as an inverted rotary converter.
One way to envision what is happening in an AC-to-DC rotary converter is to imagine a rotary reversing switch that is being dr
|
https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor
|
Fibroblast growth factors (FGF) are a family of cell signalling proteins produced by macrophages; they are involved in a wide variety of processes, most notably as crucial elements for normal development in animal cells. Any irregularities in their function lead to a range of developmental defects. These growth factors typically act as systemic or locally circulating molecules of extracellular origin that activate cell surface receptors. A defining property of FGFs is that they bind to heparin and to heparan sulfate. Thus, some are sequestered in the extracellular matrix of tissues that contains heparan sulfate proteoglycans and are released locally upon injury or tissue remodeling.
Families
In humans, 23 members of the FGF family have been identified, all of which are structurally related signaling molecules:
Members FGF1 through FGF10 all bind fibroblast growth factor receptors (FGFRs). FGF1 is also known as acidic fibroblast growth factor, and FGF2 is also known as basic fibroblast growth factor.
Members FGF11, FGF12, FGF13, and FGF14, also known as FGF homologous factors 1-4 (FHF1-FHF4), have been shown to have distinct functions compared to the FGFs. Although these factors possess remarkably similar sequence homology, they do not bind FGFRs and are involved in intracellular processes unrelated to the FGFs. This group is also known as "iFGF".
Human FGF18 is involved in cell development and morphogenesis in various tissues including cartilage.
Human FGF20 was identified based on its homology to Xenopus FGF-20 (XFGF-20).
FGF15 through FGF23 were described later and functions are still being characterized. FGF15 is the mouse ortholog of human FGF19 (there is no human FGF15) and, where their functions are shared, they are often described as FGF15/19. In contrast to the local activity of the other FGFs, FGF15/19, FGF21 and FGF23 have hormonal systemic effects.
Receptors
The mammalian fibroblast growth factor receptor family has 4 members, FGFR1, FGFR2, FGFR3,
|
https://en.wikipedia.org/wiki/Stanley%20Pons
|
Bobby Stanley Pons (born August 23, 1943) is an American electrochemist known for his work with Martin Fleischmann on cold fusion in the 1980s and 1990s.
Early life
Pons was born in Valdese, North Carolina. He attended Valdese High School, then Wake Forest University in Winston-Salem, North Carolina, where he studied chemistry. He began his PhD studies in chemistry at the University of Michigan in Ann Arbor, but left before completing his PhD. His thesis resulted in a paper, co-authored in 1967 with Harry B. Mark, his adviser. The New York Times wrote that it pioneered a way to measure the spectra of chemical reactions on the surface of an electrode.
He decided to finish his PhD in England at the University of Southampton, where in 1975 he met Martin Fleischmann. Pons was a student in Alan Bewick's group; he earned his PhD in 1978.
Career
On March 23, 1989, while Pons was the chairman of the chemistry department at the University of Utah, he and Fleischmann announced the experimental production of "N-Fusion", which the press quickly labeled cold fusion. After a short period of public acclaim, hundreds of scientists attempted to reproduce the effects but generally failed; once the claims were found to be unreproducible, the scientific community concluded they were incomplete and inaccurate.
Pons moved to France in 1992, along with Fleischmann, to work at a Toyota-sponsored laboratory. The laboratory closed in 1998 after a £12 million research investment without conclusive results. He gave up his US citizenship and became a French citizen.
|
https://en.wikipedia.org/wiki/T-carrier
|
The T-carrier is a member of the series of carrier systems developed by AT&T Bell Laboratories for digital transmission of multiplexed telephone calls.
The first version, the Transmission System 1 (T1), was introduced in 1962 in the Bell System, and could transmit up to 24 telephone calls simultaneously over a single transmission line of copper wire. Subsequent specifications carried multiples of the basic T1 data rate (1.544 Mbit/s), such as T2 (6.312 Mbit/s) with 96 channels, T3 (44.736 Mbit/s) with 672 channels, and others.
Although a T2 was defined as part of AT&T's T-carrier system, which defined five levels, T1 through T5, only the T1 and T3 were commonly in use.
Transmission System 1
The T-carrier is a hardware specification for carrying multiple time-division multiplexed (TDM) telecommunications channels over a single four-wire transmission circuit. It was developed by AT&T at Bell Laboratories ca. 1957 and first employed by 1962 for long-haul pulse-code modulation (PCM) digital voice transmission with the D1 channel bank.
T-carriers are commonly used for trunking between switching centers in a telephone network, including to private branch exchange (PBX) interconnect points. They use the same twisted-pair copper wire that analog trunks used, employing one pair for transmitting and another pair for receiving. Signal repeaters may be used for extended distance requirements.
Before the digital T-carrier system, carrier wave systems such as 12-channel carrier systems worked by frequency-division multiplexing; each call was an analog signal. A T1 trunk could transmit 24 telephone calls at a time, because it used a digital carrier signal called Digital Signal 1 (DS-1). DS-1 is a communications protocol for multiplexing the bitstreams of up to 24 telephone calls, along with two special bits: a framing bit (for frame synchronization) and a maintenance-signaling bit. T1's maximum data transmission rate is 1.544 megabits per second.
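The 1.544 Mbit/s figure follows directly from the frame arithmetic. A short sketch using the standard 193-bit DS-1 frame (variable names here are illustrative):

```python
# T1/DS-1 line-rate arithmetic: 24 channels of 8 bits each, plus one
# framing bit per frame, at the 8000 frames/s dictated by 8 kHz voice
# sampling (one frame per 125 microsecond sampling interval).
CHANNELS = 24
BITS_PER_SAMPLE = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000

bits_per_frame = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS  # 193 bits
line_rate = bits_per_frame * FRAMES_PER_SECOND              # bits per second

print(bits_per_frame)   # 193
print(line_rate / 1e6)  # 1.544 (Mbit/s)
```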
Outside of the United
|
https://en.wikipedia.org/wiki/The%20Analyst
|
The Analyst (subtitled A Discourse Addressed to an Infidel Mathematician: Wherein It Is Examined Whether the Object, Principles, and Inferences of the Modern Analysis Are More Distinctly Conceived, or More Evidently Deduced, Than Religious Mysteries and Points of Faith) is a book by George Berkeley. It was first published in 1734, first by J. Tonson (London), then by S. Fuller (Dublin). The "infidel mathematician" is believed to have been Edmond Halley, though others have speculated Sir Isaac Newton was intended.
Background and purpose
From his earliest days as a writer, Berkeley had taken up his satirical pen to attack what were then called 'free-thinkers' (secularists, skeptics, agnostics, atheists, etc.—in short, anyone who doubted the truths of received Christian religion or called for a diminution of religion in public life). In 1732, in the latest installment in this effort, Berkeley published his Alciphron, a series of dialogues directed at different types of 'free-thinkers'. One of the archetypes Berkeley addressed was the secular scientist, who discarded Christian mysteries as unnecessary superstitions, and declared his confidence in the certainty of human reason and science. Against his arguments, Berkeley mounted a subtle defense of the validity and usefulness of these elements of the Christian faith.
Alciphron was widely read and caused a bit of a stir. But it was an offhand comment mocking Berkeley's arguments by the 'free-thinking' royal astronomer Edmond Halley that prompted Berkeley to pick up his pen again and try a new tack. The result was The Analyst, conceived as a satire attacking the foundations of mathematics with the same vigor and style as 'free-thinkers' routinely attacked religious truths.
Berkeley sought to take mathematics apart, claimed to uncover numerous gaps in proof, attacked the use of infinitesimals, the diagonal of the unit square, the very existence of numbers, etc. The general point was not so much to mock mathe
|
https://en.wikipedia.org/wiki/Strapwork
|
In the history of art and design, strapwork is the use of stylised representations in ornament of ribbon-like forms. These may loosely imitate leather straps, parchment or metal cut into elaborate shapes, with piercings, and often interwoven in a geometric pattern. In early examples there may or may not be three-dimensionality, either actual in curling relief ends of the elements, or just represented in two dimensions. As the style continued, these curling elements became more prominent, often turning into scrollwork, where the ends curl into spirals or scrolls. By the Baroque scrollwork was a common element in ornament, often partly submerged by other rich ornament.
European strapwork is a frequent background and framework for grotesque ornament – arabesque or candelabra figures filled with fantastical creatures, garlands and other elements – which were a frequent decorative motif in 16th-century Northern Mannerism, and revived in the 19th century and which may appear on walls – painted, in frescos, carved in wood, or moulded in plaster or stucco – or in graphic work. The Europeanized arabesque patterns called moresque are also very often combined with strapwork, especially in tooled and gilded bookbindings.
Scrollwork is a variant that tended to replace strapwork almost completely by the Baroque. It is less geometric and more organic, more three dimensional, and with emphasis on the curling ends of the "straps". The Italian artists at the Palace of Fontainebleau had already moved on to this by the 1530s, but in provincial work in northern Europe flat strapwork panels continued for another century or more.
Where there is no suggestion of three dimensions – curling ends and the like – the decoration may also be called bandwork or "interlaced bands", the more technically correct term. Peter Fuhring derives this style from Islamic ornament.
Italy
Strapwork designs, influenced by Islamic ornament, are found on tooled book-covers in Italy and Spain by the mid
|
https://en.wikipedia.org/wiki/Continuum%20%28measurement%29
|
Continuum (pl.: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states.
In physics
In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often described as either continuous (with energy at all wavelengths) or discrete (energy at only certain wavelengths).
In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts.
In mathematics and philosophy
A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's.
Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's, in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: "It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis".
In social sciences, psychology and psychiatry
In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those lev
|
https://en.wikipedia.org/wiki/Martin%20Charles%20Golumbic
|
Martin Charles Golumbic (born 1948) is a mathematician and computer scientist known for his research on perfect graphs, graph sandwich problems, compiler optimization, and spatial-temporal reasoning. He is a professor emeritus of computer science at the University of Haifa, and was the founder of the journal Annals of Mathematics and Artificial Intelligence.
Education and career
Golumbic majored in mathematics at Pennsylvania State University, graduating in 1970 with bachelor's and master's degrees. He completed his Ph.D. at Columbia University in 1975, with the dissertation Comparability Graphs and a New Matroid supervised by Samuel Eilenberg.
He was an assistant professor at the Courant Institute of Mathematical Sciences of New York University from 1975 until 1980, when he moved to Bell Laboratories. From 1983 to 1992 he worked for IBM Research in Israel, and from 1992 to 2000 he was a professor of mathematics and computer science at Bar-Ilan University. He moved to the University of Haifa in 2000, where he founded the Caesarea Edmond Benjamin de Rothschild Institute for Interdisciplinary Applications of Computer Science.
In 1989, Golumbic founded the Bar-Ilan Symposium in Foundations of Artificial Intelligence, a leading artificial intelligence conference in Israel. In 1990 Golumbic became the founding editor-in-chief of the journal Annals of Mathematics and Artificial Intelligence, published by Springer.
Recognition
Golumbic is a fellow of the European Association for Artificial Intelligence (2005). He was elected to the Academia Europaea in 2013.
At the 2019 Bar-Ilan Symposium in Foundations of Artificial Intelligence, Golumbic was given the Lifetime Achievement and Service Award of the Israeli Association for Artificial Intelligence.
Selected publications
Golumbic is the author of books including:
Algorithmic Graph Theory and Perfect Graphs (Academic Press, 1980; 2nd ed., Elsevier, 2004)
Tolerance Graphs (with Ann Trenk, Cambridge University Press, 20
|
https://en.wikipedia.org/wiki/Ponte%20del%20Diavolo
|
Ponte del Diavolo (Italian for "Devil's bridge") is a territorial game (with connective elements similar to Go) in which two players create islands and then add bridges to connect them. It was created by Martin Ebel and published by Hans im Glück in 2007 and by Rio Grande Games in 2008. Games magazine named Ponte del Diavolo its "Best New Abstract Strategy Game" winner for 2009.
|
https://en.wikipedia.org/wiki/SRGAP2
|
SLIT-ROBO Rho GTPase-activating protein 2 (srGAP2), also known as formin-binding protein 2 (FNBP2), is a mammalian protein that in humans is encoded by the SRGAP2 gene. It is involved in neuronal migration and differentiation and plays a critical role in synaptic development, brain mass and number of cortical neurons. Downregulation of srGAP2 inhibits cell–cell repulsion and enhances cell–cell contact duration.
SRGAP2 dimerizes through its F-BAR domain. SRGAP2C, a shortened version found in early hominins and humans that only has the F-BAR domain, antagonizes its action. It slows maturation of some neurons and increases neuronal spine density.
Evolution
SRGAP2 is one of 23 genes that are known to be duplicated in humans but not other primates. SRGAP2 has been duplicated three times in the human genome in the past 3.4 million years: one duplication 3.4 million years ago (mya) called SRGAP2B, followed by two that copied SRGAP2B 2.4 mya into SRGAP2C and ~1 mya into SRGAP2D. All three duplications are also present in Denisovans and Neanderthals. They are shortened in the same manner, keeping the F-BAR domain but lacking the RhoGAP and SH3 domains. All humans possess SRGAP2C. SRGAP2C inhibits the function of the ancestral copy, SRGAP2A, by heterodimerization and allows faster migration of neurons by interfering with filopodia production as well as slowing the rate of synaptic maturation and increasing the density of synapses in the cerebral cortex. SRGAP2B is expressed at very low levels, and SRGAP2D is a pseudogene. Not all humans have SRGAP2B or SRGAP2D.
|
https://en.wikipedia.org/wiki/Object%E2%80%93relational%20mapping
|
Object–relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between a relational database and the heap of an object-oriented programming language. This creates, in effect, a virtual object database that can be used from within the programming language.
In object-oriented programming, data-management tasks act on objects that combine scalar values into objects. For example, consider an address book entry that represents a single person along with zero or more phone numbers and zero or more addresses. This could be modeled in an object-oriented implementation by a "Person object" with an attribute/field to hold each data item that the entry comprises: the person's name, a list of phone numbers, and a list of addresses. The list of phone numbers would itself contain "PhoneNumber objects" and so on. Each such address-book entry is treated as a single object by the programming language (it can be referenced by a single variable containing a pointer to the object, for instance). Various methods can be associated with the object, such as methods to return the preferred phone number, the home address, and so on.
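A minimal sketch of the address-book example above, with the hand-written object-to-table mapping that an ORM would generate automatically (the class, schema, and names are illustrative, not from any particular ORM):

```python
from dataclasses import dataclass, field
import sqlite3

# Hypothetical address-book model: a Person object owning a list of
# phone numbers, as described in the text.
@dataclass
class Person:
    id: int
    name: str
    phone_numbers: list = field(default_factory=list)

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE phone  (person_id INTEGER REFERENCES person(id), number TEXT);
""")

def save(p: Person) -> None:
    # The mapping an ORM automates: one object graph becomes rows in
    # two tables linked by a foreign key.
    db.execute("INSERT INTO person VALUES (?, ?)", (p.id, p.name))
    db.executemany("INSERT INTO phone VALUES (?, ?)",
                   [(p.id, n) for n in p.phone_numbers])

def load(person_id: int) -> Person:
    pid, name = db.execute("SELECT id, name FROM person WHERE id = ?",
                           (person_id,)).fetchone()
    numbers = [row[0] for row in db.execute(
        "SELECT number FROM phone WHERE person_id = ?", (person_id,))]
    return Person(pid, name, numbers)

save(Person(1, "Ada", ["555-0100", "555-0199"]))
print(load(1))  # Person(id=1, name='Ada', phone_numbers=['555-0100', '555-0199'])
```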
By contrast, relational databases, typically queried with SQL, group scalars into tuples, which are then enumerated in tables. Tuples and objects have some general similarity, in that they are both ways to collect values into named fields such that the whole collection can be manipulated as a single compound entity. They have many differences, though, in particular: lifecycle management (row insertion and deletion, versus garbage collection or reference counting), references to other entities (object references, versus foreign key references), and inheritance (non-existent in relational databases). As well, objects are managed on-heap and are under full control of a single process, while database tuples are shared and must incorporate locking, merging, and retry. Object–relational mapping provides automated suppo
|
https://en.wikipedia.org/wiki/FishEye%20%28software%29
|
Fisheye is a revision-control browser and search engine owned by Atlassian, Inc. Although Fisheye is a commercial product, it is freely available to open source projects and non-profit institutions. In addition to the advanced search and diff capabilities, it provides:
the notion of changelog and changesets - even if the underlying version control system (such as CVS) does not support this
direct, resource-based URLs down to line-number level
monitoring and user-level notifications via e-mail or RSS
Use in open-source projects
Atlassian approves free licenses of Fisheye (and Crucible) for community and open-source installations under certain conditions, and many major open source projects use Fisheye to provide a front-end for their source code repositories.
Integration
Fisheye supported integration with the following revision control systems:
CVS
Git
Mercurial
Perforce
Subversion
Due to the resource-based URLs, it is possible to integrate Fisheye with different issue and bug tracking systems. It also provides a REST and XML-RPC API. Fisheye also integrates with IDEs like IntelliJ IDEA via the Atlassian IDE Connector.
See also
Crucible
OpenGrok
Source code repository
Trac
ViewVC
|
https://en.wikipedia.org/wiki/Syringomycin%20E
|
Syringomycin E is a member of a class of lipodepsinonapeptide molecules that are secreted by the plant pathogen Pseudomonas syringae. Lipodepsinonapeptides comprise a closed ring of nine nonribosomally synthesized amino acids bonded to a fatty acid hydrocarbon tail. A commonly encountered pathovar (pv) of P. syringae is P. syringae pv syringae, which secretes a number of closely related forms of the molecule. Syringomycins are virulence determinants, which means that their secretion is required for the manifestation of disease symptoms on a number of stone fruit crop plants.
Syringomycins have two widely recognized mechanisms of action. They can function as detergents which are powerful enough to dissolve plant membranes at high concentrations. It is not clear whether concentrations high enough to dissolve membranes are ever reached in planta. In addition to being surfactants, aggregates of syringomycins can insert into plant cell membranes and form small pores. These pores allow the leakage of ions from the plant cell cytoplasm. Affected plant cells are unable to maintain their required levels of electrolyte and ultimately cell death and lysis occurs. It is believed that P. syringae benefits from the release of nutrients that occurs as a consequence of cellular lysis.
The biosynthesis of this class of molecules has been elucidated.
|
https://en.wikipedia.org/wiki/Richard%20H.%20Barrett
|
Richard Howard Barrett (June 10, 1915 – December 24, 1994), was an American speech-language pathologist and co-author of two early textbooks on the subject of orofacial myology: Oral Myofunctional Disorders, and Fundamentals of Orofacial Myology. Barrett was a founding father of the professional association that would become the International Association of Orofacial Myology, and he served as its president from 1977 to 1979. In his clinical practice he treated swallowing disorders related to muscular dysfunction and trained hundreds of other clinicians in the use of his techniques.
|
https://en.wikipedia.org/wiki/Behavioral%20immune%20system
|
The behavioral immune system is a phrase coined by the psychological scientist Mark Schaller to refer to a suite of psychological mechanisms that allow individual organisms to detect the potential presence of infectious parasites or pathogens in their immediate environment, and to engage in behaviors that prevent contact with those objects and individuals.
The existence of a behavioral immune system has been documented across many animal species, including humans. It is theorized that the mechanisms that comprise the behavioral immune system evolved as a crude first line of defense against disease-causing pathogens.
In humans and animals, activating a physiological immune response to pathogens is effective, but metabolically costly. Immune responses are activated at the expense of other fitness enhancing activities. Inflammation after infection can also be harmful to the body (e.g., contribute to diseases of aging). In addition to cultural adaptations to avoid pathogens, the behavioral immune system acts as a set of defense mechanisms to protect against pathogens before infection occurs.
Proximate mechanisms
Mechanisms for the behavioral immune system include sensory processes through which cues connoting the presence of parasitic infections are perceived (e.g., the smell of a foul odor, the sight of pox or pustules), as well as stimulus–response systems through which these sensory cues trigger a cascade of aversive affective, cognitive, and behavioral reactions (e.g., arousal of disgust, automatic activation of cognitions that connote the threat of disease, behavioral avoidance).
Sensory components
Early and current research on behavioral immune system activation has been focused on visual cues or triggers that elicit responses. However, recent work suggests that other sensory modalities may be at work for disease detection.
Smell
Studies show that olfactory cues of disease elicit disgust and predict pathogen avoidance behaviors. In humans, body odors from d
|
https://en.wikipedia.org/wiki/MaxCliqueDyn%20algorithm
|
The MaxCliqueDyn algorithm is an algorithm for finding a maximum clique in an undirected graph.
It is based on a basic algorithm (the MaxClique algorithm) which finds a maximum clique of bounded size; the bound is found using an improved coloring algorithm. MaxCliqueDyn extends MaxClique with dynamically varying bounds. The algorithm was designed by Janez Konc and its description was published in 2007. Compared to earlier algorithms described in that publication, MaxCliqueDyn adds an improved approximate coloring algorithm (the ColorSort algorithm) and applies tighter, more computationally expensive upper bounds on a fraction of the search space. Both improvements reduce the time needed to find a maximum clique, and the improved coloring algorithm also reduces the number of steps.
MaxClique algorithm
The MaxClique algorithm is the basic algorithm of MaxCliqueDyn algorithm. The pseudo code of the algorithm is:
procedure MaxClique(R, C) is
    Q = Ø, Qmax = Ø
    while R ≠ Ø do
        choose a vertex p with a maximum color C(p) from set R
        R := R \ {p}
        if |Q| + C(p) > |Qmax| then
            Q := Q ⋃ {p}
            if R ⋂ Γ(p) ≠ Ø then
                obtain a vertex-coloring C' of G(R ⋂ Γ(p))
                MaxClique(R ⋂ Γ(p), C')
            else if |Q| > |Qmax| then
                Qmax := Q
            end if
            Q := Q \ {p}
        else
            return
        end if
    end while
where Q is the set of vertices of the currently growing clique, Qmax is the set of vertices of the largest clique found so far, R is the set of candidate vertices, and C is its corresponding set of color classes. The MaxClique algorithm recursively searches for a maximum clique by adding vertices to and removing them from Q.
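A runnable sketch of this branch-and-bound scheme in Python, with a simple greedy coloring standing in for the bound (function and variable names are illustrative; this follows the pseudocode, not the full MaxCliqueDyn refinements):

```python
def max_clique(adj):
    """Branch-and-bound maximum clique in the spirit of MaxClique.
    adj maps each vertex to the set of its neighbours."""
    best = []

    def greedy_color(R):
        # Greedy (approximate) coloring; returns (vertex, color) pairs
        # in ascending color order, standing in for the color classes C.
        classes = []
        for v in R:
            for cls in classes:
                if not any(u in adj[v] for u in cls):
                    cls.append(v)
                    break
            else:
                classes.append([v])
        return [(v, k + 1) for k, cls in enumerate(classes) for v in cls]

    def expand(Q, R):
        nonlocal best
        colored = greedy_color(R)
        while colored:
            v, c = colored.pop()            # vertex p with maximum color C(p)
            if len(Q) + c <= len(best):     # prune: |Q| + C(p) <= |Qmax|
                return
            Q.append(v)                     # Q := Q ∪ {p}
            Rp = [u for u, _ in colored if u in adj[v]]  # R ∩ Γ(p)
            if Rp:
                expand(Q, Rp)
            elif len(Q) > len(best):
                best = Q[:]                 # Qmax := Q
            Q.pop()                         # Q := Q \ {p}

    expand([], list(adj))
    return best

# Triangle {0, 1, 2} plus a pendant vertex 3:
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(sorted(max_clique(adj)))  # [0, 1, 2]
```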
Coloring algorithm (ColorSort)
In the MaxClique algorithm the approximate coloring algorithm is used to obtain set of color classes C. The ColorSort algorithm is an improved algorithm of t
|
https://en.wikipedia.org/wiki/Minor%20%28linear%20algebra%29
|
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
Definition and illustration
First minors
If A is a square matrix, then the minor of the entry in the ith row and jth column (also called the (i, j) minor, or a first minor) is the determinant of the submatrix formed by deleting the ith row and jth column. This number is often denoted Mi,j. The (i, j) cofactor is obtained by multiplying the minor by (−1)^(i+j).
To illustrate these definitions, consider the following 3 by 3 matrix,
To compute the minor M2,3 and the cofactor C2,3, we find the determinant of the above matrix with row 2 and column 3 removed.
So the cofactor of the (2,3) entry is
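Since the example matrix itself did not survive extraction, the same computation can be sketched on a stand-in 3 × 3 matrix (the values below are illustrative):

```python
import numpy as np

A = np.array([[1, 4,  7],
              [3, 0,  5],
              [-1, 9, 11]])

# Minor M_{2,3}: determinant of A with row 2 and column 3 removed
# (1-based indices as in the text; NumPy indexing is 0-based).
sub = np.delete(np.delete(A, 1, axis=0), 2, axis=1)   # [[1, 4], [-1, 9]]
M23 = int(round(np.linalg.det(sub)))                  # 1*9 - 4*(-1) = 13
C23 = (-1) ** (2 + 3) * M23                           # cofactor = -minor here
print(M23, C23)  # 13 -13
```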
General definition
Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n−k)th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m−k rows and n−k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m−k rows and n−k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of (m choose k) · (n choose k) minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.
Let and be ordered sequences (in natural order, as it is
|
https://en.wikipedia.org/wiki/Sverdrup%20wave
|
A Sverdrup wave (also known as Poincaré wave, or rotational gravity wave) is a wave in the ocean, or large lakes, which is affected by gravity and Earth's rotation (see Coriolis effect).
For a non-rotating fluid, shallow water waves are affected only by gravity (see Gravity wave). The phase velocity of a shallow water gravity wave (c) can be written as
c = √(gH)
and the group velocity (cg) of a shallow water gravity wave as
cg = √(gH)
i.e. c = cg,
where g is the gravitational acceleration and H is the total depth. In the shallow-water limit the wavelength λ is much larger than H, and these waves are non-dispersive.
Derivation
When the fluid is rotating, gravity waves with a long enough wavelength (discussed below) will also be affected by rotational forces. The linearized, shallow-water equations with a constant rotation rate, f0, are
∂u/∂t − f0v = −g ∂h/∂x
∂v/∂t + f0u = −g ∂h/∂y
∂h/∂t + H(∂u/∂x + ∂v/∂y) = 0
where u and v are the horizontal velocities and h is the instantaneous height of the free surface. Using Fourier analysis, these equations can be combined to find the dispersion relation for Sverdrup waves:
ω² = f0² + gH(k² + l²)
where k and l are the wavenumbers associated with the two horizontal (x and y) directions, and ω is the frequency of oscillation.
Limiting Cases
There are two primary modes of interest when considering Poincaré waves:
Short wave limit: k² + l² ≫ 1/R², where R = √(gH)/f0 is the Rossby radius of deformation. In this limit ω² ≈ gH(k² + l²), and the dispersion relation reduces to the solution for a non-rotating gravity wave.
Long wave limit: k² + l² ≪ 1/R², in which ω ≈ f0, which looks like inertial oscillations driven purely by rotational forces.
Solution for the one-dimensional case
For a wave traveling in one direction (l = 0), the horizontal velocities are found to be equal to
u = U cos(kx − ωt)
v = (f0/ω) U sin(kx − ωt)
where U is the velocity amplitude. This shows that the inclusion of rotation causes the wave to develop velocity oscillations at 90° to the wave propagation, a quarter cycle out of phase. In general, these are elliptical orbits that depend on the relative strength of gravity and rotation. In the long wave limit, these are circular orbits characterized by inertial oscillations.
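A quick numerical check of the dispersion relation ω² = f0² + gH(k² + l²) and its two limits (the parameter values below are arbitrary but ocean-scale):

```python
import math

g, H = 9.81, 100.0          # gravity (m/s^2) and depth (m), illustrative
f0 = 1.0e-4                 # Coriolis parameter (1/s), mid-latitude scale
R = math.sqrt(g * H) / f0   # Rossby radius of deformation

def omega(k, l=0.0):
    # Sverdrup-wave dispersion relation
    return math.sqrt(f0**2 + g * H * (k**2 + l**2))

k_short = 1000.0 / R        # wavelength much shorter than R
k_long = 0.001 / R          # wavelength much longer than R

# Short waves behave like non-rotating gravity waves: omega ≈ sqrt(gH)*k
print(omega(k_short) / (math.sqrt(g * H) * k_short))  # ≈ 1
# Long waves reduce to inertial oscillations: omega ≈ f0
print(omega(k_long) / f0)                             # ≈ 1
```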
|
https://en.wikipedia.org/wiki/Hedonic%20index
|
A hedonic index is any price index which uses information from hedonic regression, which describes how product price could be explained by the product's characteristics. Hedonic price indexes have proved to be very useful when applied to calculate price indices for information and communication products (e.g. personal computers) and housing, because they can successfully mitigate problems such as those that arise from there being new goods to consider and from rapid changes of quality.
Motivation
In the last two decades considerable attention has been paid to methods of computing price indexes. The Boskin Commission in 1996 asserted that there were biases in the price index: traditional matched-model indexes can substantially overestimate inflation, because they cannot measure the impact of peculiarities of specific industries such as fast rotation of goods, large quality differences among products on the market, and short product life cycles. The Commission showed that the use of matched-model indexes (traditional price indexes) leads to an overestimation of inflation by 0.6% per year in the US official CPI (CPI-U). Similar results were obtained by Crawford for Canada, by Shiratsuka for Japan, and by Cunningham for the UK. By reversing the hedonic methodology, and pending further disclosure from commercial sources, this bias has also been estimated annually over five decades for the U.S.A.
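One common hedonic approach, the time-dummy method, regresses log price on product characteristics plus a period dummy; the quality-adjusted index between periods is the exponential of the dummy's coefficient. A sketch with ordinary least squares (all data and names below are made up for illustration):

```python
import numpy as np

# Model: log(price) = a + b * characteristic + d * period_dummy
ram = np.array([4, 8, 16, 4, 8, 16], dtype=float)   # product characteristic
period = np.array([0, 0, 0, 1, 1, 1], dtype=float)  # 0 = base, 1 = current
# Synthetic prices generated from a known model with d = 0.05:
log_p = 1.0 + 0.1 * ram + 0.05 * period

X = np.column_stack([np.ones_like(ram), ram, period])
coef, *_ = np.linalg.lstsq(X, log_p, rcond=None)
a, b, d = coef

# Quality-adjusted price index between the two periods:
print(round(float(np.exp(d)), 4))  # 1.0513
```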
Quality adjustments are also important for understanding national accounts deflators (see GDP deflator). In the USA, for example, growth acceleration after 1995 was driven by increased investment in ICT products, which led both to an increase in capital stock and to labor productivity growth. This increases the complexity of international comparisons of deflators. Wyckoff and Eurostat show that there is a huge dispersion in ICT deflator
|
https://en.wikipedia.org/wiki/Traffic%20generation%20model
|
A traffic generation model is a stochastic model of the traffic flows or data sources in a communication network, for example a cellular network or a computer network. A packet generation model is a traffic generation model of the packet flows or data sources in a packet-switched network. For example, a web traffic model is a model of the data that is sent or received by a user's web browser. These models are useful during the development of telecommunication technologies for analysing the performance and capacity of various protocols, algorithms and network topologies.
Application
The network performance can be analyzed by network traffic measurement in a testbed network, using a network traffic generator such as iperf, bwping and Mausezahn. The traffic generator sends dummy packets, often with a unique packet identifier, making it possible to keep track of the packet delivery in the network.
Numerical analysis using network simulation is often a less expensive approach.
An analytical approach using queueing theory may be possible for a simplified traffic model but is often too complicated if a realistic traffic model is used.
The greedy source model
A simplified packet data model is the greedy source model. It may be useful in analyzing the maximum throughput for best-effort traffic (without any quality-of-service guarantees). Many traffic generators are greedy sources.
Poisson traffic model
Another simplified traditional traffic generation model for packet data is the Poisson process, in which packet inter-arrival times (and sometimes packet lengths) are exponentially distributed. With exponential inter-arrival times and a constant packet size, the system behaves as an M/D/1 queue; when both the inter-arrival times and the packet sizes are exponentially distributed, it is an M/M/1 queue.
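As a sketch (not any particular tool's API), Poisson packet arrivals can be generated by drawing exponential inter-arrival times; adding exponential service times gives an M/M/1 queue, whose mean waiting time can be estimated with a simple workload recursion. Rates below are example values:

```python
import random

def poisson_arrivals(rate, n, rng):
    """Arrival times of a Poisson process: i.i.d. exponential inter-arrival times."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        times.append(t)
    return times

rng = random.Random(42)
lam, mu = 100.0, 150.0                    # arrival rate (packets/s), service rate
arrivals = poisson_arrivals(lam, 10_000, rng)
mean_gap = arrivals[-1] / len(arrivals)   # should be close to 1/lam = 10 ms

# M/M/1: exponential service times too. Track the workload still queued when
# each packet arrives; under FIFO this is that packet's waiting time.
w, total_wait, prev = 0.0, 0.0, 0.0
for t in arrivals:
    w = max(0.0, w - (t - prev))          # server drains the queue during the gap
    total_wait += w                       # waiting time of the packet arriving at t
    w += rng.expovariate(mu)              # enqueue this packet's service time
    prev = t
mean_wait = total_wait / len(arrivals)    # theory: lam/(mu*(mu-lam)) ~ 13.3 ms
```

With 10,000 packets the simulated mean waiting time lands close to the M/M/1 formula, which is a quick sanity check for any traffic generator built this way.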
Long-tail traffic models
However, the Poisson traffic model is memoryless, which means that it does not reflect the bursty nature of packet data, also known as the long-range d
|
https://en.wikipedia.org/wiki/Mesopredator
|
A mesopredator is a predator that occupies a mid-ranking trophic level in a food web. There is no standard definition of a mesopredator, but mesopredators are usually medium-sized carnivorous or omnivorous animals, such as raccoons, foxes, or coyotes. They are often defined by contrast with the apex predators or prey of a particular food web. Mesopredators typically prey on smaller animals.
Mesopredators vary across different ecosystems. Sometimes, the same species is a mesopredator in one ecosystem and an apex predator in another ecosystem, depending on the composition of that ecosystem. When new species are introduced into an ecosystem, the role of the mesopredator often changes; this can also happen if species are removed.
The Mesopredator Release Effect
When populations of an apex predator decrease, populations of mesopredators in the area often increase due to decreased competition and conflict with the apex predator. This is known as the mesopredator release effect, which refers to the release of mesopredators from the trophic cascade. These mesopredator outbreaks can lead to declining prey populations, destabilize ecological communities, reduce biodiversity, and even drive local extinctions.
Typically, mesopredators are in competition with apex predators for food and other resources. Apex predators reduce mesopredator populations and change mesopredator behaviors and habitat choices by preying on and intimidating mesopredators. When apex predator populations decline, mesopredators can access hunting and den areas once controlled by the apex predators, essentially assuming the role of an apex predator. However, mesopredators often occupy different ecological niches than the former apex predator and will have different effects on the structure and stability of the ecosystem.
Mesopredator outbreaks are becoming more common in fragmented habitats, which are areas where a species' preferred environment is broken up by obstacles. Fragmented habitats can be
|
https://en.wikipedia.org/wiki/Synizesis%20%28biology%29
|
Synizesis refers to a phenomenon sometimes observed in one of the subphases of meiosis. This phenomenon, sometimes referred to as a "synizetic knot", and contrasted with the chromosome "bouquet" more typically observed, is characterized by the localization of the meiotic chromosomes in a tight clump on one side of the nucleus. The term synizesis seems to have been coined by Clarence Erwin McClung in 1905.
The synizetic knot was later found to be a technical artifact induced by the strongly acidic fixatives used at the time (e.g., Flemming's strong fixative), which precipitated the delicate thread-like chromosomes of the leptotene stage of the first meiotic prophase into a darkly staining knot.
|
https://en.wikipedia.org/wiki/CIT%20Program%20Tumor%20Identity%20Cards
|
The "Cartes d'Identité des Tumeurs (CIT)" program, launched and funded by the French charity "Ligue Nationale contre le Cancer," aims to improve or develop better targeted therapeutic approaches by refining molecular knowledge of multiple types of tumors. The CIT program mainly relies on the large-scale and systematic profiling of large cohorts of tumors at various molecular levels including at least the genome, the epigenome, and the transcriptome.
See also
Precision medicine
Oncology
Cancer Research
Bioinformatics
Computational genomics
Oncogenomics
Genomics
Transcriptome
Gene expression profiling
|
https://en.wikipedia.org/wiki/Woolmer%20Lecture
|
The Woolmer lecture is the flagship lecture of the Institute of Physics and Engineering in Medicine. It takes place annually during the Institute's Medical Physics and Engineering Conference.
Dedication
The lecture is dedicated to Professor Ronald Woolmer (1908–1962) who was the first Director of the Research Department of Anaesthetics at the Royal College of Surgeons. Woolmer convened a meeting at the Royal College of Surgeons, London, to discuss the evolving field of engineering applied to medicine. It was agreed that the group should hold regular meetings and as a result the Biological Engineering Society (BES) was formed with Ronald Woolmer as the first President. Woolmer died two years after the formation of the BES and it was agreed that a memorial lecture would be sponsored in recognition of his achievements.
Lecturers
See also
List of medicine awards
|
https://en.wikipedia.org/wiki/VyOS
|
VyOS is an open source network operating system based on Debian.
VyOS provides a free routing platform that competes directly with other commercially available solutions from well-known network providers. Because VyOS is run on standard amd64 systems, it can be used as a router and firewall platform for cloud deployments.
History
After Brocade Communications stopped development of the Vyatta Core Edition of the Vyatta routing software, a small group of enthusiasts took the last Community Edition in 2013 and built an open-source fork to live on in place of the end-of-life Vyatta Core.
Features
BGP (IPv4 and IPv6), OSPF (v2 and v3), RIP and RIPng, policy-based routing.
IPsec, VTI, VXLAN, L2TPv3, L2TP/IPsec and PPTP servers, tunnel interfaces (GRE, IPIP, SIT), OpenVPN in client, server, or site-to-site modes, WireGuard.
Stateful firewall, zone-based firewall, all types of source and destination NAT (one to one, one to many, many to many).
DHCP and DHCPv6 server and relay, IPv6 RA, DNS forwarding, TFTP server, web proxy, PPPoE access concentrator, NetFlow/sFlow sensor, QoS.
VRRP for IPv4 and IPv6, ability to execute custom health checks and transition scripts; ECMP, stateful load balancing.
Built-in versioning.
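For illustration, a few of these features as they appear in VyOS's set-style configuration CLI. The syntax is roughly that of the 1.2.x (Crux) line; interface names, addresses, group names and rule numbers are examples, not defaults:

```
configure
set interfaces ethernet eth0 address '203.0.113.2/24'
set interfaces ethernet eth1 address '192.168.0.1/24'
# Many-to-one source NAT (masquerade) for the LAN
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '192.168.0.0/24'
set nat source rule 100 translation address 'masquerade'
# VRRP virtual address on the LAN side
set high-availability vrrp group LAN vrid '10'
set high-availability vrrp group LAN interface 'eth1'
set high-availability vrrp group LAN virtual-address '192.168.0.254/24'
commit
save
```

The `commit`/`save` split, together with the built-in versioning mentioned above, is what allows candidate configurations to be reviewed and rolled back.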
Releases
VyOS version 1.0.0 (Hydrogen) was released on December 22, 2013. On October 9, 2014, version 1.1.0 (Helium) was released. Both versions were based on Debian 6.0 (Squeeze) and are available as 32-bit and 64-bit images for both physical and virtual machines.
On January 28, 2019, version 1.2.0 (Crux) was released. Version 1.2.0 is based on Debian 8 (Jessie).
While version 1.0 and 1.1 were named after elements, a new naming scheme based on constellations is used from version 1.2.
Release History
VMware Support
The VyOS OVA image for VMware was released with the February 3, 2014 maintenance release.
It allows a convenient setup of VyOS on a VMware platform and includes all of the VMware tools and paravirtual
|
https://en.wikipedia.org/wiki/Security%20as%20a%20service
|
Security as a service (SECaaS) is a business model in which a service provider integrates their security services into a corporate infrastructure on a subscription basis more cost-effectively than most individuals or corporations can provide on their own when the total cost of ownership is considered. SECaaS is inspired by the "software as a service" model as applied to information security services and does not require on-premises hardware, avoiding substantial capital outlays. These security services often include authentication, anti-virus, anti-malware/spyware, intrusion detection, penetration testing, and security event management, among others.
Outsourced security licensing and delivery represent a multibillion-dollar market. SECaaS provides users with Internet security services offering protection from online threats and attacks, such as DDoS attacks, by adversaries who constantly search for access points to compromise websites. As the demand for and use of cloud computing skyrockets, users become more vulnerable to attacks because they access the Internet from new access points. SECaaS serves as a buffer against the most persistent online threats.
Categories of SECaaS
The Cloud Security Alliance (CSA) is an organization that is dedicated to defining and raising awareness of secure cloud computing. In doing so, the CSA has defined the following categories of SECaaS tools and created a series of technical and implementation guidance documents to help businesses implement and understand SECaaS. These categories include:
Business continuity and disaster recovery (BCDR or BC/DR)
Continuous monitoring
Data loss prevention (DLP)
Email security
Encryption
Identity and access management (IAM)
Intrusion management
Network security
Security assessment
Penetration testing
Security information and event management (SIEM)
Vulnerability scanning
Web security
SECaaS models
SECaaS is typically offered in several forms:
Subscription
Payment for utilized services
Freeware,
|
https://en.wikipedia.org/wiki/Fractional%20Pareto%20efficiency
|
In economics and computer science, Fractional Pareto efficiency or Fractional Pareto optimality (fPO) is a variant of Pareto efficiency used in the setting of fair allocation of discrete objects. An allocation of objects is called discrete if each item is wholly allocated to a single agent; it is called fractional if some objects are split among two or more agents. A discrete allocation is called Pareto-efficient (PO) if it is not Pareto-dominated by any discrete allocation; it is called fractionally Pareto-efficient (fPO) if it is not Pareto-dominated by any discrete or fractional allocation. So fPO is a stronger requirement than PO: every fPO allocation is PO, but not every PO allocation is fPO.
Formal definitions
There is a set of n agents and a set of m objects. An allocation is determined by an n-by-m matrix z, where each element z[i,o] is a real number between 0 and 1. It represents the fraction that agent i gets from object o. For every object o, the sum of all elements in column o equals 1, since the entire object is allocated.
An allocation is called discrete or integral if all its elements z[i,o] are either 0 or 1; that is, each object is allocated entirely to a single agent.
An allocation y is called a Pareto improvement of an allocation z, if the utility of all agents in y is at least as large as in z, and the utility of some agents in y is strictly larger than in z. In this case, we also say that y Pareto-dominates z.
If an allocation z is not Pareto-dominated by any discrete allocation, then it is called discrete Pareto-efficient, or simply Pareto-efficient (usually abbreviated PO).
If z is not Pareto-dominated by any allocation at all - whether discrete or fractional - then it is called fractionally Pareto-efficient (usually abbreviated fPO).
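The definitions above can be checked by brute force for small instances. The sketch below uses the valuations from the Examples section (Alice: 3, 2; George: 4, 1) and a coarse grid over fractional allocations; it confirms that giving item 0 to Alice and item 1 to George is PO but not fPO:

```python
import itertools

# Valuations: Alice values the two items at 3 and 2, George at 4 and 1.
vals_alice, vals_george = [3, 2], [4, 1]

def utilities(a0, a1):
    """Utility profile when Alice receives fraction a0 of item 0 and a1 of item 1."""
    u_alice = vals_alice[0] * a0 + vals_alice[1] * a1
    u_george = vals_george[0] * (1 - a0) + vals_george[1] * (1 - a1)
    return u_alice, u_george

def dominates(u, v):
    """True if profile u Pareto-dominates profile v."""
    return all(x >= y for x, y in zip(u, v)) and any(x > y for x, y in zip(u, v))

base = utilities(1.0, 0.0)  # z: item 0 wholly to Alice, item 1 wholly to George -> (3, 1)

# No discrete allocation dominates z, so z is PO ...
discrete = [utilities(a0, a1) for a0, a1 in itertools.product([0.0, 1.0], repeat=2)]
po = not any(dominates(u, base) for u in discrete)

# ... but some fractional allocations do, so z is not fPO.
grid = [i / 20 for i in range(21)]
fractional_improvements = [
    (a0, a1) for a0, a1 in itertools.product(grid, repeat=2)
    if dominates(utilities(a0, a1), base)
]
```

For instance, giving Alice 80% of item 0 and 50% of item 1 yields utilities (3.4, 1.3), which dominates (3, 1). Grid search is only illustrative; in general, testing fPO is a linear-programming problem.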
Examples
PO does not imply fPO
Suppose there are two agents and two items. Alice values the items at 3, 2 and George values them at 4, 1. Let z be the allocation giving the first item to Alice and the s
|
https://en.wikipedia.org/wiki/Call%20super
|
Call super is a code smell or anti-pattern of some object-oriented programming languages: a class stipulates that, in a derived subclass, the user is required to override a method and call the overridden method back at a particular point. The overridden method may be intentionally incomplete, relying on the overriding method to augment its functionality in a prescribed manner. However, the fact that the language itself may not be able to enforce all conditions prescribed on this call is what makes this an anti-pattern.
Description
In object-oriented programming, users can inherit the properties and behaviour of a superclass in subclasses. A subclass can override methods of its superclass, substituting its own implementation of the method for the superclass's implementation. Sometimes the overriding method will completely replace the corresponding functionality in the superclass, while in other cases the superclass's method must still be called from the overriding method. Therefore, most programming languages require that an overriding method explicitly call the overridden method on the superclass for the latter to be executed.
The call super anti-pattern relies on the users of an interface or framework to derive a subclass from a particular class, override a certain method and require the overridden method to call the original method from the overriding method:
This is often required, since the superclass must perform some setup tasks for the class or framework to work correctly, or since the superclass's main task (which is performed by this method) is only augmented by the subclass.
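The pattern and its usual remedy can be illustrated with a minimal sketch (class names are invented):

```python
class EventHandler:
    """Base class that *requires* subclasses to call back into it (call super)."""
    def handle(self, event):
        # Mandatory bookkeeping -- subclasses are expected to invoke this
        # via super().handle(event), but nothing enforces that they do.
        self.log = getattr(self, "log", [])
        self.log.append(event)

class ClickHandler(EventHandler):
    def handle(self, event):
        super().handle(event)   # easy to forget; omitting it silently breaks the contract
        self.last = event

# The usual remedy is a template method: the base class keeps control of the
# mandatory work and exposes a hook that subclasses override instead.
class SafeEventHandler:
    def handle(self, event):    # not meant to be overridden
        self.log = getattr(self, "log", [])
        self.log.append(event)
        self.do_handle(event)   # subclass hook

    def do_handle(self, event):
        pass

class SafeClickHandler(SafeEventHandler):
    def do_handle(self, event):
        self.last = event
```

In the second version the language needs to enforce nothing: a subclass that forgets everything still gets the mandatory logging, because the base class never hands over control of it.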
The anti-pattern is the requirement of calling the parent. There are many examples in real code where the method in the subclass may still want the superclass's functionality, usually where it is only augmenting the parent's functionality. If it still has to call the parent class even if it is fully replacing the functionality, the a
|
https://en.wikipedia.org/wiki/Noether%20identities
|
In mathematics, Noether identities characterize the degeneracy of a Lagrangian system. Given a Lagrangian system and its Lagrangian L, Noether identities can be defined as a differential operator whose kernel contains the range of the Euler–Lagrange operator of L. Any Euler–Lagrange operator obeys Noether identities, which therefore are separated into the trivial and non-trivial ones. A Lagrangian L is called degenerate if the Euler–Lagrange operator of L satisfies non-trivial Noether identities. In this case the Euler–Lagrange equations are not independent.
Noether identities need not be independent, but satisfy first-stage Noether identities, which are subject to second-stage Noether identities, and so on. Higher-stage Noether identities also are separated into trivial and non-trivial ones. A degenerate Lagrangian is called reducible if there exist non-trivial higher-stage Noether identities. Yang–Mills gauge theory and gauge gravitation theory exemplify irreducible Lagrangian field theories.
Different variants of second Noether’s theorem state the one-to-one correspondence between the non-trivial reducible Noether identities and the non-trivial reducible gauge symmetries. Formulated in a very general setting, second Noether’s theorem associates to the Koszul–Tate complex of reducible Noether identities, parameterized by antifields, the BRST complex of reducible gauge symmetries parameterized by ghosts. This is the case of covariant classical field theory and Lagrangian BRST theory.
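A standard sketch-level illustration is free electromagnetism, whose gauge symmetry forces a non-trivial Noether identity on the Euler–Lagrange operator:

```latex
L = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu},
\qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
\qquad \mathcal{E}^{\mu} = \frac{\delta L}{\delta A_\mu} = \partial_\nu F^{\nu\mu}.
% Antisymmetry of F makes the divergence of the Euler--Lagrange operator
% vanish identically, which is a non-trivial Noether identity:
\partial_\mu \mathcal{E}^{\mu} = \partial_\mu \partial_\nu F^{\nu\mu} \equiv 0,
% reflecting the gauge symmetry A_\mu \mapsto A_\mu + \partial_\mu \chi.
```

The four equations \(\mathcal{E}^{\mu}=0\) are thus not independent, so the Lagrangian is degenerate; since the identity is not itself subject to further non-trivial identities, the theory is irreducible, as stated above for Yang–Mills theory.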
See also
Noether's second theorem
Emmy Noether
Lagrangian system
Variational bicomplex
Gauge symmetry (mathematics)
|
https://en.wikipedia.org/wiki/Tepidanaerobacter
|
Tepidanaerobacter is a genus of anaerobic, moderately thermophilic, syntrophic bacteria from the family Tepidanaerobacteraceae.
|
https://en.wikipedia.org/wiki/Dynamic%20Monte%20Carlo%20method
|
In chemistry, dynamic Monte Carlo (DMC) is a Monte Carlo method for modeling the dynamic behaviors of molecules by comparing the rates of individual steps with random numbers. It is essentially the same as kinetic Monte Carlo. Unlike the Metropolis Monte Carlo method, which has been employed to study systems at equilibrium, the DMC method is used to investigate non-equilibrium processes such as reactions, diffusion, and so forth (Meng and Weinberg 1994). The method is mainly applied to analyze the behavior of adsorbates on surfaces.
There are several well-known methods for performing DMC simulations, including the First Reaction Method (FRM) and the Random Selection Method (RSM). Although the FRM and RSM give the same results for a given model, their demands on computer resources differ depending on the system being studied.
In the FRM, the reaction whose time is minimum on the event list is advanced. In the event list, the tentative times for all possible reactions are stored. After the selection of one event, the system time is advanced to the reaction time, and the event list is recalculated. This method is efficient in computation time because the reaction always occurs in one event. On the other hand, it consumes a lot of computer memory because of the event list. Therefore, it is difficult to apply to large-scale systems.
The RSM decides whether the reaction of the selected molecule proceeds or not by comparing the transition probability with a random number. In this method, the reaction does not necessarily proceed in one event, so it needs significantly more computation time than FRM. However, this method saves computer memory because it does not use an event list. Large-scale systems are able to be calculated by this method.
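The core of an FRM step can be sketched as follows. Event names and rates are invented for illustration, and for simplicity the tentative times are redrawn at every step; by the memorylessness of the exponential distribution this is statistically equivalent for constant rates, though a full FRM implementation would keep the event list between steps and recalculate only affected entries:

```python
import random

def first_reaction_step(rates, rng):
    """One FRM step: draw a tentative exponential time for every possible
    event and advance the one with the smallest time."""
    times = {name: rng.expovariate(rate) for name, rate in rates.items()}
    event = min(times, key=times.get)
    return event, times[event]

# Hypothetical surface processes with fixed rates (events per unit time)
rates = {"adsorption": 2.0, "desorption": 0.5, "diffusion": 1.0}
rng = random.Random(1)
t, counts = 0.0, {name: 0 for name in rates}
for _ in range(10_000):
    event, dt = first_reaction_step(rates, rng)
    t += dt                 # system time advances to the earliest reaction time
    counts[event] += 1
# Events occur roughly in proportion to their rates (2 : 0.5 : 1)
```

The persistent event list is precisely what makes FRM fast per event but memory-hungry, as described above.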
See also
Hybrid Monte Carlo
|
https://en.wikipedia.org/wiki/European%20Society%20for%20Evolutionary%20Biology
|
The European Society for Evolutionary Biology (ESEB) was founded in 1987 in Basel (Switzerland), with around 450 evolutionary biologists attending the inaugural congress. It is an academic society that brings together more than 1500 evolutionary biologists from across Europe and beyond. The founding of the society was closely linked with the launch of the society's journal, the Journal of Evolutionary Biology, with the first issue appearing in 1988. ESEB aims to support the study of evolution. Besides publishing the journal and co-publishing Evolution Letters, the society organises a biennial congress and supports other events to promote advances in evolutionary biology. ESEB also supports activities to promote a scientific view of evolution in research and education.
Its objectives are to "Support the study of organic evolution and the integration of those scientific fields that are concerned with evolution: molecular and microbial evolution, behaviour, genetics, ecology, life histories, development, paleontology, systematics and morphology."
ESEB supports young researchers through sponsoring the annual EMPSEB (European Meeting of PhD Students in Evolutionary Biology) research conference for Ph.D. students.
Presidents
Source: ESEB
1987–1989 : Arthur Cain (first president)
1989–1991 : Bengt Bengtsson
1991–1993 : John Maynard Smith
1993–1995 : John L. Harper
1995–1997 : Wim Scharloo
1997–1999 : Stephen Stearns
1999–2001 : Godfrey Hewitt
2001–2003 : Deborah Charlesworth
2003–2005 : Rolf Hoekstra
2005–2007 : Paul Brakefield
2007–2009 : Isabelle Olivieri
2009–2011 : Siv Andersson
2011–2013 : Brian Charlesworth
2013–2015 : Roger Butlin
2015–2017 : Laurent Keller
2017–2019 : Nina Wedell
2019–2021 : Ophélie Ronce
Further reading
|
https://en.wikipedia.org/wiki/Union%20Medal%20of%20the%20British%20Ornithological%20Union
|
The Union Medal is a medal of the British Ornithologists' Union, given "in recognition of eminent services to ornithology and to the Union and ornithology." From 2019 it is to be known as the "Janet Kear Union Medal", after Janet Kear, with a new medal design.
In his history of the BOU, History of the Union, Guy Mountfort wrote:
The BOU introduced the Godman-Salvin Medal, awarded "to an individual as a signal honour for distinguished ornithological work", and nowadays the Union Medal recognises people "who have given distinguished service to the Union itself".
Medallists
Medallists include:
1912 Walter Goodfellow, C. H. B. Grant
1948 Willoughby P. Lowe
1953 A. W. Boyd
1959 W. B. Alexander, E. A. Armstrong, David Armitage Bannerman, Evelyn Baxter, Peter M Scott
1960 C. W. Benson
1967 Salim Ali
1968 J. M. M. Fisher, Charles R. S. Pitman
1969 C. W. Mackworth-Praed
1970 L. H. Brown
1971 Stephen Marchant, Henry Neville Southern, Bernard Stonehouse
1972 Derek Goodwin, Norman W. Moore
1973 Beryl Patricia Hall
1975 Karel H. Voous
1976 Ken Williamson, G. C. Shortridge, Alexander F. R. Wollaston
1979 Ken E. L. Simmons
1980 Geoffrey V. T. Matthews
1984 Stanley Cramp, Philip A. D. Hollom, Guy Mountfort
1987 Ian Newton
1988 James F. Monk
1989 Robert Spencer
1991 F. B. M. Campbell
1992 Michael Philip Harris
1993 Ronald M. Lockley
1995 R. Tory Peterson
1996 Christopher J. Mead
1997 Robert A. F. Gillmor, John Sidney Ash
1998 Janet Kear
2004 Gwen Bonham
2008 Christopher J. Feare
2011 Peter Jones
2012 Andy Gosler
2013 Neil J. Bucknell
2015 John Croxall
2016 Chris Perrins
2017 Prof Jenny Gill (former BOU President)
See also
Alfred Newton Lecture
Godman-Salvin Medal
List of ornithology awards
|
https://en.wikipedia.org/wiki/Silent%20speech%20interface
|
A silent speech interface is a device that allows speech communication without using the sound made when people vocalize their speech sounds. As such it is a type of electronic lip reading. It works by the computer identifying the phonemes that an individual pronounces from non-auditory sources of information about their speech movements. These are then used to recreate the speech using speech synthesis.
Information sources
Silent speech interface systems have been created using ultrasound and optical camera input of tongue and lip movements. Electromagnetic devices are another technique for tracking tongue and lip movements. The detection of speech movements by electromyography of speech articulator muscles and the larynx is another technique. Another source of information is the vocal tract resonance signals that get transmitted through bone conduction, called non-audible murmurs.
They have also been created as a brain–computer interface using brain activity in the motor cortex obtained from intracortical microelectrodes.
Uses
Such devices are created as aids to those unable to create the sound phonation needed for audible speech such as after laryngectomies. Another use is for communication when speech is masked by background noise or distorted by self-contained breathing apparatus. A further practical use is where a need exists for silent communication, such as when privacy is required in a public place, or hands-free data silent transmission is needed during a military or security operation.
In 2002, the Japanese company NTT DoCoMo announced it had created a silent mobile phone using electromyography and imaging of lip movement. "The spur to developing such a phone," the company said, "was ridding public places of noise," adding that, "the technology is also expected to help people who have permanently lost their voice." The feasibility of using silent speech interfaces for practical communication has since then been shown.
Recent Development and Research
A
|
https://en.wikipedia.org/wiki/Greenworld
|
Greenworld (Japanese: グリーンワールド Hepburn: Gurīn wārudo) is a 2010 speculative evolution and science fiction book written by Scottish geologist and paleontologist Dougal Dixon and primarily illustrated by Dixon himself, alongside a few images by other artists. Greenworld features a fictional alien planet of the same name and a diverse biosphere of alien organisms. Greenworld has so far only been published in Japan, where it was released in two volumes.
The premise of Greenworld follows human colonisation of the alien planet over the course of a thousand years, chronicling mankind's disastrous impact on Greenworld's ecosystems, similar to how humans today are impacting Earth and its life. Greenworld and its creatures were originally designed by Dixon as a design exercise for his local science fiction group and the planet and its organisms first appeared in a 1992 episode of the Channel 4 series Equinox, followed by appearances in various other media, including the 1997 programme Natural History of an Alien.
Greenworld's premise is similar to, and repurposed from, Dixon's original idea for his book Man After Man (1990), which would have involved humans time-travelling 50 million years into the future to colonize the future biosphere he had developed for After Man (1981). The version of Man After Man that was eventually published was considerably different from Dixon's original concept.
Summary
Greenworld is a hypothetical Earth-like exoplanet with a diverse and thriving biosphere. All animal-analogous organisms on Greenworld are descended from a radially symmetrical six-legged starfish-like animal. Animals on Greenworld secondarily developed bilateral symmetry (which is what is seen in most animals on Earth), developing into two major groups; “sulcosyms” in which the plane of symmetry lies between the legs (meaning they have three pairs of limbs) and “brachiosyms” in which the plane of symmetry has led to the formation of one “arm” at each of its ends (meaning two
|
https://en.wikipedia.org/wiki/Crithidia%20luciliae
|
Crithidia luciliae is a flagellate parasite that uses the housefly, Musca domestica, as a host. As part of the family Trypanosomatidae, it is characterised by the presence of a kinetoplast, a complex network of interlocking circular double-stranded DNA (dsDNA) molecules. The presence of the kinetoplast makes this organism important in the diagnosis of systemic lupus erythematosus (SLE). By using C. luciliae as a substrate for immunofluorescence, the organelle can be used to detect anti-dsDNA antibodies, a common feature of the disease.
Taxonomy
C. luciliae is a eukaryotic single-cell protozoan. The family Trypanosomatidae belongs to the order Kinetoplastida and is characterised by the presence of the kinetoplast, a network of interlocking circular DNA in a large mitochondrion. All organisms in Kinetoplastida are parasitic, and the host organism for C. luciliae is the housefly, Musca domestica.
Role in systemic lupus erythematosus diagnosis
The kinetoplast found in C. luciliae allows them to be used for the detection of anti-dsDNA antibodies, a type of anti-nuclear antibody. Anti-nuclear antibodies are a common feature in SLE, and anti-dsDNA antibodies are highly specific for the disease. The high concentration of dsDNA and the absence of human nuclear antigens in the kinetoplast provides a specific substrate for the detection of anti-dsDNA antibodies.
Purine nucleotide and nucleobase uptake
As a parasitic protozoan, C. luciliae lacks the ability to biosynthetically produce purine bases and therefore needs to salvage them from the surrounding environment. Three transport systems are used for the uptake of bases from the host organism: one for the uptake of adenosine and its analogues; one for guanosine, its analogues and inosine; and one for hypoxanthine, adenine and adenosine.
|
https://en.wikipedia.org/wiki/Carbon%20%28API%29
|
Carbon was one of two primary C-based application programming interfaces (APIs) developed by Apple for the macOS (formerly Mac OS X and OS X) operating system. Carbon provided a good degree of backward compatibility for programs that ran on Mac OS 8 and 9. Developers could use the Carbon APIs to port (“carbonize”) their “classic” Mac applications and software to the Mac OS X platform with little effort, compared to porting the app to the entirely different Cocoa system, which originated in OPENSTEP. With the release of macOS 10.15 Catalina, the Carbon API was officially discontinued and removed, leaving Cocoa as the sole primary API for developing macOS applications.
Carbon was an important part of Apple's strategy for bringing Mac OS X to market, offering a path for quick porting of existing software applications, as well as a means of shipping applications that would run on either Mac OS X or the classic Mac OS. As the market increasingly moved to the Cocoa-based frameworks, especially after the release of iOS, the need for a porting library diminished. Apple did not create a 64-bit version of Carbon while updating their other frameworks in the 2007 time-frame, and eventually deprecated the entire API in OS X 10.8 Mountain Lion, which was released on July 24, 2012.
History
Classic Mac OS programming
The original Mac OS used Pascal as its primary development platform, and the APIs were heavily based on Pascal's call semantics. Much of the Macintosh Toolbox consisted of procedure calls, passing information back and forth between the API and program using a variety of data structures based on Pascal's variant record concept.
Over time, a number of object libraries evolved on the Mac, notably the Object Pascal library MacApp and the THINK C Think Class Library, and later versions of MacApp and CodeWarrior's PowerPlant in C++. By the mid-1990s, most Mac software was written in C++ using CodeWarrior.
Rhapsody
With the purchase of NeXT in late 1996, Apple devel
|
https://en.wikipedia.org/wiki/Positive%20energy%20theorem
|
The positive energy theorem (also known as the positive mass theorem) refers to a collection of foundational results in general relativity and differential geometry. Its standard form, broadly speaking, asserts that the gravitational energy of an isolated system is nonnegative, and can only be zero when the system has no gravitating objects. Although these statements are often thought of as being primarily physical in nature, they can be formalized as mathematical theorems which can be proven using techniques of differential geometry, partial differential equations, and geometric measure theory.
Richard Schoen and Shing-Tung Yau, in 1979 and 1981, were the first to give proofs of the positive mass theorem. Edward Witten, in 1981, gave the outlines of an alternative proof, which were later filled in rigorously by mathematicians. Witten and Yau were awarded the Fields Medal in mathematics in part for their work on this topic.
An imprecise formulation of the Schoen-Yau / Witten positive energy theorem states the following:
The meaning of these terms is discussed below. There are alternative and non-equivalent formulations for different notions of energy-momentum and for different classes of initial data sets. Not all of these formulations have been rigorously proven, and it is currently an open problem whether the above formulation holds for initial data sets of arbitrary dimension.
Historical overview
The original proof of the theorem for ADM mass was provided by Richard Schoen and Shing-Tung Yau in 1979 using variational methods and minimal surfaces. Edward Witten gave another proof in 1981 based on the use of spinors, inspired by positive energy theorems in the context of supergravity. An extension of the theorem for the Bondi mass was given by Ludvigsen and James Vickers, Gary Horowitz and Malcolm Perry, and Schoen and Yau.
Gary Gibbons, Stephen Hawking, Horowitz and Perry proved extensions of the theorem to asymptotically anti-de Sitter spacetimes and to Ei
|
https://en.wikipedia.org/wiki/Modified%20Chee%27s%20medium
|
In cellular biology and microbiology, modified Chee's medium (otherwise known as "MCM") is used to cultivate cells and bacteria. It uses various additives (fatty acids, albumins and selenium) to facilitate cellular and bacterial growth.
|
https://en.wikipedia.org/wiki/Colorado%20Time%20Systems
|
Colorado Time Systems (CTS) is an American company based in Loveland, Colorado that designs, manufactures, sells, and services aquatic timing systems, scoreboards, LED video displays, and related products.
History
Colorado Time Systems was born in the Test & Measurement division of Hewlett-Packard (HP). HP wanted to explore opportunities in the sports timing industry and chose aquatics because it required such precise measurement. In 1972, a group of HP engineers spun off from HP and founded CTS. From those very specific aquatic beginnings, an extensive and multi-faceted sports timing and display company was born. Throughout the years, CTS has maintained a steadfast commitment to providing cutting-edge scoring and display products for all venues. In July 2011, Colorado Time Systems was acquired by PlayCore, based in Chattanooga, TN. Colorado Time Systems is part of the Everactive Brands division.
Colorado Memory Systems
In 1985, Colorado Time Systems co-founder William "Bill" Beierwaltes founded Colorado Memory Systems (CMS) as a division of Colorado Time Systems. Whereas Colorado Time Systems focused on timekeeping displays, Beierwaltes founded Colorado Memory Systems chiefly to market data storage products for the burgeoning personal computer industry of the 1980s. Colorado Memory Systems later became a major player in the field of quarter-inch cartridge manufacturing with its Jumbo line of drives and spun off from Colorado Time Systems around 1990. In 1992, CMS was acquired by Hewlett-Packard.
Sponsorship partners
Colorado Time Systems is the official timing, scoring and display partner to: USA Water Polo, US Synchronized Swimming, Mexican Swimming Federation, FINA Junior World Swimming Championship, American Swimming Coaches Association, and China Swimming Association.
Important Mentions
Modern Marvels - History Channel
On December 23, 2008, the History Channel series, Modern Marvels, aired an episode titled "Measure It!". This episode discussed various moder
|
https://en.wikipedia.org/wiki/Deep%20cervical%20fascia
|
The deep cervical fascia (or fascia colli in older texts) lies under cover of the platysma, and invests the muscles of the neck; it also forms sheaths for the carotid vessels, and for the structures situated in front of the vertebral column. Its attachment to the hyoid bone prevents the formation of a dewlap.
The investing portion of the fascia is attached behind to the ligamentum nuchæ and to the spinous process of the seventh cervical vertebra.
The alar fascia is a portion of the deep cervical fascia.
Divisions
The deep cervical fascia is often divided into a superficial, middle, and deep layer.
The superficial layer is also known as the investing layer of deep cervical fascia. It envelops the trapezius, sternocleidomastoid, and muscles of facial expression. It also contains the submandibular and parotid salivary glands as well as the muscles of mastication (the masseter, pterygoid, and temporalis muscles).
The middle layer is also known as the pretracheal fascia. It envelops the strap muscles (sternohyoid, sternothyroid, thyrohyoid, and omohyoid muscles). It also surrounds the pharynx, larynx, trachea, esophagus, thyroid, parathyroids, buccinators, and constrictor muscles of the pharynx.
The deep layer is also known as the prevertebral fascia. It surrounds the paraspinous muscles and cervical vertebrae.
The carotid sheath is also considered a component of the deep cervical fascia.
Superior extent of the investing fascia
Above, the fascia is attached to the superior nuchal line of the occipital bone, to the mastoid process of the temporal bone, and to the whole length of the inferior border of the body of the mandible.
Opposite the angle of the mandible the fascia is very strong, and binds the anterior edge of the sternocleidomastoideus firmly to that bone.
Between the mandible and the mastoid process it ensheathes the parotid gland—the layer which covers the gland extends upward under the name of the parotideomasseteric fascia and is fixed to the zygoma
|
https://en.wikipedia.org/wiki/Petkau%20effect
|
The Petkau effect is an early counterexample to linear-effect assumptions usually made about radiation exposure. It was found by Dr. Abram Petkau at the Atomic Energy of Canada Whiteshell Nuclear Research Establishment, Manitoba, and published in Health Physics in March 1972. The term "Petkau effect" was coined by Swiss nuclear hazards commentator Ralph Graeub in 1985 in his book Der Petkau-Effekt und unsere strahlende Zukunft (The Petkau Effect and our Radiating Future).
Petkau had been measuring, in the usual way, the radiation dose that would rupture a simulated artificial cell membrane. He found that 3500 rads delivered in hours (26 rad/min = 15.5 Sv/h) would do it. Then, almost by chance, Petkau repeated the experiment with much weaker radiation and found that 0.7 rad delivered in hours (1 millirad/min = 0.61 mSv/h) also ruptured the membrane. This was counter to the prevailing assumption of a linear relationship between total dose or dose rate and the consequences.
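The dose-rate figures quoted above can be cross-checked with a short unit conversion. As an illustration, a radiation weighting factor of 1 is assumed, so that 1 rad corresponds to roughly 0.01 Sv:

```python
RAD_TO_SV = 0.01  # 1 rad = 0.01 Gy; weighting factor of 1 assumed, so ~0.01 Sv

def rad_per_min_to_sv_per_h(rate_rad_per_min):
    """Convert a dose rate from rad/min to Sv/h."""
    return rate_rad_per_min * RAD_TO_SV * 60

# High dose rate: 26 rad/min -> 15.6 Sv/h (the article quotes 15.5 Sv/h)
high = rad_per_min_to_sv_per_h(26)

# Low dose rate: 1 millirad/min -> 0.6 mSv/h (the article quotes 0.61 mSv/h)
low_mSv = rad_per_min_to_sv_per_h(0.001) * 1000

# Time needed to deliver each total dose at the given rate
hours_high = 3500 / 26 / 60   # ~2.2 hours for 3500 rad at 26 rad/min
hours_low = 0.7 / 0.001 / 60  # ~11.7 hours for 0.7 rad at 1 millirad/min
```

The striking point is the ratio: the low-rate exposure ruptured the membrane with a total dose roughly 5000 times smaller than the high-rate exposure.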
The radiation was ionizing, and produced negative oxygen ions (free radicals). Those ions were more damaging to the simulated membrane at lower concentrations than at higher ones (a somewhat counter-intuitive result in itself) because at higher concentrations they more readily recombine with each other instead of interfering with the membrane. The ion concentration correlated directly with the radiation dose rate, and the composition had non-monotonic consequences.
Radio-protective effects of superoxide dismutase
Petkau conducted further experiments with simulated cells in 1976 and found that the enzyme superoxide dismutase protected the cells from free radicals generated by ionizing radiation, obviating the effects seen in his earlier experiment. Petkau also discovered that superoxide dismutase was elevated in the leukocytes (white blood cells) in a sub-population of nuclear workers occupationally exposed to elevated radiation (ca. 10 mSv in 6 months), further supporting the hypothesis that superoxide dismuta
|
https://en.wikipedia.org/wiki/Long%20non-coding%20RNA
|
Long non-coding RNAs (long ncRNAs, lncRNA) are a type of RNA, generally defined as transcripts of more than 200 nucleotides that are not translated into protein. This arbitrary limit distinguishes long ncRNAs from small non-coding RNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. Given that some lncRNAs have been reported to have the potential to encode small proteins or micro-peptides, the latest definition of lncRNA is a class of RNA molecules of over 200 nucleotides that have no or limited coding capacity. Long intervening/intergenic noncoding RNAs (lincRNAs) are sequences of lncRNA which do not overlap protein-coding genes.
Long non-coding RNAs include intergenic lincRNAs, intronic ncRNAs, and sense and antisense lncRNAs, each type showing different genomic positions in relation to genes and exons.
Abundance
In 2007 a study found only one-fifth of transcription across the human genome is associated with protein-coding genes, indicating at least four times more long non-coding than coding RNA sequences. Large-scale complementary DNA (cDNA) sequencing projects such as FANTOM reveal the complexity of this transcription. The FANTOM3 project identified ~35,000 non-coding transcripts that bear many signatures of messenger RNAs, including 5' capping, splicing, and poly-adenylation, but have little or no open reading frame (ORF). This number represents a conservative lower estimate, since it omitted many singleton transcripts and non-polyadenylated transcripts (tiling array data shows more than 40% of transcripts are non-polyadenylated). Identifying ncRNAs within these cDNA libraries is challenging since it can be difficult to distinguish protein-coding transcripts from non-coding transcripts. It has been suggested through multiple studies that testis and neural tissues express the greatest amount of long non-coding RNAs of any tissue type. Using FANTOM5, 27,919 long
|
https://en.wikipedia.org/wiki/Appeal%20to%20motive
|
Appeal to motive is a pattern of argument which consists in challenging a thesis by calling into question the motives of its proposer. It can be considered as a special case of the ad hominem circumstantial argument. As such, this type of argument may be an informal fallacy.
A common feature of appeals to motive is that only the possibility of a motive (however small) is shown, without showing the motive actually existed or, if the motive did exist, that the motive played a role in forming the argument and its conclusion. Indeed, it is often assumed that the mere possibility of motive is evidence enough.
Examples
"That website recommended ACME's widget over Megacorp's widget. But the website also displays ACME advertising on their site, so they must be biased in their review." The thesis in this case is the website's evaluation of the relative merits of the two products.
"The referee is a New York City native, so his refereeing was obviously biased towards New York teams." In this case, the thesis consists of the referee's rulings.
"My opponent argues on and on in favor of allowing that mall to be built in the center of town. What he won't tell you is that his daughter and her friends plan to shop there once it's open."
See also
Bulverism
Call-out culture
Conflict of interest
Cui bono
Race card
Shooting the messenger
Woman card
|
https://en.wikipedia.org/wiki/MDA%20framework
|
In game design the Mechanics-Dynamics-Aesthetics (MDA) framework is a tool used to analyze games. It formalizes the consumption of games by breaking them down into three components: Mechanics, Dynamics and Aesthetics. These three words have been used informally for many years to describe various aspects of games, but the MDA framework provides precise definitions for these terms and seeks to explain how they relate to each other and influence the player's experience.
Mechanics are the base components of the game - its rules, every basic action the player can take in the game, the algorithms and data structures in the game engine etc.
Dynamics are the run-time behavior of the mechanics acting on player input and "cooperating" with other mechanics.
Aesthetics are the emotional responses evoked in the player.
There are many types of aesthetics, including but not limited to the following eight stated by Hunicke, LeBlanc and Zubek:
Sensation (Game as sense-pleasure): Player enjoys memorable audio-visual effects.
Fantasy (Game as make-believe): Imaginary world.
Narrative (Game as drama): A story that drives the player to keep coming back
Challenge (Game as obstacle course): Urge to master something. Boosts a game's replayability.
Fellowship (Game as social framework): A community of which the player is an active part. Almost exclusive to multiplayer games.
Discovery (Game as uncharted territory): Urge to explore game world.
Expression (Game as self-discovery): Own creativity. For example, creating character resembling player's own avatar.
Submission (Game as pastime): Connection to the game, as a whole, despite constraints.
The paper also mentions a ninth kind of fun, competition. The paper seeks to better specify terms such as 'gameplay' and 'fun', and to extend the vocabulary of game studies, suggesting a non-exhaustive taxonomy of eight different types of play. The framework uses these definitions to demonstrate the incentivising and disincentivising propert
|
https://en.wikipedia.org/wiki/Animal%20geography
|
Animal geography is a subfield of the nature–society/human–environment branch of geography as well as a part of the larger, interdisciplinary umbrella of human–animal studies (HAS). Animal geography is defined as the study of "the complex entanglings of human–animal relations with space, place, location, environment and landscape" or "the study of where, when, why and how nonhuman animals intersect with human societies". Recent work advances these perspectives to argue about an ecology of relations in which humans and animals are enmeshed, taking seriously the lived spaces of animals themselves and their sentient interactions with not just human but other nonhuman bodies as well.
The Animal Geography Specialty Group of the Association of American Geographers was founded in 2009 by Monica Ogra and Julie Urbanik, and the Animal Geography Research Network was founded in 2011 by Daniel Allen.
Overview
First wave
The first wave of animal geography, known as zoogeography, came to prominence as a geographic subfield from the late 1800s through the early part of the 20th century. During this time the study of animals was seen as a key part of the discipline and the goal was "the scientific study of animal life with reference to the distribution of animals on the earth and the mutual influence of environment and animals upon each other". The animals that were the focus of studies were almost exclusively wild animals and zoogeographers were building on the new theories of evolution and natural selection. They mapped the evolution and movement of species across time and space and also sought to understand how animals adapted to different ecosystems. "The ambition was to establish general laws of how animals arranged themselves across the earth's surface or, at smaller scales, to establish patterns of spatial co-variation between animals and other environmental factors." Key works include Newbigin's Animal Geography, Bartholomew, Clarke, and Grimshaw's Atlas of Zoogeography
|
https://en.wikipedia.org/wiki/V0-morph
|
A V0-morph is an organism whose surface area remains constant as the organism grows.
The reason why the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake.
Biofilms on a flat solid substrate are examples of V0-morphs; they grow in thickness, but not in surface area that is involved in nutrient exchange. Other examples are dinophyta and diatoms that have a cell wall that does not change during the cell cycle. During cell-growth, when the amounts of protein and carbohydrates increase, the vacuole shrinks. The outer membrane that is involved in nutrient uptake remains constant. At cell division, the daughter cells rapidly take up water, complete a new cell wall and the cycle repeats.
Rods (bacteria that have the shape of a rod and grow in length, but not in diameter) are a static mixture between a V0- and a V1-morph, where the caps act as V0-morphs and the cylinder between the caps as a V1-morph. The mixture is called static because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function are constant during growth.
Crusts, such as lichens that grow on a solid substrate, are a dynamic mixture between a V0- and a V1-morph, where the inner part acts as V0-morph, and the outer annulus as V1-morph. The mixture is called dynamic because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function change during growth. The Dynamic Energy Budget theory explains why the diameter of crusts grows linearly in time at constant substrate availability.
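The balance described above, uptake proportional to a constant surface area versus maintenance proportional to volume, can be sketched numerically. All parameter values below are arbitrary illustrations, not DEB-calibrated figures:

```python
# Toy V0-morph energy balance: substrate uptake scales with the constant
# surface area A, maintenance with the current volume V, so growth saturates.
A = 1.0            # surface area involved in uptake (constant for a V0-morph)
uptake_rate = 2.0  # uptake per unit surface area (arbitrary units)
maint_rate = 0.5   # maintenance cost per unit volume (arbitrary units)
V, dt = 0.1, 0.01

for _ in range(10000):  # integrate dV/dt = uptake_rate*A - maint_rate*V over t = 100
    V += (uptake_rate * A - maint_rate * V) * dt

# V approaches the steady state uptake_rate * A / maint_rate = 4.0
```

Because the uptake term cannot grow with the organism, a V0-morph approaches a fixed equilibrium volume; contrast this with a V1-morph, whose uptake surface grows with volume.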
|
https://en.wikipedia.org/wiki/Mycosubtilin
|
Mycosubtilin is a natural lipopeptide with antifungal and hemolytic activities, isolated from Bacillus species. It belongs to the iturin lipopeptide family.
Definition
Mycosubtilin is a natural lipopeptide. It is produced by strains of Bacillus spp., mainly Bacillus subtilis. It was discovered through its antifungal activities. It belongs to the family of iturin lipopeptides.
Structure
Mycosubtilin is a heptapeptide, cyclized in a ring with a β-amino fatty acid. The peptide sequence is composed of L-Asn-D-Tyr-D-Asn-L-Gln-L-Pro-D-Ser-L-Asn.
Biological activities
Mycosubtilin has strong antifungal and hemolytic activities. It is active against fungi and yeasts such as Candida albicans, Candida tropicalis, Saccharomyces cerevisiae, Penicillium notatum, and Fusarium oxysporum.
Its antibacterial activity is quite limited to bacteria such as Micrococcus luteus.
|
https://en.wikipedia.org/wiki/Sainsbury%20Laboratory
|
The Sainsbury Laboratory (TSL) is a research institute located at the Norwich Research Park in Norwich, Norfolk, England, that carries out fundamental biological research and technology development on aspects of plant disease, plant disease resistance and microbial symbiosis in plants.
History
In 1987, an agreement was signed to establish The Sainsbury Laboratory. This agreement made the laboratory a joint venture between several organizations, including the Gatsby Charitable Foundation, the John Innes Foundation, the University of East Anglia, and the Agricultural and Food Research Council (now BBSRC).
Later that year, the laboratory employed its first members of staff. Then, in 1989, The Sainsbury Laboratory moved into its current building. This building was constructed alongside the John Innes Centre on the Norwich Research Park.
Research
The Sainsbury Laboratory conducts research on various topics related to plant-microbe interactions.
The laboratory investigates innate immune recognition in plants and the signaling and cellular changes that occur during plant-microbe interactions. Additionally, researchers at the laboratory study plant and pathogen genomics to gain a better understanding of the mechanisms involved in plant-microbe interactions.
One of the key areas of research is the identification and study of plant disease resistance genes. Another important research area is the biology of pathogen effector proteins, which play a crucial role in the interaction between plants and pathogens.
With this knowledge, the laboratory employs biotechnological approaches to develop crop disease resistance. These approaches are aimed at reducing agrochemical input and the percentage of crops lost to disease.
The Sainsbury Laboratory partners with the John Innes Centre on a Plant Health Institute Strategic Program (ISP) funded by the Biotechnology and Biological Sciences Research Council (BBSRC).
Training
The Sainsbury Laboratory provides a training environment
|
https://en.wikipedia.org/wiki/Shunting%20%28neurophysiology%29
|
Shunting is an event in the neuron which occurs when an excitatory postsynaptic potential and an inhibitory postsynaptic potential are occurring close to each other on a dendrite, or are both on the soma of the cell.
According to temporal summation one would expect the inhibitory and excitatory currents to be summed linearly to describe the resulting current entering the cell. However, when inhibitory and excitatory currents are on the soma of the cell, the inhibitory current causes the cell resistance to change (making the cell "leakier"), thereby "shunting" instead of completely eliminating the effects of the excitatory input.
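The nonlinearity can be illustrated with a single-compartment conductance model (all values below are assumed for illustration): a shunting inhibitory synapse whose reversal potential sits at the resting potential adds conductance, which divides the excitatory depolarization rather than subtracting a fixed amount from it.

```python
def steady_state_v(g_leak=10.0, e_leak=-70.0,
                   g_exc=0.0, e_exc=0.0,
                   g_inh=0.0, e_inh=-70.0):
    """Steady-state membrane potential (mV) as the conductance-weighted
    average of reversal potentials; conductances in arbitrary units."""
    g_total = g_leak + g_exc + g_inh
    return (g_leak * e_leak + g_exc * e_exc + g_inh * e_inh) / g_total

v_rest = steady_state_v()                        # -70 mV at rest
v_exc  = steady_state_v(g_exc=5.0)               # ~-46.7 mV: excitation alone
v_inh  = steady_state_v(g_inh=20.0)              # -70 mV: shunting input alone
v_both = steady_state_v(g_exc=5.0, g_inh=20.0)   # -60 mV: divided, not summed
```

Linear summation would predict no effect of the inhibitory input at all (it changes the potential by 0 mV on its own), yet combined with excitation it pulls the response from about −46.7 mV down to −60 mV, simply by making the membrane "leakier".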
See also
Spatial summation
Temporal summation
|
https://en.wikipedia.org/wiki/Star%20Trek%20-%20The%20Wrath%20of%20Khan%20%28miniatures%29
|
Star Trek - The Wrath of Khan is a set of miniatures for Star Trek: The Role Playing Game published by FASA.
Contents
Star Trek - The Wrath of Khan is a series of blister-packed 25mm miniature figures modeled on the crew of the original starship Enterprise.
Reception
Ed Andrews reviewed Star Trek - The Wrath of Khan in Space Gamer No. 66. Andrews commented that "These figures are quite passable as gaming figures. Considering what they represent, and the fact that this is FASA's debut into miniatures, I wish they were exceptional."
|
https://en.wikipedia.org/wiki/NAPG
|
Gamma-soluble NSF attachment protein is a SNAP protein that in humans is encoded by the NAPG gene.
Function
NSF and SNAPs (NSF attachment proteins) are general elements of the cellular membrane transport apparatus. The sequence of the predicted 312-amino acid human protein encoded by NAPG is 95% identical to that of bovine gamma-SNAP. NAPG mediates platelet exocytosis and controls the membrane fusion events of this process.
|
https://en.wikipedia.org/wiki/John%20Wilson%20%28English%20judge%29
|
Sir John Wilson (6 August 1741, Applethwaite, Westmorland – 18 October 1793, Kendal, Westmorland) was an English mathematician and judge. Wilson's theorem is named after him.
Wilson attended school in Staveley, Cumbria before going up to Peterhouse, Cambridge in 1757, where he was a student of Edward Waring. He was Senior Wrangler in 1761. He was later knighted, and became a Fellow of the Royal Society in 1782. He was Judge of Common Pleas from 1786 until his death in 1793.
See also
Wilson prime
Notes
|
https://en.wikipedia.org/wiki/Avicide
|
An avicide is any substance (normally a chemical) used to kill birds.
Commonly used avicides include strychnine (also used as rodenticide and predacide), DRC-1339 (3-chloro-4-methylaniline hydrochloride, Starlicide) and CPTH (3-chloro-p-toluidine, the free base of Starlicide), Avitrol (4-aminopyridine) and chloralose (also used as rodenticide). In the past, highly concentrated formulations of parathion in diesel oil were sprayed by aircraft over birds' nesting colonies.
Avicides are banned in many countries because of their ecological impact, which is poorly studied. They are still used in the United States, Canada, Australia and New Zealand. The practice is criticized by animal rights advocates and those who kill birds with guns and traps. Pigeon fanciers sometimes poison problematic birds of prey, even in countries like Russia and Ukraine where avicides are illegal.
See also
Bird kill
|
https://en.wikipedia.org/wiki/Shunt%20regulated%20push-pull%20amplifier
|
A shunt regulated push-pull amplifier is a Class A amplifier whose output drivers (transistors or, more commonly, vacuum tubes) operate in antiphase. The key design element is that the output stage also serves as the phase splitter.
The acronym SRPP is also used to describe a series regulated push-pull amplifier.
History
The earliest vacuum-tube circuit reference is a patent by Henry Clough of the Marconi company, filed in 1940. It proposes its use as a modulator, but also mentions use as an audio amplifier.
Other patents mention this circuit later in slightly modified forms, but it was not widely used until 1951, when Peterson and Sinclair finally adapted and patented the SRPP for audio use. A variety of transistor-based versions appeared after the 1960s.
|
https://en.wikipedia.org/wiki/Artisanal%20food
|
Artisanal food encompasses breads, cheeses, fruit preserves, cured meats, beverages, oils, and vinegars that are made by hand using traditional methods by skilled craftworkers, known as food artisans. The foodstuff material from farmers and backyard growers can include fruit, grains and flours, milks for cheese, cured meats, fish, beverages, oils, and vinegars. The movement is focused on providing farm-to-fork foods with locally sourced products that benefit the consumer, small-scale growers and producers, and the local economy.
Food artisans
Food artisans produce foods and edible foodstuffs that are not mass produced, but rather made by hand. These include cheeses, breads and baked goods, charcuterie and other foods that involve preservation or fermentation, home preservation or canning processes, and fruit preserves, cured meats, beverages, oils, and vinegars.
Fermentation or otherwise controlling the preservation environment for beneficial microorganisms can be utilized for vinegars, cheeses, cured meats, wine, oolong tea, kimchi and other examples.
An artisan food item is usually developed and produced over a long period of time and consumed relatively close to where the food is created.
Legislation
In 2009, the Food Safety Enhancement Act was proposed and passed the House of Representatives, but did not become law. The measure was renegotiated and became known as the Food Safety Modernization Act (FSMA). On 4 January 2011, President Barack Obama signed the bill into law.
Tester-Hagan Amendment
Senator Jon Tester (D-MT) and Senator Kay Hagan (D-NC) introduced two amendments to the FSMA that removed local food growers and food processors from federal oversight. These growers and producers would remain under the jurisdiction of state and local health and sanitation laws, rules, and regulations.
Controversy
As of 2016, there was no published official standard or definition for artisan foods. A good working definition can be gleaned from the Tester-Hagan Ame
|
https://en.wikipedia.org/wiki/Project%20Denver
|
Project Denver is the codename of a central processing unit designed by Nvidia that implements the ARMv8-A 64/32-bit instruction sets using a combination of simple hardware decoder and software-based binary translation (dynamic recompilation) where "Denver's binary translation layer runs in software, at a lower level than the operating system, and stores commonly accessed, already optimized code sequences in a 128 MB cache stored in main memory". Denver is a very wide in-order superscalar pipeline. Its design makes it suitable for integration with other SIPs cores (e.g. GPU, display controller, DSP, image processor, etc.) into one die constituting a system on a chip (SoC).
Project Denver is targeted at mobile computers, personal computers, servers, and supercomputers. The cores have been integrated into Nvidia's Tegra SoC series. Initially, Denver cores were designed for the 28 nm process node (Tegra model T132, aka "Tegra K1"). Denver 2 was an improved design built for the smaller, more efficient 16 nm node (Tegra model T186, aka "Tegra X2").
In 2018, Nvidia released an improved design (codename "Carmel") based on ARMv8.2 (64-bit, 10-way superscalar, with functional safety, dual execution, and parity & ECC), which was integrated into the Tegra Xavier SoC, offering a total of 8 cores (4 dual-core pairs). The Carmel CPU core supports full Advanced SIMD (ARM NEON), VFP (Vector Floating Point), and ARMv8.2-FP16. The first published tests of Carmel cores, integrated in the Jetson AGX development kit, were carried out by third parties in September 2018 and indicated noticeably increased performance compared to predecessor systems, with the caveat that such quick test setups have inherent limitations. The Carmel design can be found in the Tegra model T194 ("Tegra Xavier"), which is manufactured with a 12 nm structure size.
Overview
Pipelined processor with
|
https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%20equations
|
In physics and mathematics, and especially differential geometry and gauge theory, the Yang–Mills equations are a system of partial differential equations for a connection on a vector bundle or principal bundle. They arise in physics as the Euler–Lagrange equations of the Yang–Mills action functional. They have also found significant use in mathematics.
Solutions of the equations are called Yang–Mills connections or instantons. The moduli space of instantons was used by Simon Donaldson to prove Donaldson's theorem.
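Although the excerpt does not display the equations themselves, they are commonly written as follows (sign and normalization conventions vary by source) for a connection A with curvature F_A, where d_A is the exterior covariant derivative and ⋆ is the Hodge star:

```latex
% The Yang--Mills equations:
d_A \star F_A = 0
% together with the Bianchi identity, which holds automatically for any connection:
d_A F_A = 0
```

A connection satisfying the first equation is a critical point of the Yang–Mills action functional mentioned above.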
Motivation
Physics
In their foundational paper on the topic of gauge theories, Robert Mills and Chen-Ning Yang developed (essentially independent of the mathematical literature) the theory of principal bundles and connections in order to explain the concept of gauge symmetry and gauge invariance as it applies to physical theories. The gauge theories Yang and Mills discovered, now called Yang–Mills theories, generalised the classical work of James Maxwell on Maxwell's equations, which had been phrased in the language of a gauge theory by Wolfgang Pauli and others. The novelty of the work of Yang and Mills was to define gauge theories for an arbitrary choice of Lie group , called the structure group (or in physics the gauge group, see Gauge group (mathematics) for more details). This group could be non-Abelian as opposed to the case corresponding to electromagnetism, and the right framework to discuss such objects is the theory of principal bundles.
The essential points of the work of Yang and Mills are as follows. One assumes that the fundamental description of a physical model is through the use of fields, and derives that under a local gauge transformation (change of local trivialisation of principal bundle), these physical fields must transform in precisely the way that a connection (in physics, a gauge field) on a principal bundle transforms. The gauge field strength is the curvature of the connection, and the energy of the gauge field is give
|
https://en.wikipedia.org/wiki/Web%20server%20benchmarking
|
Web server benchmarking is the process of estimating a web server's performance in order to determine whether the server can serve a sufficiently high workload.
Key parameters
The performance is usually measured in terms of:
Number of requests that can be served per second (depending on the type of request, etc.);
Latency (response time) in milliseconds for each new connection or request;
Throughput in bytes per second (depending on file size, cached or not cached content, available network bandwidth, etc.).
The measurements must be performed under a varying load of clients and requests per client.
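The first two parameters can be estimated even with a few lines of Python; the sketch below starts a throwaway local server so it is self-contained, while a real benchmark would use a dedicated tool driving many concurrent clients:

```python
import http.server
import threading
import time
import urllib.request

# Throwaway local server so the sketch is self-contained (port 0 = any free port).
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def benchmark(url, n_requests=50):
    """Sequentially issue n_requests; return (requests/sec, mean latency in ms)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return n_requests / elapsed, sum(latencies) / len(latencies)

rps, mean_ms = benchmark(f"http://127.0.0.1:{port}/")
server.shutdown()
```

This sequential loop measures latency under a single client; varying the number of concurrent clients (as the dedicated tools do) is what exposes throughput limits and queueing effects.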
Tools for benchmarking
Load testing (stress/performance testing) a web server can be performed using automation/analysis tools such as:
Apache JMeter, an open-source Java load testing tool
ApacheBench (or ab), a command line program bundled with Apache HTTP Server
Siege, an open-source web-server load testing and benchmarking tool
Wrk, an open-source C load testing tool
Locust, an open-source Python load testing tool
Web application benchmarks
Web application benchmarks measure the performance of application servers and database servers used to host web applications. TPC-W was a common benchmark emulating an online bookstore with synthetic workload generation.
|
https://en.wikipedia.org/wiki/Gibbs%20rotational%20ensemble
|
The Gibbs rotational ensemble represents the possible states of a mechanical system in thermal and rotational equilibrium at temperature T and angular velocity ω. The Jaynes procedure can be used to obtain this ensemble. An ensemble is the set of microstates corresponding to a given macrostate.
The Gibbs rotational ensemble assigns a probability P_i to a given microstate characterized by energy E_i and angular momentum J_i for a given temperature T and rotational velocity ω:

P_i = exp(−(E_i − ω·J_i)/(k_B T)) / Z

where Z is the partition function

Z = Σ_i exp(−(E_i − ω·J_i)/(k_B T))
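As a numerical illustration (toy microstates and unit choices assumed, with k_B = 1), the ensemble probabilities follow by weighting each microstate by exp(−(E_i − ω·J_i)/T) and normalizing by the partition function:

```python
import math

# Toy discrete microstates (energy E_i, angular momentum J_i); values illustrative.
states = [(0.0, 0.0), (1.0, 1.0), (1.0, -1.0), (2.0, 0.0)]
T, omega = 1.0, 0.3  # temperature and angular velocity, in units where k_B = 1

weights = [math.exp(-(E - omega * J) / T) for E, J in states]
Z = sum(weights)                   # partition function
probs = [w / Z for w in weights]   # Gibbs rotational probabilities
```

Rotation biases the ensemble toward states whose angular momentum aligns with ω: here the two states share the energy E = 1.0, but the J = +1.0 state is more probable than the J = −1.0 state.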
Derivation
The Gibbs rotational ensemble can be derived using the same general method as to derive any ensemble, as given by E.T. Jaynes in his 1956 paper Information Theory and Statistical Mechanics. Let be a function with expectation value
where is the probability of , which is not known a priori. The probabilities obey normalization
To find , the Shannon entropy is maximized, where the Shannon entropy goes as
The method of Lagrange multipliers is used to maximize under the constraints and the normalization condition, using Lagrange multipliers and to find
is found via normalization
and can be written as
where is the partition function
This is easily generalized to any number of equations via the incorporation of more Lagrange multipliers.
Now investigating the Gibbs rotational ensemble, the method of Lagrange multipliers is again used to maximize the Shannon entropy $S$, but this time under the constraints of energy expectation value $\langle E\rangle$ and angular momentum expectation value $\langle\vec{J}\rangle$, which gives $p_i$ as
$$p_i = e^{-\lambda - \mu_1 E_i - \vec\mu_2\cdot\vec{J}_i}$$
Via normalization, $\lambda$ is found to be
$$\lambda = \ln\!\Big(\sum_i e^{-\mu_1 E_i - \vec\mu_2\cdot\vec{J}_i}\Big) = \ln Z$$
Like before, $\langle E\rangle$ and $\langle\vec{J}\rangle$ are given by
$$\langle E\rangle = -\frac{\partial \ln Z}{\partial \mu_1},\qquad \langle\vec{J}\rangle = -\frac{\partial \ln Z}{\partial \vec\mu_2}$$
The entropy of the system is given by
$$S = -k_B \sum_i p_i \ln p_i$$
such that
$$S = k_B\left(\lambda + \mu_1\langle E\rangle + \vec\mu_2\cdot\langle\vec{J}\rangle\right)$$
where $k_B$ is the Boltzmann constant. The system is assumed to be in equilibrium, follow the laws of thermodynamics, and have fixed uniform temperature $T$ and angular velocity $\vec\omega$. The first law of thermodynamics as applied to this system is
$$d\langle E\rangle = T\,dS + \vec\omega\cdot d\langle\vec{J}\rangle$$
Recalling the entropy differential
$$dS = k_B\left(\mu_1\, d\langle E\rangle + \vec\mu_2\cdot d\langle\vec{J}\rangle\right)$$
Combining the first law of thermodynamics with the entropy differential gives
|
https://en.wikipedia.org/wiki/Young%27s%20interference%20experiment
|
Young's interference experiment, also called Young's double-slit interferometer, was the original version of the modern double-slit experiment, performed at the beginning of the nineteenth century by Thomas Young. This experiment played a major role in the general acceptance of the wave theory of light. In Young's own judgement, this was the most important of his many achievements.
Theories of light propagation in the 17th and 18th centuries
During this period, many scientists proposed a wave theory of light based on experimental observations, including Robert Hooke, Christiaan Huygens and Leonhard Euler. However, Isaac Newton, who did many experimental investigations of light, had rejected the wave theory of light and developed his corpuscular theory of light according to which light is emitted from a luminous body in the form of tiny particles. This theory held sway until the beginning of the nineteenth century despite the fact that many phenomena, including diffraction effects at edges or in narrow apertures, colours in thin films and insect wings, and the apparent failure of light particles to crash into one another when two light beams crossed, could not be adequately explained by the corpuscular theory which, nonetheless, had many eminent supporters, including Pierre-Simon Laplace and Jean-Baptiste Biot.
Young's work on wave theory
While studying medicine at Göttingen in the 1790s, Young wrote a thesis on the physical and mathematical properties of sound and in 1800, he presented a paper to the Royal Society (written in 1799) where he argued that light was also a wave motion. His idea was greeted with a certain amount of skepticism because it contradicted Newton's corpuscular theory.
Nonetheless, he continued to develop his ideas. He believed that a wave model could much better explain many aspects of light propagation than the corpuscular model:
In 1801, Young presented a famous paper to the Royal Society entitled "On the Theory of Light and Colours" wh
|
https://en.wikipedia.org/wiki/Principle%20of%20distributivity
|
The principle of distributivity states that the algebraic distributive law is valid, where both logical conjunction and logical disjunction are distributive over each other, so that for any propositions A, B and C the equivalences
A ∧ (B ∨ C) ≡ (A ∧ B) ∨ (A ∧ C)
and
A ∨ (B ∧ C) ≡ (A ∨ B) ∧ (A ∨ C)
hold.
The principle of distributivity is valid in classical logic, but invalid in quantum logic.
The article "Is Logic Empirical?" discusses the case that quantum logic is the correct, empirical logic, on the grounds that the principle of distributivity is inconsistent with a reasonable interpretation of quantum phenomena.
|
https://en.wikipedia.org/wiki/Polymer%20physics
|
Polymer physics is the field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation and polymerisation of polymers and monomers respectively.
While it focuses on the perspective of condensed matter physics, polymer physics was originally a branch of statistical physics. Polymer physics and polymer chemistry are also related to the field of polymer science, which is considered the applied part of the study of polymers.
Polymers are large molecules, and are thus very complicated to treat with deterministic methods. Yet statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers) are described efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite).
Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires using principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on.
The statistical approach for polymer physics is based on an analogy between a polymer and either a Brownian motion, or other type of a random walk, the self-avoiding walk. The simplest possible polymer model is presented by the ideal chain, corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods, such as size exclusion chromatography, viscometry, dynamic light scattering, and Automatic Continuous Online Monitoring of Polymerization Reactions (ACOMP) for determining the chemical, physical, and material properties of polymers. These experimental methods also helped the mathematical modeling of polymers and even for a better understanding of the properties of polymers.
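The random-walk correspondence mentioned above can be illustrated with a short simulation (an illustrative sketch, not from the article; all names are invented). For an ideal chain of N unit-length bonds, the mean squared end-to-end distance is ⟨R²⟩ = N:

```python
import random

def end_to_end_sq(n_steps, rng):
    # ideal chain modeled as a simple random walk on a 3-D cubic lattice,
    # one unit-length bond per step
    x = y = z = 0
    for _ in range(n_steps):
        axis = rng.randrange(3)
        step = rng.choice((-1, 1))
        if axis == 0:
            x += step
        elif axis == 1:
            y += step
        else:
            z += step
    return x * x + y * y + z * z

rng = random.Random(0)
n_bonds, trials = 100, 2000
mean_r2 = sum(end_to_end_sq(n_bonds, rng) for _ in range(trials)) / trials
# theory: <R^2> = N b^2 with b = 1, so mean_r2 should be close to 100
```

Averaging over many chains recovers the ideal-chain scaling R ~ N^(1/2), while a self-avoiding walk would scale with a larger exponent.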
Flory is considered the first scientist establishing th
|
https://en.wikipedia.org/wiki/Median%20cut
|
Median cut is an algorithm to sort data of an arbitrary number of dimensions into a series of sets by recursively cutting each set of data at the median point along the longest dimension. Median cut is typically used for color quantization. For example, to reduce a 64k-colour image to 256 colours, median cut is used to find 256 colours that match the original data well.
Implementation of color quantization
Suppose we have an image with an arbitrary number of pixels and want to generate a palette of 16 colors. Put all the pixels of the image (that is, their RGB values) in a bucket. Find out which color channel (red, green, or blue) among the pixels in the bucket has the greatest range, then sort the pixels according to that channel's values. For example, if the blue channel has the greatest range, the pixels are compared by their blue components, so a pixel with a smaller blue value sorts before one with a larger blue value. After the bucket has been sorted, move the upper half of the pixels into a new bucket. (It is this step that gives the median cut algorithm its name; the buckets are divided into two at the median of the list of pixels.) This process can be repeated to further subdivide the set of pixels: choose a bucket to divide (e.g., the bucket with the greatest range in any color channel) and divide it into two. After the desired number of buckets have been produced, average the pixels in each bucket to get the final color palette.
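The procedure above can be sketched in Python (an illustrative implementation, not from the article; pixels are (r, g, b) tuples and all names are invented):

```python
def channel_range(bucket, c):
    # spread of channel c (0=red, 1=green, 2=blue) within a bucket
    vals = [p[c] for p in bucket]
    return max(vals) - min(vals)

def median_cut(pixels, n_colors):
    buckets = [list(pixels)]
    while len(buckets) < n_colors:
        # choose the splittable bucket with the greatest range in any channel
        candidates = [b for b in buckets if len(b) > 1]
        if not candidates:
            break
        bucket = max(candidates, key=lambda b: max(channel_range(b, c) for c in range(3)))
        # sort along the widest channel and cut at the median
        ch = max(range(3), key=lambda c: channel_range(bucket, c))
        bucket.sort(key=lambda p: p[ch])
        mid = len(bucket) // 2
        buckets.remove(bucket)
        buckets.extend([bucket[:mid], bucket[mid:]])
    # average each bucket to produce one palette entry
    return [tuple(sum(p[c] for p in b) // len(b) for c in range(3)) for b in buckets]
```

For instance, median_cut([(0, 0, 0), (0, 0, 10), (255, 0, 0), (255, 0, 10)], 2) splits on the red channel (range 255) and yields the palette [(0, 0, 5), (255, 0, 5)].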
See also
k-d tree
|
https://en.wikipedia.org/wiki/Tangloids
|
Tangloids is a mathematical game for two players created by Piet Hein to model the calculus of spinors.
A description of the game appeared in the book "Martin Gardner's New Mathematical Diversions from Scientific American" by Martin Gardner from 1996 in a section on the mathematics of braiding.
Two flat blocks of wood each pierced with three small holes are joined with three parallel strings. Each player holds one of the blocks of wood. The first player holds one block of wood still, while the other player rotates the other block of wood for two full revolutions. The plane of rotation is perpendicular to the strings when not tangled. The strings now overlap each other. Then the first player tries to untangle the strings without rotating either piece of wood. Only translations (moving the pieces without rotating) are allowed. Afterwards, the players reverse roles; whoever can untangle the strings fastest is the winner. Try it with only one revolution. The strings are of course overlapping again but they can not be untangled without rotating one of the two wooden blocks.
The Balinese cup trick, appearing in the Balinese candle dance, is a different illustration of the same mathematical idea. The anti-twister mechanism is a device intended to avoid such orientation entanglements. A mathematical interpretation of these ideas can be found in the article on quaternions and spatial rotation.
Mathematical articulation
This game serves to clarify the notion that rotations in space have properties that cannot be intuitively explained by considering only the rotation of a single rigid object in space. The rotation of vectors does not encompass all of the properties of the abstract model of rotations given by the rotation group. The property being illustrated in this game is formally referred to in mathematics as the "double covering of SO(3) by SU(2)". This abstract concept can be roughly sketched as follows.
Rotations in three dimensions can be expressed as 3x3 matrice
|
https://en.wikipedia.org/wiki/OrCam%20device
|
OrCam devices such as OrCam MyEye are portable, artificial vision devices that allow visually impaired people to understand text and identify objects through audio feedback, describing what they are unable to see.
Reuters described an important part of how it works as "a wireless smartcamera" which, when attached outside eyeglass frames, can read and verbalize text, and also supermarket barcodes. This information is converted to spoken words and entered "into the user’s ear." Face-recognition is also part of OrCam's feature set.
Devices
OrCam Technologies Ltd has created three devices: OrCam MyEye 2.0, OrCam MyEye 1, and OrCam MyReader.
OrCam My Eye 2.0:
OrCam debuted the second-generation model, the OrCam MyEye 2.0 in December 2017.
About the size of a finger, the MyEye 2.0 is battery-powered, and has been compressed into a self-contained device.
The device snaps onto any eyeglass frame magnetically.
The OrCam MyEye 2.0 is small and light (22.5 grams / 0.8 ounces), with functionality intended to restore independence to the visually impaired.
It comes in two versions. The basic model can read text, and a more advanced one adds features such as face recognition and barcode reading.
As of July 2023, the retail cost is between $4000 and $6000 (USD).
Clinical Studies
JAMA Ophthalmology:
In 2016, a study published in JAMA Ophthalmology involving 12 legally blind participants evaluated the usefulness of a portable artificial vision device (OrCam) for patients with low vision. The results showed that the OrCam device improved the participants' ability to perform tasks simulating those of daily living, such as reading a message on an electronic device, a newspaper article or a menu.
Wills Eye:
Wills Eye was a clinical study designed to measure the impact of the OrCam device on the quality of life of patients with End-stage Glaucoma. The conclusion was that OrCam, a novel artificial vision device using a mini-camera mounted on eyeglasses
|
https://en.wikipedia.org/wiki/Rated%20R%26B
|
Rated R&B is an independent online magazine based in the United States, dedicated to news and topics related to rhythm and blues (R&B) and soul music.
It publishes independent music reviews, features, interviews, and media. Founded in August 2011 by Keithan Samuels, who remains its editor, the webzine has become a leading online source for R&B and soul music news, according to Samuels himself, and openly promotes underrated or unnoticed artists of the genre.
Reviews by Rated R&B have been mentioned in publications such as BBC, VIBE, Dazed, Essence, Forbes, The Huffington Post, Yahoo! and Vogue. Rated R&B also publishes music premieres, exclusive live performances and playlists.
During an interview with Shannon Ramsey, the host of Incisive Entertainment's Let's Talk web series, Samuels, who first began writing articles about music in 2009, revealed the backstory and inspiration behind launching the webzine. In 2020, eZ Toolset listed Rated R&B at number two on its "15 Top R&B Music RSS Feeds To Follow" list, and Feedspot also listed the webzine at number four on its "Top 40 R&B Music Blogs" list.
|
https://en.wikipedia.org/wiki/Anterior%20branch%20of%20obturator%20nerve
|
The anterior branch of the obturator nerve is a branch of the obturator nerve found in the pelvis and leg.
It leaves the pelvis in front of the obturator externus and descends anterior to the adductor brevis, and posterior to the pectineus and adductor longus; at the lower border of the latter muscle it communicates with the anterior cutaneous and saphenous branches of the femoral nerve, forming a kind of plexus.
It then descends upon the femoral artery, to which it is finally distributed. Near the obturator foramen the nerve gives off an articular branch to the hip joint.
Behind the pectineus, it distributes branches to the adductor longus and gracilis, and usually to the adductor brevis, and in rare cases to the pectineus; it receives a communicating branch from the accessory obturator nerve when that nerve is present.
|
https://en.wikipedia.org/wiki/INK4
|
INK4 is a family of cyclin-dependent kinase inhibitors (CKIs). The members of this family (p16INK4a, p15INK4b, p18INK4c, p19INK4d) are inhibitors of CDK4 (hence their name, INhibitors of CDK4) and of CDK6. The other family of CKIs, the CIP/KIP proteins, is capable of inhibiting all CDKs. Enforced expression of INK4 proteins can lead to G1 arrest by promoting redistribution of Cip/Kip proteins and blocking cyclin E-CDK2 activity. In cycling cells, there is a reassortment of Cip/Kip proteins between CDK4/6 and CDK2 as cells progress through G1. Their function, inhibiting CDK4/6, is to block progression of the cell cycle beyond the G1 restriction point. In addition, INK4 proteins play roles in cellular senescence, apoptosis and DNA repair.
INK4 proteins are tumor suppressors and loss-of-function mutations lead to carcinogenesis.
INK4 proteins are highly similar in terms of structure and function, with up to 85% amino acid similarity. They contain multiple ankyrin repeats.
Genes
The INK4a/ARF/INK4b locus encodes three genes (p15INK4b, ARF, and p16INK4a) in a 35-kilobase stretch of the human genome. P15INK4b has a different reading frame that is physically separated from p16INK4a and ARF. P16INK4a and ARF have different first exons that are spliced to the same second and third exon. While those second and third exons are shared by p16INK4a and ARF, the proteins are encoded in different reading frames meaning that p16INK4a and ARF are not isoforms, nor do they share any amino acid homology.
Evolution
Polymorphisms of the p15INK4b/p16INK4a homolog were found to segregate with melanoma susceptibility in Xiphophorus, indicating that INK4 proteins have been involved in tumor suppression for over 350 million years. Furthermore, the older INK4-based system has been bolstered by the more recent evolution of the ARF-based anti-cancer response.
Function
INK4 proteins are cell-cycle inhibitors. When they bind to CDK4 and CDK6, they induce an alloster
|
https://en.wikipedia.org/wiki/Manifest%20typing
|
In computer science, manifest typing is explicit identification by the software programmer of the type of each variable being declared. For example: if variable X is going to store integers then its type must be declared as integer. The term "manifest typing" is often used with the term latent typing to describe the difference between the static, compile-time type membership of the object and its run-time type identity.
In contrast, some programming languages use implicit typing (a.k.a. type inference) where the type is deduced from context at compile-time or allow for dynamic typing in which the variable is just declared and may be assigned a value of any type at runtime.
Examples
Consider the following example written in the C programming language:
#include <stdio.h>
int main(void) {
char s[] = "Test String";
float x = 0.0f;
int y = 0;
printf("Hello, World!\n");
return 0;
}
The variables s, x, and y were declared as a character array, floating point number, and an integer, respectively. The type system rejects, at compile-time, such fallacies as trying to add s and x. Since C23, type inference can be used in C with the keyword auto. Using that feature, the preceding example could become:
#include <stdio.h>
int main(void) {
char s[] = "Test String";
// auto s = "Test String"; is instead equivalent to char* s = "Test String";
auto x = 0.0f;
auto y = 0;
printf("Hello, World!\n");
return 0;
}
Similarly to the second example, in Standard ML, the types do not need to be explicitly declared. Instead, the type is determined by the type of the assigned expression.
let val s = "Test String"
val x = 0.0
val y = 0
in print "Hello, World!\n"
end
There are no manifest types in this program, but the compiler still infers the types string, real and int for them, and would reject the expression s+x as a compile-time error.
|
https://en.wikipedia.org/wiki/AC-to-AC%20converter
|
A solid-state AC-to-AC converter converts an AC waveform to another AC waveform, where the output voltage and frequency can be set arbitrarily.
Categories
Referring to Fig 1, AC-AC converters can be categorized as follows:
Indirect AC-AC (or AC/DC-AC) converters (i.e., with rectifier, DC link and inverter), such as those used in variable frequency drives
Cycloconverters
Hybrid matrix converters
Matrix converters (MC)
AC voltage controllers
DC link converters
There are two types of converters with DC link:
Voltage-source inverter (VSI) converters (Fig. 2): In VSI converters, the rectifier consists of a diode-bridge and the DC link consists of a shunt capacitor.
Current-source inverter (CSI) converters (Fig. 3): In CSI converters, the rectifier consists of a phase-controlled switching device bridge and the DC link consists of 1 or 2 series inductors between one or both legs of the connection between rectifier and inverter.
Any dynamic braking operation required for the motor can be realized by means of a braking DC chopper and resistor shunt-connected across the rectifier. Alternatively, an anti-parallel thyristor bridge can be provided in the rectifier section to feed energy back into the AC line. Such phase-controlled thyristor-based rectifiers, however, have higher AC line distortion and lower power factor at low load than diode-based rectifiers.
An AC-AC converter with approximately sinusoidal input currents and bidirectional power flow can be realized by coupling a pulse-width modulation (PWM) rectifier and a PWM inverter to the DC-link. The DC-link quantity is then impressed by an energy storage element that is common to both stages, which is a capacitor C for the voltage DC-link or an inductor L for the current DC-link. The PWM rectifier is controlled in a way that a sinusoidal AC line current is drawn, which is in phase or anti-phase (for energy feedback) with the corresponding AC line phase voltage.
Due to the DC-link storage element, there is the
|
https://en.wikipedia.org/wiki/Nicola%20Fusco
|
Nicola Fusco (born August 14, 1956 in Napoli) is an Italian mathematician
mainly known for his contributions to the fields of calculus of variations, regularity theory of partial differential equations, and the theory of symmetrization. He is currently professor at the Università di Napoli "Federico II". Fusco also taught and conducted research at the Australian National University at Canberra, the Carnegie Mellon University at Pittsburgh and at the University of Florence.
He is the Managing Editor of the scientific journal Advances in Calculus of Variations, and member of the editorial boards of various scientific journals.
Awards
Fusco won the 1994 edition of the Caccioppoli Prize of the Italian Mathematical Union and, in 2010, the Tartufari Prize from the Accademia Nazionale dei Lincei. In 2008 he was an invited speaker at the European Congress of Mathematics, and in 2010 he was an invited speaker at the International Congress of Mathematicians on the topic of "Partial Differential Equations."
Since 2010 he has been a corresponding member of the Accademia Nazionale dei Lincei.
Selected publications
Acerbi, E.; Fusco, N. "Semicontinuity problems in the Calculus of Variations" Archive for Rational Mechanics and Analysis 86 (1984)
Haïm Brezis; Fusco, N.; Sbordone, C. "Integrability for the Jacobian of orientation preserving mappings" Journal of Functional Analysis 115 (1993)
Fusco, N.; Pierre-Louis Lions; Sbordone, C. "Sobolev imbedding theorems in borderline cases" Proceedings of the American Mathematical Society 124 (1996)
Luigi Ambrosio; Fusco, N.; Pallara, D. "Partial regularity of free discontinuity sets" Annali della Scuola Normale Superiore di Pisa Classe di Scienze (2) 24 (1997)
Ambrosio, L.; Fusco, N.; Pallara, D. Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York (2000)
Irene Fonseca; Fusco, N.; Paolo Marcellini; "On the total variation of the Jacobian" Journal of F
|
https://en.wikipedia.org/wiki/Anatomical%20plane
|
An anatomical plane is a hypothetical plane used to transect the body, in order to describe the location of structures or the direction of movements. In human and animal anatomy, three principal planes are used:
The sagittal plane or lateral plane (longitudinal, anteroposterior) is a plane parallel to the sagittal suture. It divides the body into left and right.
The coronal plane or frontal plane (vertical) divides the body into dorsal and ventral (back and front, or posterior and anterior) portions.
The transverse plane or axial plane (horizontal) divides the body into cranial and caudal (head and tail) portions.
Terminology
There could be any number of sagittal planes, but only one cardinal sagittal plane exists. The term cardinal refers to the one plane that divides the body into equal segments, with exactly one half of the body on either side of the cardinal plane. The term cardinal plane appears in some texts as the principal plane. The terms are interchangeable.
Human anatomy
In human anatomy, the anatomical planes are defined in reference to a body in the upright or standing orientation.
A transverse plane (also known as axial or horizontal plane) is parallel to the ground; it separates the superior from the inferior, or the head from the feet. The transverse planes identified in Terminologia Anatomica are the transpyloric plane, the subcostal plane, the transumbilical (or umbilical) plane, the supracristal plane, the intertubercular plane, and the interspinous plane.
A coronal plane (also known as frontal plane) is perpendicular to the ground; it separates the anterior from the posterior, the front from the back, and the ventral from the dorsal.
A sagittal plane (also known as anteroposterior plane) is perpendicular to the ground, separating left from right. The median (or midsagittal) plane is the sagittal plane in the middle of the body; it passes through midline structures such as the navel and the spine. All other sagittal planes (also known as
|
https://en.wikipedia.org/wiki/Safe%20Meat%20and%20Poultry%20Inspection%20Panel
|
The Safe Meat and Poultry Inspection Panel is an advisory panel to review and evaluate meat inspection policies and proposed changes. It was permanently authorized by the 1996 farm bill (P.L. 104–127, Sec. 918) through amendment of the federal meat and poultry inspection statutes. Provisions in annual USDA appropriations laws since 1996, however, have prohibited the department from actually establishing the advisory panel.
|
https://en.wikipedia.org/wiki/Code%20page%201106
|
Code page 1106 (CCSID 1106), also known as CP1106 or S7DEC, is an IBM code page number assigned to the Swedish variant of DEC's National Replacement Character Set (NRCS). The 7-bit character set was introduced for DEC's computer terminal systems, starting with the VT200 series in 1983, but is also used by IBM for their DEC emulation. Similar but not identical to the series of ISO 646 character sets, the character set is a close derivation from ASCII with only ten code points differing.
Code page layout
See also
Code page 1103 (very similar Finnish code page differing only in one code point)
Code page 1018 (similar ISO-646-FI / ISO-646-SE / IR-10 code page)
National Replacement Character Set (NRCS)
|
https://en.wikipedia.org/wiki/Tc1/mariner
|
Tc1/mariner is a class and superfamily of interspersed repeats DNA (Class II) transposons. The elements of this class are found in all animals, including humans. They can also be found in protists and bacteria.
The class is named after its two best-studied members, the Tc1 transposon of Caenorhabditis elegans and the mariner transposon of Drosophila.
Structure
The transposon consists of a transposase gene, flanked by two terminal inverted repeats (TIR). Two short tandem site duplications (TSD) are present on both sides of the insert. Transposition happens when two transposases recognize and bind to TIR sequences, join together and promote DNA double-strand cleavage. The DNA-transposase complex then inserts its DNA cargo at specific DNA motifs elsewhere in the genome, creating short TSDs upon integration. In the IS630/Tc1/mariner system, the motif used is a "TA" dinucleotide, duplicated on both ends after insertion.
When the transposase gene is not carried by the transposon, the element becomes non-autonomous: it then requires a transposase expressed from elsewhere in the genome in order to move.
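The TA-targeted cut-and-paste with target-site duplication described above can be sketched as follows (illustrative Python; the function name and sequences are invented):

```python
def insert_at_ta(genome, cargo, site):
    """Insert cargo at a TA dinucleotide, duplicating the TA on both flanks."""
    if genome[site:site + 2] != "TA":
        raise ValueError("IS630/Tc1/mariner elements integrate only at TA motifs")
    # the TA target site is duplicated: one copy remains upstream of the
    # insert, and a second copy is recreated downstream
    return genome[:site + 2] + cargo + genome[site:]

result = insert_at_ta("GGTACC", "[transposon]", 2)
# result == "GGTA[transposon]TACC": the TA now flanks the insert on both sides
```

After integration, the short TA duplications on each side are the target-site duplications (TSDs) mentioned in the text.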
Transposase
The 360-amino acid polypeptide has three major subdomains: the amino-terminal DNA-recognition domain that is responsible for binding to the DR sequences in the mirrored IR/DR sequences of the transposon, a nuclear localization sequence (NLS), and a DDE domain that catalyzes the cut-and-paste set of reactions that comprise transposition. The DNA-recognition domain has two paired box sequences that can bind to DNA and are related to various motifs found on some transcription factors; the two paired boxes are labeled PAI and RED, both having the helix-turn-helix motif common for DNA-binding domains. The catalytic domain has the hallmark DDE (sometimes DDD) amino acids that are found in many transposase and recombinase enzymes. In addition, there is a region that is highly enriched in glycine (G) amino acids.
Several signatures for the superfamily of transposases have been gi
|
https://en.wikipedia.org/wiki/Zanac
|
Zanac is a shoot 'em up video game developed by Compile and published in Japan by Pony Canyon and in North America by FCI. It was released for the MSX computer, the Family Computer Disk System, the Nintendo Entertainment System, and for the Virtual Console. It was reworked for the MSX2 computer as Zanac EX and for the PlayStation as Zanac X Zanac. Players fly a lone starfighter, dubbed the AFX-6502 Zanac, through twelve levels; their goal is to destroy the System—a part-organic, part-mechanical entity bent on destroying mankind.
Zanac was developed by main core developers of Compile, including Masamitsu "Moo" Niitani, Koji "Janus" Teramoto, and Takayuki "Jemini" Hirono. All of these developers went on to make other popular similarly based games such as The Guardian Legend, Blazing Lazers, and the Puyo Puyo series. The game is known for its intense and fast-paced gameplay, level of difficulty, and music which seems to match the pace of the game. It has been praised for its unique adaptive artificial intelligence, in which the game automatically adjusts the difficulty level according to the player's skill level, rate of fire and the ship's current defensive status/capability.
Gameplay
In Zanac, the player controls the spaceship AFX-6502 Zanac as it flies through various planets, space stations, and outer space and through an armada of enemies comprising the defenses of the game's main antagonist—the "System". The player must fight through twelve levels and destroy the System and its defenses. The objective is to shoot down enemies and projectiles and accumulate points. Players start with three lives, and they lose a life if they get hit by an enemy or projectile. After losing a life, gameplay continues with the player reappearing on the screen and losing all previously accumulated power-ups; the player remains temporarily invincible for a moment upon reappearing on the screen. The game ends when all the player's lives have been lost or after completing the twelfth and fin
|
https://en.wikipedia.org/wiki/Riesz%E2%80%93Thorin%20theorem
|
In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin.
This theorem bounds the norms of linear maps acting between $L^p$ spaces. Its usefulness stems from the fact that some of these spaces have rather simpler structure than others. Usually that refers to $L^2$, which is a Hilbert space, or to $L^1$ and $L^\infty$. Therefore one may prove theorems about the more complicated cases by proving them in two simple cases and then using the Riesz–Thorin theorem to pass from the simple cases to the complicated cases. The Marcinkiewicz theorem is similar but applies also to a class of non-linear maps.
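For orientation, the quantitative conclusion of the theorem, in its standard form (the notation below is the usual one, supplied here because the symbols were lost from the text), is:

```latex
\left\| T \right\|_{L^{p_\theta} \to L^{q_\theta}}
  \le \left\| T \right\|_{L^{p_0} \to L^{q_0}}^{1-\theta}
      \left\| T \right\|_{L^{p_1} \to L^{q_1}}^{\theta},
\qquad
\frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1},
\quad
\frac{1}{q_\theta} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1},
\qquad 0 < \theta < 1,
```

so the operator norm on each intermediate space is controlled by a geometric mean of the two endpoint norms.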
Motivation
First we need the following definition:
Definition. Let $p_0, p_1$ be two numbers such that $0 < p_0 < p_1 \le \infty$. Then for $0 < \theta < 1$ define $p_\theta$ by: $\frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}$.
By splitting up the function $f$ in $L^{p_\theta}$ as the product $|f| = |f|^{1-\theta}\,|f|^{\theta}$ and applying Hölder's inequality to its $p_\theta$ power, we obtain the following result, foundational in the study of $L^p$-spaces:
$$\|f\|_{p_\theta} \le \|f\|_{p_0}^{1-\theta}\,\|f\|_{p_1}^{\theta}$$
This result, whose name derives from the convexity of the map $\frac{1}{p} \mapsto \log\|f\|_p$, implies that $L^{p_0} \cap L^{p_1} \subset L^{p_\theta}$.
On the other hand, if we take the layer-cake decomposition $f = f\,\mathbf{1}_{\{|f|>1\}} + f\,\mathbf{1}_{\{|f|\le 1\}}$, then we see that $f\,\mathbf{1}_{\{|f|>1\}} \in L^{p_0}$ and $f\,\mathbf{1}_{\{|f|\le 1\}} \in L^{p_1}$, whence we obtain the following result:
$$L^{p_\theta} \subset L^{p_0} + L^{p_1}$$
In particular, the above result implies that $L^{p_\theta}$ is included in $L^{p_0} + L^{p_1}$, the sumset of $L^{p_0}$ and $L^{p_1}$ in the space of all measurable functions. Therefore, we have the following chain of inclusions:
$$L^{p_0} \cap L^{p_1} \subset L^{p_\theta} \subset L^{p_0} + L^{p_1}$$
In practice, we often encounter operators defined on the sumset $L^{p_0} + L^{p_1}$. For example, the Riemann–Lebesgue lemma shows that the Fourier transform maps $L^1(\mathbb{R}^d)$ boundedly into $L^\infty(\mathbb{R}^d)$, and Plancherel's theorem shows that the Fourier transform maps $L^2(\mathbb{R}^d)$ boundedly into itself, hence the Fourier transform $\mathcal{F}$ extends to $(L^1 + L^2)(\mathbb{R}^d)$ by setting
$$\mathcal{F}(f_1 + f_2) = \mathcal{F}f_1 + \mathcal{F}f_2$$
for all $f_1 \in L^1(\mathbb{R}^d)$ and $f_2 \in L^2(\mathbb{R}^d)$. It is therefore natural to investigate the behavior of such operators on the intermediate subspaces $L^{p_\theta}$.
To this end, we go back to our example and note that the Fourier transform on the sumset was obtained by taking the sum of two i
|
https://en.wikipedia.org/wiki/3D%20ultrasound
|
3D ultrasound is a medical ultrasound technique, often used in fetal, cardiac, trans-rectal and intra-vascular applications. 3D ultrasound refers specifically to the volume rendering of ultrasound data. When involving a series of 3D volumes collected over time, it can also be referred to as 4D ultrasound (three spatial dimensions plus one time dimension) or real-time 3D ultrasound.
Methods
When generating a 3D volume, the ultrasound data can be collected in four common ways by a sonographer:
Freehand, which involves tilting the probe and capturing a series of ultrasound images and recording the transducer orientation for each slice.
Mechanically, where the internal linear probe tilt is handled by a motor inside the probe.
Using an endoprobe, which generates the volume by inserting a probe and then removing the transducer in a controlled manner.
A matrix array transducer, which uses beam steering to sample points throughout a pyramid shaped volume.
Risks
The general risks of ultrasound also apply to 3D ultrasound. Essentially, ultrasound is considered safe. While other imaging modalities use radioactive dye or ionizing radiation, for example, ultrasound transducers send pulses of high frequency sound into the body and then listen for the echo.
In summary, the primary risks associated with ultrasound would be the potential heating of tissue or cavitation. The mechanisms by which tissue heating and cavitation are measured are through the standards called thermal index (TI) and mechanical index (MI). Even though the FDA outlines very safe values for maximum TI and MI, it is still recommended to avoid unnecessary ultrasound imaging.
Applications
Obstetrics
3D ultrasound is useful, among other things, for facilitating the characterization of some congenital defects, such as skeletal anomalies and heart issues. With real-time 3D ultrasound, the fetal heart rate can be examined in real-time.
Cardiology
Applications of three-dimensional ultrasound in cardiac treatment
|
https://en.wikipedia.org/wiki/180%20%28number%29
|
180 (one hundred [and] eighty) is the natural number following 179 and preceding 181.
In mathematics
180 is an abundant number, with its proper divisors summing to 366. 180 is also a highly composite number, a positive integer with more divisors than any smaller positive integer. One of the consequences of 180 having so many divisors is that it is a practical number, meaning that any positive number smaller than 180 that is not itself a divisor of 180 can be expressed as a sum of distinct divisors of 180. 180 is a Harshad number and a refactorable number.
180 is the sum of two square numbers: 12² + 6². It can be expressed as either the sum of six consecutive prime numbers: 19 + 23 + 29 + 31 + 37 + 41, or the sum of eight consecutive prime numbers: 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37. 180 is an Ulam number, which can be expressed as a sum of earlier terms in the Ulam sequence only as 177 + 3.
180 is a 61-gonal number, while 61 is the 18th prime number.
Half a circle has 180 degrees, and thus a U-turn is also referred to as a 180.
Summing Euler's totient function φ(x) over the first 24 integers gives 180.
In binary it is a digitally balanced number, since its binary representation has the same number of zeros as ones (10110100).
A triangle has three interior angles that collectively total 180 degrees. In general, the interior angles of an n-sided polygon add to (n − 2) × 180 degrees.
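Several of the arithmetic claims above can be checked directly in a few lines of code; the following sketch verifies the divisor, practical-number, prime-sum, totient and binary facts for 180.

```python
from math import gcd

n = 180
divisors = [d for d in range(1, n + 1) if n % d == 0]

# Abundant: proper divisors sum to 366.
assert sum(divisors) - n == 366

# Highly composite: more divisors than any smaller positive integer.
def num_divisors(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)
assert all(num_divisors(m) < len(divisors) for m in range(1, n))

# Practical: every positive integer below 180 is a sum of distinct divisors.
reachable = {0}
for d in divisors:
    reachable |= {r + d for r in reachable}
assert all(m in reachable for m in range(1, n))

# Sum of two squares and of six consecutive primes.
assert 12**2 + 6**2 == n
assert 19 + 23 + 29 + 31 + 37 + 41 == n

# Totient sum: phi(1) + ... + phi(24) equals 180.
def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)
assert sum(phi(k) for k in range(1, 25)) == n

# Digitally balanced in binary: 10110100 has four 1s and four 0s.
b = bin(n)[2:]
assert b.count("0") == b.count("1") == 4
```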
In religion
The Book of Genesis says that Isaac died at the age of 180.
Other
180 is the highest score possible with three darts.
See also
List of highways numbered 180
United Nations Security Council Resolution 180
United States Supreme Court cases, Volume 180
Pennsylvania House of Representatives, District 180
|
https://en.wikipedia.org/wiki/Happened-before
|
In computer science, the happened-before relation (denoted: →) is a relation between the result of two events, such that if one event should happen before another event, the result must reflect that, even if those events are in reality executed out of order (usually to optimize program flow). This involves ordering events based on the potential causal relationship of pairs of events in a concurrent system, especially asynchronous distributed systems. It was formulated by Leslie Lamport.
The happened-before relation is formally defined as the least strict partial order on events such that:
If events a and b occur on the same process, then a → b if the occurrence of event a preceded the occurrence of event b.
If event a is the sending of a message and event b is the reception of the message sent in event a, then a → b.
If two events happen in different isolated processes (that do not exchange messages directly or indirectly via third-party processes), then the two events are said to be concurrent, that is, neither a → b nor b → a is true.
If there are other causal relationships between events in a given system, such as between the creation of a process and its first event, these relationships are also added to the definition.
For example, in some programming languages such as Java, C, C++ or Rust, a happens-before edge exists if memory written to by statement A is visible to statement B, that is, if statement A completes its write before statement B starts its read.
Like all strict partial orders, the happened-before relation is transitive, irreflexive and asymmetric, i.e.:
For all events a, b, c: if a → b and b → c, then a → c (transitivity). This means that for any three events a, b, c, if a happened before b, and b happened before c, then a must have happened before c.
For all events a: a → a does not hold (irreflexivity). This means that no event can happen before itself.
For all events a, b: if a → b then b → a does not hold (asymmetry). This means that for any two events a and b, if a happened before b then b cannot have happened before a.
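One standard way to realize the clock condition implied by these axioms is Lamport's logical clock: each process increments a counter on every event and, on receiving a message, jumps its counter past the message's timestamp. The toy sketch below (class and event names are illustrative, not from any particular library) shows that timestamps then respect happened-before across a send/receive pair.

```python
class Process:
    """Toy process carrying a Lamport logical clock (illustrative sketch)."""
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self, label):
        self.clock += 1
        return (label, self.name, self.clock)

    def send(self, label):
        # Sending is itself an event; the message carries the clock value.
        self.clock += 1
        return (label, self.name, self.clock), self.clock

    def receive(self, label, msg_ts):
        # On receipt the clock jumps past the message's timestamp,
        # so C(send) < C(receive) always holds.
        self.clock = max(self.clock, msg_ts) + 1
        return (label, self.name, self.clock)

p, q = Process("P"), Process("Q")
a = p.local_event("a")      # P: local event a
b, ts = p.send("b")         # P: b sends a message
c = q.receive("c", ts)      # Q: c receives it
d = q.local_event("d")      # Q: local event d

# Clock condition: if x -> y (same process, or send -> receive, or
# transitively), then C(x) < C(y).
for x, y in [(a, b), (b, c), (c, d), (a, d)]:
    assert x[2] < y[2], (x, y)
```

Note that the converse does not hold: two concurrent events may still have ordered clock values, which is why vector clocks are used when the full happened-before relation must be recovered.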
Let us observe that the asymmetry property directly follows from the previ
|
https://en.wikipedia.org/wiki/Damovo
|
Damovo is a provider of information and communication technology services. The company, owned by Global Growth, has a regional presence across Europe, the Americas and APAC, with a global support capability spanning over 150 countries.
History
Damovo was formerly a division of the Swedish telecoms group Ericsson. In 2001, the London-based Apax Partners acquired and spun off the direct sales and service operations division into a separate company in a management buy out led by Pearse Flynn. At the time Damovo was based in Glasgow, Scotland. Flynn served as chief executive of the company until 2003. In December 2006 Apax handed over its entire equity stake in the company to its creditors.
In January 2015 Damovo Europe was acquired by Oakley Capital Equity II. Matthew Riley, founder of the Daisy Group, was appointed Executive Chairman. Damovo UK was also acquired by Daisy in a separate deal.
In August 2015 Damovo acquired the voice and unified communications business of Centre de Télécommunications et Téléinformatiques Luxembourgeois (CTTL) in Luxembourg. This was the first acquisition by Damovo since it was itself acquired by Oakley Capital earlier in the year.
In November 2016 Damovo completed the acquisition of Netfarmers GmbH – a Germany-based company focused on unified communications, security and networking. In June 2017 Damovo acquired Swiss-based Voice & Data Network AG (Vodanet).
Damovo was acquired by the American investment fund Global Growth in July 2018.
Products
The Damovo portfolio includes solutions in the areas of Unified Communications & Collaboration, Enterprise Networks, Contact Centers, Cloud Services and Global Services.
Main technology partners are Avaya, Cisco, Microsoft and Mitel.
|
https://en.wikipedia.org/wiki/Martinus%20Beijerinck
|
)
| death_place = Gorssel, Netherlands
| field = Microbiology
| work_institutions = Wageningen University, Delft School of Microbiology (founder)
| alma_mater = Leiden University
| known_for = One of the founders of virology, environmental microbiology and general microbiology; conceptual discovery of viruses (tobacco mosaic virus); enrichment culture; biological nitrogen fixation; sulfate-reducing bacteria; nitrogen-fixing bacteria; Azotobacter (Azotobacter chroococcum); Rhizobium; Desulfovibrio desulfuricans (Spirillum desulfuricans)
| prizes = Leeuwenhoek Medal (1905)
}}
Martinus Willem Beijerinck (, 16 March 1851 – 1 January 1931) was a Dutch microbiologist and botanist who was one of the founders of virology and environmental microbiology. He is credited with the discovery of viruses, which he called "contagium vivum fluidum".
Life
Early life and education
Born in Amsterdam, Beijerinck studied at the Technical School of Delft, where he was awarded a degree in biology in 1872. He obtained his Doctor of Science degree from the University of Leiden in 1877.
At the time, Delft, then a Polytechnic, did not have the right to confer doctorates, so Leiden did this for them. He became a teacher in microbiology at the Agricultural School in Wageningen (now Wageningen University) and later at the Polytechnische Hogeschool Delft (Delft Polytechnic, currently Delft University of Technology) (from 1895). He established the Delft School of Microbiology. His studies of agricultural and industrial microbiology yielded fundamental discoveries in the field of biology. His achievements have been perhaps unfairly overshadowed by those of his contemporaries, Robert Koch and Louis Pasteur, because unlike them, Beijerinck never studied human disease.
In 1877, he wrote his first notable research paper, discussing plant galls. The paper later became the basis for his doctoral dissertation.
In 1885 he became a member of the Royal Netherlands Academy of Art
|
https://en.wikipedia.org/wiki/Quark%20epoch
|
In physical cosmology, the quark epoch was the period in the evolution of the early universe when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. The quark epoch began approximately 10⁻¹² seconds after the Big Bang, when the preceding electroweak epoch ended as the electroweak interaction separated into the weak interaction and electromagnetism. During the quark epoch, the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10⁻⁶ seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. The following period, when quarks became confined within hadrons, is known as the hadron epoch.
See also
Timeline of the early universe
Chronology of the universe
Cosmology
|
https://en.wikipedia.org/wiki/N-ary%20group
|
In mathematics, and in particular universal algebra, the concept of an n-ary group (also called n-group or multiary group) is a generalization of the concept of a group to a set G with an n-ary operation instead of a binary operation. By an n-ary operation is meant any map f: Gⁿ → G from the n-th Cartesian power of G to G. The axioms for an n-ary group are defined in such a way that they reduce to those of a group in the case n = 2. The earliest work on these structures was done in 1904 by Kasner and in 1928 by Dörnte; the first systematic account of (what were then called) polyadic groups was given in 1940 by Emil Leon Post in a famous 143-page paper in the Transactions of the American Mathematical Society.
Axioms
Associativity
The easiest axiom to generalize is the associative law. Ternary associativity is the polynomial identity (abc)de = a(bcd)e = ab(cde), i.e. the equality of the three possible bracketings of the string abcde in which any three consecutive symbols are bracketed. (Here it is understood that the equations hold for all choices of elements a, b, c, d, e in G.) In general, n-ary associativity is the equality of the n possible bracketings of a string consisting of 2n − 1 distinct symbols with any n consecutive symbols bracketed. A set G which is closed under an associative n-ary operation is called an n-ary semigroup. A set G which is closed under any (not necessarily associative) n-ary operation is called an n-ary groupoid.
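As a small sanity check, ternary associativity can be verified exhaustively for a concrete example. The sketch below uses the ternary operation induced by addition modulo 5 (an illustrative choice, not from the original text) and checks all three bracketings over every 5-tuple.

```python
from itertools import product

m = 5

def f(a, b, c):
    # Ternary operation derived from the group (Z_5, +); illustrative example.
    return (a + b + c) % m

# Ternary associativity: f(f(a,b,c), d, e) == f(a, f(b,c,d), e) == f(a, b, f(c,d,e))
for a, b, c, d, e in product(range(m), repeat=5):
    x = f(f(a, b, c), d, e)
    y = f(a, f(b, c, d), e)
    z = f(a, b, f(c, d, e))
    assert x == y == z
print("ternary associativity holds for (a + b + c) mod", m)
```

Any ternary operation built this way from an associative binary operation is automatically ternary-associative; the interest of the general theory is in n-ary operations that do not arise from a binary one.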
Inverses / unique solutions
The inverse axiom is generalized as follows: in the case of binary operations the existence of an inverse means that ax = b has a unique solution for x, and likewise xa = b has a unique solution. In the ternary case we generalize this to xab = c, axb = c, and abx = c each having unique solutions, and the n-ary case follows a similar pattern of existence of unique solutions, and we get an n-ary quasigroup.
Definition of n-ary group
An n-ary group is an n-ary semigroup which is also an n-ary quasigroup.
Identity / neutral elements
In the case, there can be zero or one identity elements: the empty set is a
|
https://en.wikipedia.org/wiki/Alabama%20rot
|
Alabama rot or cutaneous and renal glomerular vasculopathy (CRGV) is an often fatal condition in dogs. It was first identified in the US in the 1980s in greyhounds. The initial symptoms are skin lesions on the legs, chest and abdomen followed by renal involvement.
In November 2012 the first cases were suspected in the UK. In January 2014, the outbreak in England was identified as having the same or similar histological and clinical findings as Alabama rot, though it could not be classified as Alabama rot because the histological results from the UK lacked the relation to E. coli that was present in all the cases in the US; a wide range of breeds were affected. The suspected disease has been possibly identified across England and Wales, with a case being reported as far north as North Yorkshire in March 2015. A map posted online shows confirmed (with post-mortem) and unconfirmed (without post-mortem) cases of CRGV since December 2012 in the United Kingdom. In May 2017 it was reported that 98 suspected deaths from the disease had occurred in the UK, including 15 in 2017.
Signs and symptoms
The disease is characterized by cutaneous and sometimes renal changes with the latter frequently being ultimately fatal.
Common symptoms of CRGV include, but are not limited to:
Cutaneous lesions involving erythema, erosion, ulceration occurring mainly on extremities such as distal limbs, muzzle and ventrum
Pyrexia (fever)
Lethargy or malaise
Anorexia
Vomiting or retching
Causes
Some veterinary experts theorize the disease is caused by a parasite, while others believe it is bacterial. It is more widely believed that Alabama rot is caused by toxins produced by E. coli but, as there has been no presence of E. coli in histological examination in UK cases, the disease is described there as suspected CRGV rather than Alabama rot per se. Because the exact cause has not been found, developing a vaccine is not possible. The cause of Alabama rot in the UK is under study as of
|
https://en.wikipedia.org/wiki/Bentazon
|
Bentazon (Bentazone, Basagran, Herbatox, Leader, Laddock) is a chemical manufactured by BASF Chemicals for use in herbicides. It is categorized under the thiadiazine group of chemicals. Sodium bentazon is available commercially and appears slightly brown in colour.
Usage
Bentazon is a selective herbicide as it only damages plants unable to metabolize the chemical. It is considered safe for use on alfalfa, beans (with the exception of garbanzo beans), maize, peanuts, peas (with the exception of blackeyed peas), pepper, peppermint, rice, sorghum, soybeans and spearmint, as well as lawns and turf. Bentazon is usually applied aerially or through contact spraying on food crops to control the spread of weeds occurring amongst food crops. Herbicides containing bentazon should be kept away from high heat as they will release toxic sulfur and nitrogen fumes.
Bentazon is currently registered for use in the United States in accordance with requirements set forth by the United States Environmental Protection Agency. However as of September 2010, the herbicides Basagran M60, Basagran DF, Basagran AG, Prompt 5L and Laddock 5L are currently under review for pending requests for voluntary registration cancellation.
Water and ground contamination
In general, bentazon is quickly metabolized and degraded by both plants and animals. However, soil leaching and runoff is a major concern in terms of water contamination. In 1995 the Environmental Protection Agency (EPA) stated that levels of bentazon in both ground water and surface water "exceed levels of concern". Despite the establishment of a 20 parts per billion Health Advisory Level there is no requirement to measure for bentazon in water supplies as the Safe Drinking Water Act does not regulate bentazon. The United States EPA found bentazon in 64 out of 200 wells in California - the highest number of detections in their 1995 study. This prompted the State of California to review existing toxicology studies and establish a "Publi
|
https://en.wikipedia.org/wiki/Bandwidth-limited%20pulse
|
A bandwidth-limited pulse (also known as Fourier-transform-limited pulse, or more commonly, transform-limited pulse) is a pulse of a wave that has the minimum possible duration for a given spectral bandwidth. Bandwidth-limited pulses have a constant phase across all frequencies making up the pulse. Optical pulses of this type can be generated by mode-locked lasers.
Any waveform can be disassembled into its spectral components by Fourier analysis or Fourier transformation. The duration of a pulse is thereby determined by its complex spectral components, which include not just their relative intensities, but also the relative positions (spectral phase) of these spectral components. For different pulse shapes, the minimum duration-bandwidth product is different. The duration-bandwidth product is minimal for zero phase-modulation. For example, sech² pulses have a minimum duration-bandwidth product of 0.315 while Gaussian pulses have a minimum value of 0.441.
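The Gaussian figure can be checked numerically: sample a chirp-free Gaussian field, take its Fourier transform, and measure the intensity FWHM in both domains. The grid sizes below are arbitrary choices; the analytic minimum for a Gaussian is 2 ln 2 / π ≈ 0.441.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled curve, edge-interpolated."""
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

n, T = 2**16, 200.0                       # samples and window (arbitrary units)
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
field = np.exp(-t**2 / 2.0)               # Gaussian field, tau = 1, zero chirp

dt_fwhm = fwhm(t, np.abs(field)**2)       # intensity FWHM in time

spectrum = np.fft.fftshift(np.fft.fft(field))
nu = np.fft.fftshift(np.fft.fftfreq(n, d=T / n))
dnu_fwhm = fwhm(nu, np.abs(spectrum)**2)  # spectral intensity FWHM

tbp = dt_fwhm * dnu_fwhm                  # duration-bandwidth product
assert abs(tbp - 2 * np.log(2) / np.pi) < 0.01
```

Adding a quadratic spectral phase (chirp) to `field` before transforming leaves the spectrum unchanged but lengthens the pulse, pushing the product above this minimum.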
A bandwidth-limited pulse can only be kept together if the dispersion of the medium the wave is travelling through is zero; otherwise dispersion management is needed to revert the effects of unwanted spectral phase changes. For example, when an ultrashort pulse passes through a block of glass, the glass medium broadens the pulse due to group velocity dispersion.
Keeping pulses bandwidth-limited is necessary to compress information in time or to achieve high field densities, as with ultrashort pulses in modelocked lasers.
Further reading
Optics
Nonlinear optics
Laser science
|
https://en.wikipedia.org/wiki/Obturator%20veins
|
The obturator vein begins in the upper portion of the adductor region of the thigh and enters the pelvis through the upper part of the obturator foramen, in the obturator canal.
It runs backward and upward on the lateral wall of the pelvis below the obturator artery, and then passes between the ureter and the hypogastric artery, to end in the hypogastric vein.
It has an anterior and posterior branch (similar to obturator artery).
Additional images
|
https://en.wikipedia.org/wiki/SECG
|
In cryptography, the Standards for Efficient Cryptography Group (SECG) is an international consortium founded by Certicom in 1998. The group exists to develop commercial standards for efficient and interoperable cryptography based on elliptic curve cryptography (ECC).
Links and documents
SECG home page
SEC 1: Elliptic Curve Cryptography (Version 1.0 - Superseded by Version 2.0)
SEC 1: Elliptic Curve Cryptography (Version 2.0)
SEC 2: Recommended Elliptic Curve Domain Parameters (Version 1.0 - Superseded by Version 2.0)
SEC 2: Recommended Elliptic Curve Domain Parameters (Version 2.0)
Certicom Patent Letter
See also
Elliptic curve cryptography
Cryptography organizations
Cryptography standards
|
https://en.wikipedia.org/wiki/Radwin
|
Radwin is a wireless broadband hardware manufacturing company headquartered in Tel Aviv, Israel. It specializes in wireless telecoms systems and manufactures products used by telecoms carriers, city and town councils, remote communities, ISPs, WISPs, and private networks. It also provides hardware for moving applications such as metro systems, bus networks, ferries, and airports, as well as vehicles such as patrol vehicles, manned and unmanned heavy machinery in mines, and ports. The hardware is used for applications, including mobile and IP backhaul, home and enterprise wireless broadband access, private network connectivity, and video surveillance transmission. As part of the Smart City initiative in India by Prime Minister Narendra Modi, Radwin entered a partnership with Avaya in 2016.
Radwin products are used in more than 150 countries, with roughly 100,000 units in total deployed. The company is headquartered in Tel Aviv, Israel, with regional offices around the world, in Brazil, El Salvador, China, Colombia, Poland, India, Mexico, Peru, Philippines, Singapore, South Africa, Russia, Spain, Thailand, the United Kingdom and United States.
History
The company was founded in 1997 by Sharon Sher. During his military service obligation, he was assigned to an R&D unit where he worked on projects involving telecommunication systems and wireless communications. After receiving a degree in mathematics and physics from the Hebrew University of Jerusalem, and a master's degree in electronic engineering from Tel Aviv University, he founded Radwin in 1997 as a spin-off of RAD Data Communications. The first products were point-to-point radios.
By 2005, the company had sold its first 10,000 radios, and its products were chosen for one of Asia's largest WiFi backhaul projects, with more than one thousand wireless links. Radwin was selected by Indian Railways for train-to-track connectivity, and in the same year, the company opened an office in India.
After the 2004 tsunami, Rad
|
https://en.wikipedia.org/wiki/Kostka%20number
|
In mathematics, the Kostka number Kλμ (depending on two integer partitions λ and μ) is a non-negative integer that is equal to the number of semistandard Young tableaux of shape λ and weight μ. They were introduced by the mathematician Carl Kostka in his study of symmetric functions.
For example, if λ = (3, 2) and μ = (1, 1, 2, 1), the Kostka number Kλμ counts the number of ways to fill a left-aligned collection of boxes with 3 in the first row and 2 in the second row with 1 copy of the number 1, 1 copy of the number 2, 2 copies of the number 3 and 1 copy of the number 4 such that the entries increase along columns and do not decrease along rows. There are three such tableaux, so Kλμ = 3.
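Kostka numbers for small partitions can be computed by brute-force enumeration of semistandard Young tableaux. The sketch below (an illustrative backtracking search, not an efficient algorithm) fills cells in row-major order, enforcing weakly increasing rows and strictly increasing columns.

```python
def kostka(shape, weight):
    """Count semistandard Young tableaux of the given shape and weight
    by backtracking over cells in row-major order (brute-force sketch)."""
    cells = [(r, c) for r, width in enumerate(shape) for c in range(width)]
    tableau = {}
    remaining = list(weight)   # copies of each entry value still unplaced

    def fill(i):
        if i == len(cells):
            return 1
        r, c = cells[i]
        lo = 1
        if c > 0:                                 # rows weakly increase
            lo = max(lo, tableau[(r, c - 1)])
        if r > 0:                                 # columns strictly increase
            lo = max(lo, tableau[(r - 1, c)] + 1)
        count = 0
        for v in range(lo, len(weight) + 1):
            if remaining[v - 1]:
                remaining[v - 1] -= 1
                tableau[(r, c)] = v
                count += fill(i + 1)
                remaining[v - 1] += 1
        return count

    return fill(0)

# Shape (3, 2) and weight (1, 1, 2, 1) give three tableaux.
assert kostka((3, 2), (1, 1, 2, 1)) == 3
# K_{lambda,lambda} = 1 for any partition lambda.
assert kostka((3, 2), (3, 2)) == 1
```

With weight (1, 1, ..., 1) this counts standard Young tableaux, matching the hook-length formula mentioned below.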
Examples and special cases
For any partition λ, the Kostka number Kλλ is equal to 1: the unique way to fill the Young diagram of shape λ with λ₁ copies of 1, λ₂ copies of 2, and so on, so that the resulting tableau is weakly increasing along rows and strictly increasing along columns, is the one in which all the 1s are placed in the first row, all the 2s are placed in the second row, and so on. (This tableau is sometimes called the Yamanouchi tableau of shape λ.)
The Kostka number Kλμ is positive (i.e., there exist semistandard Young tableaux of shape λ and weight μ) if and only if λ and μ are both partitions of the same integer and λ is larger than μ in dominance order.
In general, there are no nice formulas known for the Kostka numbers. However, some special cases are known. For example, if μ is the partition whose parts are all 1 then a semistandard Young tableau of weight μ is a standard Young tableau; the number of standard Young tableaux of a given shape is given by the hook-length formula.
Properties
An important simple property of Kostka numbers is that Kλμ does not depend on the order of entries of μ. For example, Kλ(1,1,2) = Kλ(1,2,1). This is not immediately obvious from the definition but can be shown by establishing a bijection between the sets of semistandard Young tableaux of shape λ and weights μ and μ′, where μ and μ′ differ only by swapping two
|
https://en.wikipedia.org/wiki/Landscape%20limnology
|
Landscape limnology is the spatially explicit study of lakes, streams, and wetlands as they interact with freshwater, terrestrial, and human landscapes to determine the effects of pattern on ecosystem processes across temporal and spatial scales. Limnology is the study of inland water bodies inclusive of rivers, lakes, and wetlands; landscape limnology seeks to integrate all of these ecosystem types.
The terrestrial component represents spatial hierarchies of landscape features that influence which materials, whether solutes or organisms, are transported to aquatic systems; aquatic connections represent how these materials are transported; and human activities reflect features that influence how these materials are transported as well as their quantity and temporal dynamics.
Foundation
The core principles or themes of landscape ecology provide the foundation for landscape limnology. These ideas can be synthesized into a set of four landscape ecology themes that are broadly applicable to any aquatic ecosystem type, and that consider the unique features of such ecosystems.
A landscape limnology framework begins with the premise of Thienemann (1925) and Wiens (2002): freshwater ecosystems can be considered patches. As such, the location of these patches and their placement relative to other elements of the landscape is important to the ecosystems and their processes. Therefore, the four main themes of landscape limnology are:
Patch characteristics: The characteristics of a freshwater ecosystem include its physical morphometry, chemical, and biological features, as well as its boundaries. These boundaries are often more easily defined for aquatic ecosystems than for terrestrial ecosystems (e.g., shoreline, riparian zones, and emergent vegetation zone) and are often a focal-point for important ecosystem processes linking terrestrial and aquatic components.
Patch context: The freshwater ecosystem is embedded in a complex terrestrial mosaic (e.g., soils, geology, and
|
https://en.wikipedia.org/wiki/Prix%20Guzman
|
The Prix Pierre Guzman (Pierre Guzman Prize) was the name given to two prizes, one astronomical and one medical. Both were established by the will of Anne Emilie Clara Goguet (died June 30, 1891), wife of Marc Guzman, and named after her son Pierre Guzman.
Astronomical
This prize was a sum of 100,000 francs, to be given to a person who succeeded in communicating with a celestial body, other than Mars, and receiving a response. Until this occurred, the will also allowed for the accumulated interest on the 100,000 francs to be given, every five years, to a person who had made significant progress in astronomy. The prize was to be awarded by the French Académie des sciences. Pierre Guzman had been interested in the work of Camille Flammarion, the author of La planète Mars et ses conditions d'habitabilité (The Planet Mars and Its Conditions of Habitability, 1892). Communication with Mars was specifically exempted as many people believed that Mars was inhabited at the time and communication with that planet would not be a difficult enough challenge. The prize was later announced in 1900 by the French Académie des sciences.
The five-yearly prize of interest was awarded, starting in 1905, as follows:
In Dec. 1905, to Henri Joseph Anastase Perrotin. A portion of the prize was also given to Louis Fabry.
In Dec. 1910, to Maurice Loewy.
Nikola Tesla claimed in 1937 that he should receive the prize for "his discovery relating to the interstellar transmission of energy." The prize was awarded to the crew of Apollo 11 in 1969.
Medical
This prize was a sum of 50,000 francs, to be awarded by the French Académie de médecine, to be given to a person who succeeded in developing an effective treatment for the most common forms of heart disease. Until this occurred, the will also allowed for the accumulated interest to be given yearly to someone who had made progress in heart disease.
The yearly prize of interest was awarded as follows:
In 1903, to Paul Bergougnan.
In
|
https://en.wikipedia.org/wiki/Feature%20connector
|
The feature connector was an internal connector found mostly in some older ISA, VESA Local Bus, and PCI graphics cards, but also on some early AGP ones. It was intended for use by devices which needed to exchange large amounts of data with the graphics card without hogging a computer system's CPU or data bus, such as TV tuner cards, video capture cards, MPEG video decoders (e.g. RealMagic), and first generation 3D graphic accelerator cards. Early examples include the IBM EGA video adapter.
Several standards existed for feature connectors, depending on the bus and graphics card type. Most of them were simply an 8, 16 or 32-bit wide internal connector, transferring data between the graphics card and another device, bypassing the system's CPU and memory completely.
Their speeds often far exceeded the speed of normal ISA or even early PCI buses, e.g. 40 MByte/s for a standard ISA-based SVGA and up to 150 MByte/s for a VESA-based or PCI-based one, while the standard 16-bit ISA bus ran at ~5.3 MByte/s and the VESA bus at up to 160 MByte/s bandwidth. Feature connector bandwidths were far beyond the capabilities of, e.g., a 386 or 486, and barely handled by an early Pentium.
Depending on the implementation, it could be uni- or bi-directional, and carry analog color information as well as data. Unlike analog overlay devices, however, a feature connector carried mainly data and essentially allowed an expansion card to access the graphics card's video RAM directly, although directing this data stream to the system's CPU and RAM was not always possible, limiting its usefulness mainly to display purposes.
Although its use rapidly declined after the introduction of the faster AGP internal bus, it was, at its time, the only feasible way to connect certain types of graphics-intensive devices to an average computing system without exceeding the available CPU power and memory bandwidth, and without the disadvantages and limitations of a purely analog overlay.
The idea of accessing a vi
|
https://en.wikipedia.org/wiki/KIF3C
|
Kinesin-like protein KIF3C is a protein that in humans is encoded by the KIF3C gene. It is part of the kinesin family of motor proteins.
|
https://en.wikipedia.org/wiki/Orbit%20%28mascot%29
|
Orbit is the name given to Major League Baseball's Houston Astros mascot, a lime-green alien wearing an Astros jersey with antennae extending into baseballs. Orbit was the team's official mascot from the 1990 season through the 1999 season; for 2000, Junction Jack was introduced as the team's mascot with the move from the Astrodome to then-Enron Field. Orbit returned on November 2, 2012, at the unveiling of the Astros' new look for their 2013 debut in the American League. The name Orbit pays homage to Houston's association with NASA and its nickname, Space City.
History
In April 1977, the Astros introduced their first mascot, Chester Charge. At that time there was only one other mascot in major league baseball, which was the San Diego Chicken. Chester Charge was a 45-pound costume of a cartoon Texas cavalry soldier on a horse. Chester appeared on the field at the beginning of each home game, during the seventh inning stretch and then ran around the bases at the conclusion of each win. At the blast of a bugle, the scoreboard would light up and the audience would yell, “Charge!” The first Chester Charge was played by Steve Ross who was then an 18-year-old Senior High School student.
Following a visit to then the AAA-Astros affiliate, the Tucson Toros in 1989, former team marketing Vice President Ted Haracz sought to bring the Toros' mascot, Tuffy to Houston to serve as the team's mascot. Although Tuffy was not promoted from Tucson, Hal Katzman, who performed as Tuffy was invited by the team to serve as Orbit for the 1990 season. The development of a team mascot for the 1990 season was viewed by the team as an important piece in its community outreach programs, specifically with children. John Sorrentino was the newly appointed Director of Community Relations. Sorrentino became instrumental in the design phase of the mascot costume as well as the design of the community outreach program. Sorrentino was able to form a partnership with the FBI and its direc
|
https://en.wikipedia.org/wiki/WIN-35428
|
(–)-2-β-Carbomethoxy-3-β-(4-fluorophenyl)tropane (β-CFT, WIN 35,428) is a stimulant drug used in scientific research. CFT is a phenyltropane-based dopamine reuptake inhibitor and is structurally derived from cocaine. It is around 3 to 10 times more potent than cocaine and lasts around 7 times longer, based on animal studies. While the naphthalenedisulfonate salt is the most commonly used form in scientific research due to its high solubility in water, the free base and hydrochloride salts are known compounds and can also be produced. A tartrate salt has also been reported.
Uses
CFT was first reported by Clarke and co-workers in 1973. This drug is known to function as a "positive reinforcer" (although it is less likely to be self-administered by rhesus monkeys than cocaine). Tritiated CFT is frequently used to map binding of novel ligands to the DAT, although the drug also has some SERT affinity.
Radiolabelled forms of CFT have been used in humans and animals to map the distribution of dopamine transporters in the brain. CFT was found to be particularly useful for this application as a normal fluorine atom can be substituted with the radioactive isotope 18F which is widely used in Positron emission tomography. Another radioisotope-substituted analog [11C]WIN 35,428 (where the carbon atom of either the N-methyl group, or the methyl from the 2-carbomethoxy group of CFT, has been replaced with 11C) is now more commonly used for this application, as it is quicker and easier in practice to make radiolabelled CFT by methylating nor-CFT or 2-desmethyl-CFT than by reacting methylecgonidine with parafluorophenylmagnesium bromide, and also avoids the requirement for a licence to work with the restricted precursor ecgonine.
CFT is about as addictive as cocaine in animal studies, but is taken less often due to its longer duration of action. Potentially this could make it a suitable drug to be used as a substitute for cocaine, in a similar manner to how methadone is used a
|
https://en.wikipedia.org/wiki/NELL2
|
Protein kinase C-binding protein NELL2 is an enzyme that in humans is encoded by the NELL2 gene.
This gene encodes a cytoplasmic protein that contains epidermal growth factor (EGF)-like repeats. The encoded heterotrimeric protein may be involved in cell growth regulation and differentiation. A similar protein in rodents is involved in craniosynostosis. An alternative splice variant has been described, but its full-length sequence has not been determined.
|
https://en.wikipedia.org/wiki/Active%20matrix
|
Active matrix is a type of addressing scheme used in flat panel displays. In this method of switching individual elements (pixels), each pixel is attached to a transistor and capacitor that actively maintain the pixel state while other pixels are being addressed, in contrast with the older passive matrix technology, in which each pixel must maintain its state passively, without being driven by circuitry.
Active matrix technology was invented by Bernard J. Lechner at RCA, using MOSFETs (metal–oxide–semiconductor field-effect transistors). Active matrix technology was first demonstrated as a feasible device using thin-film transistors (TFTs) by T. Peter Brody, Fang Chen Luo and their team at the Thin-Film Devices department of Westinghouse Electric Corporation in 1974, and the term was introduced into the literature in 1975.
Given an m × n matrix, the number of connectors needed to address the display is m + n (just like in passive matrix technology). Each pixel is attached to a switch-device, which actively maintains the pixel state while other pixels are being addressed, also preventing crosstalk from inadvertently changing the state of an unaddressed pixel. The most common switching devices use TFTs, i.e. a FET based on either the cheaper non-crystalline thin-film silicon (a-Si), polycrystalline silicon (poly-Si), or CdSe semiconductor material.
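The row-at-a-time addressing described above can be sketched in a short simulation. This is an illustrative model only (the panel, frame, and refresh function names are hypothetical, not from any real display driver): an m × n panel is driven through m row (gate) lines and n column (data) lines, and each pixel latches the column value only while its row is selected, holding it otherwise.

```python
# Illustrative sketch of active-matrix addressing: one row (gate) line is
# asserted at a time; every pixel in that row latches its column (data)
# value, as a real pixel's storage capacitor would, and holds it until
# the next refresh. Unselected rows are untouched -> no crosstalk.

def refresh(panel, frame):
    """Write a full frame into the panel, one row per scan step."""
    m, n = len(frame), len(frame[0])
    for row in range(m):                    # select one row line at a time
        for col in range(n):                # drive all n column lines
            panel[row][col] = frame[row][col]   # TFT conducts: value is latched
    return panel

m, n = 3, 4                                 # a 3 x 4 panel needs m + n = 7 connectors
panel = [[0] * n for _ in range(m)]
frame = [[1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 1, 0, 0]]
refresh(panel, frame)
print(panel == frame)                       # every pixel holds its addressed state
```

The key property modelled here is that the number of drive connections grows as m + n rather than m × n, while each pixel's stored state persists between scans.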
Another variant uses diodes or resistors as the switching element, but neither diodes (e.g. metal–insulator–metal diodes) nor non-linear voltage-dependent resistors (varistors) are currently used, as the latter have not proven economical compared to TFTs.
The Macintosh Portable (1989) was perhaps the first consumer laptop to employ an active matrix panel. Since the decline of cathode ray tubes as a consumer display technology, virtually all TVs, computer monitors and smartphone screens that use LCD or OLED technology employ active matrix technology.
See also
AMLCD
AMOLED
QLED
TFT-LCD
Passive matrix addressing
Pixel geometry
Compariso
|